It's even more amazing, perhaps, that our existence is quietly being transformed by a technology that many of us barely understand, if at all — something so complex that even scientists have a tricky time explaining it.
"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University. "I say 'thought,' because nobody is really quite sure what intelligence is."
Honavar describes two main categories of intelligence. There's narrow intelligence, which is achieving competence in a narrowly defined domain, such as analyzing images from X-rays and MRI scans in radiology. General intelligence, in contrast, is a more human-like ability to learn about anything and to talk about it. "A machine might be good at some diagnoses in radiology, but if you ask it about baseball, it would be clueless," Honavar explains. Humans' intellectual versatility "is still beyond the reach of AI at this point."
According to Honavar, there are two key pieces to AI. One is the engineering part — that is, building tools that utilize intelligence in some way. The other is the science of intelligence, or rather, how to enable a machine to come up with a result comparable to what a human brain would come up with, even if the machine achieves it through a very different process. To use an analogy, "birds fly and airplanes fly, but they fly in completely different ways," Honavar says. "Even so, they both make use of aerodynamics and physics. In the same way, artificial intelligence is based upon the notion that there are general principles about how intelligent systems behave."
AI is "basically the results of our attempting to understand and emulate the way that the brain works and the application of this to giving brain-like functions to otherwise autonomous systems (e.g., drones, robots and agents)," Kurt Cagle, a writer, data scientist and futurist who's the founder of consulting firm Semantical, writes in an email. He's also editor of The Cagle Report, a daily information technology newsletter.
And while humans don't really think like computers, which utilize circuits, semiconductors and magnetic media instead of biological cells to store information, there are some intriguing parallels. "One thing we're beginning to discover is that graph networks are really interesting when you start talking about billions of nodes, and the brain is essentially a graph network, albeit one where you can control the strengths of processes by varying the resistance of neurons before a capacitive spark fires," Cagle explains. "A single neuron by itself gives you a very limited amount of information, but fire enough neurons of varying strengths together, and you end up with a pattern that gets fired only in response to certain kinds of stimuli, typically modulated electrical signals through the DSPs [that is, digital signal processors] that we call our retina and cochlea."
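To make the neuron idea a little more concrete, here's a toy sketch in Python of a single threshold unit: weighted inputs are summed, and the unit "fires" only when the combined signal crosses a threshold. The inputs, weights and threshold are invented for illustration and don't come from Cagle or any real model.

```python
# Toy "artificial neuron": sum the weighted inputs, fire if the total
# crosses a threshold. All numbers here are made up for illustration.
def neuron(inputs, weights, threshold=1.0):
    signal = sum(i * w for i, w in zip(inputs, weights))
    return 1 if signal >= threshold else 0

# A strong enough combination of stimuli makes the unit fire...
print(neuron([1, 1, 0], [0.6, 0.7, 0.9]))  # 1
# ...while a weaker one leaves it quiet.
print(neuron([1, 0, 0], [0.6, 0.7, 0.9]))  # 0
```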
"Most applications of AI have been in domains with large amounts of data," Honavar says. To use the radiology example again, the existence of large databases of X-rays and MRI scans that have been evaluated by human radiologists, makes it possible to train a machine to emulate that activity.
AI works by combining large amounts of data with intelligent algorithms — series of instructions — that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.
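As a rough illustration of that pattern, here is a minimal sketch in Python using scikit-learn: labeled examples go in, and the algorithm comes out with a rule it can apply to new cases. The "scan" measurements and labels are invented for this example; real medical AI systems are trained on far richer data and validated much more carefully.

```python
# Minimal supervised-learning sketch: the model isn't told what "suspicious"
# looks like; it infers a rule from examples that humans have labeled.
# The feature values and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row stands in for a scan, summarized by two made-up numbers:
# [lesion_size_mm, tissue_density]. Labels mimic radiologists' judgments.
scans = [
    [2.0, 0.30],
    [1.5, 0.25],
    [3.0, 0.35],
    [9.0, 0.80],
    [8.5, 0.75],
    [7.5, 0.70],
]
labels = ["benign", "benign", "benign", "suspicious", "suspicious", "suspicious"]

model = DecisionTreeClassifier().fit(scans, labels)

# The trained model now emulates the labeling behavior it saw in the data.
print(model.predict([[8.0, 0.72]]))  # ['suspicious']
```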
In simulating the way a brain works, AI draws on a bunch of different subfields, as the SAS primer notes.
- Machine learning automates analytical model building, to find hidden insights in data without being programmed to look for something in particular or draw a certain conclusion.
- Neural networks imitate the brain's array of interconnected neurons, and relay information between various units to find connections and derive meaning from data (a toy example follows this list).
- Deep learning utilizes really big neural networks and a lot of computing power to find complex patterns in data, for applications such as image and speech recognition.
- Cognitive computing is about creating a "natural, human-like interaction," as SAS puts it, including using the ability to interpret speech and respond to it.
- Computer vision employs pattern recognition and deep learning to understand the content of pictures and videos, and to enable machines to use real-time images to make sense of what's around them.
- Natural language processing involves analyzing and understanding human language and responding to it.
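To tie a few of those ideas together, here is a toy neural network written from scratch with NumPy. A small layer of interconnected units relays signals forward, and a crude training loop nudges the connection weights until the network picks up a simple pattern (the classic XOR problem, which no single unit can solve on its own). It's a sketch of the principle only, nothing like the very large networks and compute budgets that deep learning actually uses.

```python
# Toy neural network: units pass signals forward through weighted
# connections; errors flowing backward adjust those weights.
# Illustrative only; real deep-learning systems are vastly larger.
import numpy as np

rng = np.random.default_rng(0)

# The XOR pattern: output 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, then a single output unit.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

for _ in range(20000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: shrink the prediction error a little each round.
    out_err = (output - y) * output * (1 - output)
    hid_err = (out_err @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ out_err
    b2 -= out_err.sum(axis=0, keepdims=True)
    W1 -= X.T @ hid_err
    b1 -= hid_err.sum(axis=0, keepdims=True)

# After training, the outputs should sit close to the XOR targets 0, 1, 1, 0.
print(output.round(2))
```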