How Does AI Work?

By: Patrick J. Kiger
Artificial intelligence (AI) is an interdisciplinary science concerned with building smart machines capable of performing tasks that typically require human thought. The implications will change virtually every aspect of our world.

How does AI work? It helps to start at the beginning. Back in October 1950, British techno-visionary Alan Turing published an article called "Computing Machinery and Intelligence" in the journal Mind that raised what at the time must have seemed to many like a science-fiction fantasy.

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Turing asked.


Turing thought that they could. Moreover, he believed, it was possible to create software for a digital computer that enabled it to observe its environment and to learn new things, from playing chess to understanding and speaking a human language. And he thought machines eventually could develop the ability to do that on their own, without human guidance. "We may hope that machines will eventually compete with men in all purely intellectual fields," he predicted.

Nearly 70 years later, Turing's seemingly outlandish vision has become a reality, thanks to monumental advancements in the field of computer science and AI research. Artificial intelligence, commonly referred to as AI, gives machines the ability to learn from experience and perform cognitive tasks, the sort of stuff that was once thought to be reserved for human intelligence.

AI is rapidly spreading throughout civilization, where it holds the promise of doing everything from enabling self-driving cars to navigate the streets to making more accurate hurricane forecasts. On an everyday level, AI figures out what ads to show you on the web and powers those friendly chatbots that pop up when you visit an e-commerce website to answer your questions and provide customer service. And AI-powered personal assistants in voice-activated smart home devices perform myriad tasks, from controlling our TVs and doorbells to answering trivia questions and helping us find our favorite songs.

But we're just getting started with it. As AI technology grows more sophisticated and capable, it's expected to massively boost the world's economy, creating about $13 trillion worth of additional activity by 2030, according to a McKinsey Global Institute forecast.

"AI is still early in adoption, but adoption is accelerating and it is being used across all industries," says Sarah Gates, an analytics platform strategist at SAS, a global software and services firm that focuses upon turning data into intelligence for clients.


How Artificial Intelligence Works

Perhaps most amazing of all, our existence is quietly being transformed by deep learning algorithms that many of us barely understand, if at all — something so complex that even scientists have a tricky time explaining it.

"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University. "I say 'thought,' because nobody is really quite sure what intelligence is."


Honavar describes two main categories of AI. There's narrow AI, which means achieving competence in a narrowly defined domain, such as analyzing images from X-rays and MRI scans in radiology. Artificial general intelligence, in contrast, describes much more human-like thinking processes, like the ability to learn about anything and to talk about it. "A machine might be good at some diagnoses in radiology, but if you ask it about baseball, it would be clueless," Honavar explains. Humans' intellectual versatility "is still beyond the reach of AI at this point."

According to Honavar, there are two key pieces to AI. One of them is the engineering part — that is, building a computer program and computer systems that utilize intelligence in some way. The other is the science of intelligence, or rather, how to enable a machine to come up with a result comparable to what a human brain would come up with, even if the machine achieves it through a very different process. To use an analogy, "birds fly and airplanes fly, but they fly in completely different ways," Honavar says. "Even so, they both make use of aerodynamics and physics. In the same way, artificial intelligence is based upon the notion that there are general principles about how intelligent systems behave."

AI is "basically the results of our attempting to understand and emulate the way that the brain works and the application of this to giving brain-like functions to otherwise autonomous systems (e.g., drones, robots and agents)," Kurt Cagle, a writer, data scientist and futurist who's the founder of consulting firm Semantical, writes in an email. He's also editor of The Cagle Report, a daily information technology newsletter.

And while humans don't really think like computers, which utilize circuits, semiconductors and magnetic media instead of biological cells to store information, there are some intriguing parallels. "One thing we're beginning to discover is that graph networks are really interesting when you start talking about billions of nodes, and the brain is essentially a graph network, albeit one where you can control the strengths of processes by varying the resistance of neurons before a capacitive spark fires," Cagle explains. "A single neuron by itself gives you a very limited amount of information, but fire enough neurons of varying strengths together, and you end up with a pattern that gets fired only in response to certain kinds of stimuli, typically modulated electrical signals through the DSPs [that is, digital signal processors] that we call our retina and cochlea."
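
To ground that neuron analogy, here is a small, purely illustrative Python sketch (the input signals, weights and threshold are invented for this example, not taken from Cagle's description). An artificial "neuron" adds up its weighted inputs and "fires" only when the total crosses a threshold, so only certain patterns of stimuli trigger it.

```python
# A toy artificial neuron: it fires only when the weighted sum of its inputs
# reaches a threshold. The weights and threshold are arbitrary example values.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation >= threshold

weights = [0.9, 0.2, 0.4]   # how strongly each input signal counts
threshold = 1.0

print(neuron_fires([1, 0, 0], weights, threshold))  # False: 0.9 is below 1.0
print(neuron_fires([1, 0, 1], weights, threshold))  # True: 0.9 + 0.4 = 1.3
```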

"Most applications of AI have been in domains with large amounts of data," Honavar says. To use the radiology example again, the existence of large databases of X-rays and MRI scans that have been evaluated by human radiologists, makes it possible to train a machine to emulate that activity.

AI systems work by combining large amounts of data with intelligent algorithms — series of instructions — that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.
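
As a minimal illustration of that data-plus-algorithm idea, the sketch below trains a simple classifier on a handful of made-up weather readings and then applies the learned pattern to new cases. It uses the widely available scikit-learn library; the tiny dataset and its features are invented purely for this example.

```python
# A minimal sketch of "learning from patterns in data": a classifier is shown
# labeled examples, learns a pattern, and is then applied to new examples.

from sklearn.linear_model import LogisticRegression

# Each example: [hours of cloud cover, humidity in percent]; label 1 = it rained.
X = [[1, 40], [2, 45], [8, 90], [9, 85], [3, 50], [10, 95]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # the algorithm learns a pattern from the labeled data

print(model.predict([[7, 80]]))  # likely [1]: resembles the rainy examples
print(model.predict([[2, 35]]))  # likely [0]: resembles the dry examples
```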

In simulating the way a brain works, AI utilizes a bunch of different subfields, as the SAS primer notes.

  • Machine learning automates analytical model building, to find hidden insights in data without being programmed to look for something in particular or draw a certain conclusion.
  • Artificial neural networks imitate the brain's array of interconnected neurons, and relay information between various units to find connections and derive meaning from data (a minimal training sketch follows this list).
  • Deep learning utilizes really big neural networks and a lot of computing power to find complex patterns in data, for applications such as image and speech recognition.
  • Cognitive computing is about creating a "natural, human-like interaction," as SAS puts it, including the ability to interpret speech and respond to it.
  • Computer vision employs pattern recognition and deep learning to understand the content of pictures and videos, and to enable machines to use real-time images to make sense of what's around them.
  • Natural language processing involves analyzing and understanding human language and responding to it.
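
To make a few of those items concrete (machine learning, neural networks and the gradient-descent training that powers deep learning), here is a compact, purely illustrative sketch in Python using NumPy. The architecture, learning rate and the XOR task are choices made for this example, not anything specified by the SAS primer.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR function: the output is 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny network: 2 inputs -> 4 hidden units -> 1 output, with biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: adjust every weight slightly to shrink the error
    # (gradient descent, the same principle that trains deep networks).
    d_out = (output - y) * output * (1 - output)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

# After training, the predictions typically approach [[0], [1], [1], [0]],
# though the exact numbers depend on the random starting weights.
print(np.round(output, 2))
```

Real deep learning systems follow the same basic loop, just with vastly more weights, far larger datasets and specialized hardware.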


Decades of Research

The concept of AI dates back to the 1940s, and the term "artificial intelligence" was introduced at a 1956 conference at Dartmouth College. Over the next two decades, researchers developed programs that played games and did simple pattern recognition and machine learning. Cornell University scientist Frank Rosenblatt developed the Perceptron, the first artificial neural network, which ran on a 5-ton (4.5-metric ton), room-sized IBM computer that was fed punch cards.

But it wasn't until the mid-1980s that a second wave of more complex, deep neural networks was developed to tackle higher-level tasks, according to Honavar. In the early 1990s, another breakthrough enabled AI to generalize beyond the training experience.


In the 1990s and 2000s, other technological innovations — the web and increasingly powerful computers — helped accelerate the development of AI. "With the advent of the web, large amounts of data became available in digital form," Honavar says. "Genome sequencing and other projects started generating massive amounts of training data, and advances in computing made it possible to store and access this data. We could train the machines to do more complex tasks. You couldn't have had a deep learning model 30 years ago, because you didn't have the data and the computing power."

AI and Robotics

AI systems are different from, but related to, robotics, in which machines sense their environment, perform calculations and do physical tasks either by themselves or under the direction of people, from factory work and cooking to landing on other planets. Honavar says that the two fields intersect in many ways.

"You can imagine robotics without much intelligence, purely mechanical devices like automated looms," Honavar says. "There are examples of robots that are not intelligent in a significant way." Conversely, there's robotics where intelligence is an integral part, such as guiding an autonomous vehicle around streets full of human-driven cars and pedestrians.


"It's a reasonable argument that to realize general intelligence, you would need robotics to some degree, because interaction with the world, to some degree, is an important part of intelligence," according to Honavar. "To understand what it means to throw a ball, you have to be able to throw a ball."

AI technologies have quietly become so ubiquitous that they're already found in many consumer products.

"A huge number of devices that fall within the Internet of Things (IoT) space readily use some kind of self-reinforcing AI, albeit very specialized AI," Cagle says. "Cruise control was an early AI and is far more sophisticated when it works than most people realize. Noise dampening headphones. Anything that has a speech recognition capability, such as most contemporary television remotes. Social media filters. Spam filters. If you expand AI to cover machine learning, this would also include spell checkers, text-recommendation systems, really any recommendation system, washers and dryers, microwaves, dishwashers, really most home electronics produced after 2017, speakers, televisions, anti-lock braking systems, any electric vehicle, modern CCTV cameras. Most games use AI networks at many different levels."

AI tools can outperform humans in some narrow domains, just as "airplanes can fly longer distances, and carry more people than a bird could," Honavar says. AI, for example, is capable of processing millions of social media network interactions and gaining insights that can influence users' behavior — an ability that the AI expert worries may have "not so good consequences."

It's particularly good at making sense of massive amounts of information that would overwhelm a human brain. That capability enables internet companies, for example, to analyze the mountains of data that they collect about users and employ the insights in various ways to influence our behavior.


How AI Could Transform the Economy

Given AI's potential to do tasks that used to require humans, it's easy to fear that its spread could put most of us out of work. But some experts envision that while the combination of AI and robotics could eliminate some positions, it will create even more new jobs for tech-savvy workers.

"Those most at risk are those doing routine and repetitive tasks in retail, finance and manufacturing," Darrell West, a vice president and founding director of the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization, explains in an email. "But white-collar jobs in health care will also be affected and there will be an increase in job churn with people moving more frequently from job to job.


"New jobs will be created but many people will not have the skills needed for those positions. So the risk is a job mismatch that leaves people behind in the transition to a digital economy. Countries will have to invest more money in job retraining and workforce development as technology spreads. There will need to be lifelong learning so that people regularly can upgrade their job skills."

And instead of replacing human workers, AI may be used to enhance their intellectual capabilities. Inventor and futurist Ray Kurzweil has predicted that by the 2030s, AI will have achieved human levels of intelligence, and that it will be possible to have AI that goes inside the human brain to boost memory, turning users into human-machine hybrids. As Kurzweil has described it, "We're going to expand our minds and exemplify these artistic qualities that we value."


Frequently Answered Questions

What are the 4 types of AI?
There are four types of AI: reactive, limited memory, theory of mind, and self-aware. Reactive AI is the simplest form of AI, and it is what most people think of when they think of AI. Limited memory AI can remember past events and use them to make decisions. Theory of mind AI can understand the thoughts and emotions of others. Self-aware AI is aware of its own thoughts and emotions.
