It’s Stardate 47025.4, in the 24th century. Starfleet’s star android, Lt. Commander Data, has been enlisted by his renegade android “brother” Lore to join a rebellion against humankind — much to the consternation of Jean-Luc Picard, captain of the USS Enterprise. “The reign of biological life-forms is coming to an end,” Lore tells Picard. “You, Picard, and those like you, are obsolete.”
That’s Star Trek for you — so optimistic that it imagines machines won’t dethrone humans for at least another three centuries. But that’s fiction. In real life, the era of smart machines has already arrived. They haven’t completely taken over the world yet, but they’re off to a good start.
“Machine learning” — a subfield within the quest for artificial intelligence — has invaded numerous fields of human endeavor, from medical diagnosis to searching for new subatomic particles. Thanks to its most powerful incarnation — known as deep learning — machine learning’s repertoire of skills now includes recognizing speech, translating languages, identifying images, driving cars, designing new materials, and predicting trends in the stock market, among uses in many other arenas.
“Because computers can effortlessly sift through data at scales far beyond human capabilities, deep learning is not only about to transform modern society, but also about to revolutionize science — crossing major disciplines from particle physics and organic chemistry to biological research and biomedical applications,” computational neuroscientist Thomas Serre wrote in the 2019 Annual Review of Vision Science.
A proliferation of new papers on machine learning has flooded the scientific literature in recent years. Reviews of this new research have covered such topics as health care and epidemiology, materials science, simulations of molecular interactions, fluid mechanics, clinical psychology, economics, vision science, and drug discovery.
These reviews spotlight machine learning’s major accomplishments so far and foretell even more substantial achievements to come. But most such reviews also remark on intelligent machines’ limitations. Some impressive successes, for instance, reflect “shortcut” learning that gets the right answer without true understanding. And much so-called machine intelligence is narrowly focused on a specific task, without the flexible cognitive abilities possessed by people. A computer that can beat grandmasters at chess would be mediocre at poker, for example.
“In stark contrast with humans, most ‘learning’ in current-day artificial intelligence is not transferable between related tasks,” writes computer scientist Melanie Mitchell in her 2019 book Artificial Intelligence: A Guide for Thinking Humans.
“We humans tend to overestimate artificial intelligence advances and underestimate the complexity of our own intelligence,” Mitchell writes. Fears of superintelligent machines taking over the world are therefore misplaced, she suggests, citing comments by the economist and behavioral scientist Sendhil Mullainathan: “We should be afraid,” he wrote. “Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.”
Machine learning’s swift progress
Typically, machine learning relies on computing systems known as neural networks. Those networks emulate the human brain, with processing units based on the brain’s nerve cells, or neurons. In a traditional neural network, a layer of artificial neurons receives inputs and passes signals, through connections of adjustable strength, to neurons in another layer, where patterns in the input can be identified and reported to an output layer. By tuning those connection strengths, such an artificial neural network can “learn” how to classify input data as, say, an image of a cat.
In the last decade or so, the dominant machine learning strategy has relied on artificial neural networks with multiple layers, a method known as deep learning. A deep learning machine can detect patterns within patterns, enabling more precise classifications of input.
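The layered pattern-matching described above can be sketched in a few lines of plain Python. This is purely a toy illustration, not any real system: the weights below are invented by hand, whereas an actual network would learn them from training data.

```python
import math

def sigmoid(x):
    # Squash a number into the range (0, 1) -- a common neuron "activation"
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each artificial neuron sums its weighted inputs, then applies the
    # activation function to produce the signal it sends onward.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

# A tiny two-layer ("deep") network with hand-picked, illustrative weights.
# The hidden layer detects simple patterns; the output layer combines them.
hidden_weights = [[0.5, -0.6, 0.1], [-0.3, 0.8, 0.7]]
output_weights = [[1.2, -1.1]]

inputs = [0.9, 0.1, 0.4]          # e.g., three pixel intensities
hidden = layer(inputs, hidden_weights)
output = layer(hidden, output_weights)
print(output)  # a single score between 0 and 1 -- "how cat-like is this input?"
```

Stacking more such layers is what puts the “deep” in deep learning: each layer can detect patterns in the patterns found by the layer before it.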
In some systems, the learning is “supervised,” meaning the machine is trained on labeled data. With unsupervised learning, machines are trained on large datasets without being told what the input represents; the computer itself learns to define categories or behaviors. In another approach, called reinforcement learning, a machine learns to respond to input with actions that are “rewarded” (perhaps by adding numbers to a memory file) if they help achieve a goal, such as winning a game. Reinforcement learning demonstrated its power by producing the machine that beat the human champion in the game of Go.
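The reward-driven updates of reinforcement learning can be illustrated with a minimal Q-learning sketch. Q-learning is a standard algorithm in this family, not the specific Go-playing system (which combined deep networks with other techniques); the one-state, two-action “game” here is invented purely for illustration.

```python
import random

random.seed(0)  # make the toy run reproducible

# Toy "game": from the single state 0, action 1 wins (reward 1.0);
# action 0 gives no reward. Either way the game then ends.
def step(state, action):
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return reward

n_actions = 2
Q = [0.0] * n_actions  # the machine's estimated value of each action
alpha = 0.5            # learning rate: how far to nudge estimates

for episode in range(50):
    action = random.randrange(n_actions)   # explore by trying random actions
    reward = step(0, action)
    # Nudge the value estimate toward the reward actually received.
    Q[action] += alpha * (reward - Q[action])

# After training, the machine "prefers" the action that earned rewards.
best = max(range(n_actions), key=lambda a: Q[a])
print(best)  # 1
```

The design point is that nobody labels the right move: the machine discovers it by acting, observing rewards, and updating its value estimates, which is the same basic loop (at vastly greater scale) behind game-playing systems.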
But success at Go, while worthy of headlines, is not nearly as notable as machine learning’s more practical successes in such realms as medicine, industry, and science.
In medicine, machine learning has helped researchers cope with weaknesses in trials testing disease treatments. Such treatments typically rely on average results to determine effectiveness, and can therefore miss possible benefits for small subgroups of patients. One trial, for instance, found that a weight-loss program did not reduce heart problems among people with diabetes. But a machine learning algorithm identified a subset of patients for which weight loss did reduce heart problems, as infectious disease expert Timothy Wiemken and computer scientist Robert Kelley noted in the 2020 Annual Review of Public Health.
Machine learning has also assisted in finding new drugs to test. “Deep learning has been widely applied to drug discovery approaches,” chemist Hao Zhu writes in the latest Annual Review of Pharmacology and Toxicology. “The current progress of artificial intelligence supported by deep learning has shown great promise in rational drug discovery in this era of big data.”
As with discovering new drugs for medicine, machine learning has aided discovery of new materials for industry. Searching for “superhard” materials resistant to wear and tear can be streamlined with machine learning algorithms, as illustrated in a case study that showed “the powerful role that machine learning can play in the identification of new structural materials,” materials scientist Taylor Sparks and colleagues wrote in the 2020 Annual Review of Materials Research.
Besides practical uses, machine learning also offers advantages for basic scientific research. In particle accelerators, such as the Large Hadron Collider near Geneva, protons smashing together produce complex streams of debris containing other subatomic particles (such as the famous Higgs boson). With millions of collisions per second, physicists must wisely choose which events are worth studying. It’s kind of like deciding which molecules to swallow while drinking from a firehose. Machine learning can help distinguish important events and can help identify particles produced in the collision debris.
“Deep learning has already influenced data analysis at the LHC and sparked a new wave of collaboration between the machine learning and particle physics communities,” physicist Dan Guest and colleagues wrote in the 2018 Annual Review of Nuclear and Particle Science.
Limits on learning
But machine learning also has its downsides. Its successes should not blind scientists to its faults.
For one thing, a machine’s “intelligence” is limited by the data it learns from. Machines trained to screen job applicants by analyzing human hiring decisions, for example, can learn biases that discriminate against certain groups.
Even when machines perform well, they are not always as smart as they seem. Reports of skill in recognizing images, for instance, should be tempered by the fact that a machine’s accuracy often is based on its top five “guesses” — if any of the five is correct, the machine gets credit for an accurate identification.
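The “top five” scoring convention is easy to state in code. In this sketch (the labels and confidence scores are made up for illustration), a prediction counts as correct if the true label appears anywhere among the model’s five highest-scoring guesses, even when its single best guess is wrong.

```python
def top5_correct(scores, true_label):
    # scores: dict mapping each candidate label to the model's confidence.
    # Credit the model if the true label ranks among its 5 best guesses.
    top5 = sorted(scores, key=scores.get, reverse=True)[:5]
    return true_label in top5

# Hypothetical output for one image: the model's best guess ("dog") is
# wrong, but "cat" still ranks in the top five, so top-5 scoring counts it.
scores = {"dog": 0.30, "fox": 0.25, "wolf": 0.20,
          "cat": 0.15, "lynx": 0.05, "car": 0.03, "tree": 0.02}
print(top5_correct(scores, "cat"))   # True
print(top5_correct(scores, "tree"))  # False
```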
Often a machine performs accurately but does not understand a task the way a human does. Rather, the machine finds a shortcut that frequently produces a correct answer. “A deep neural network may appear to classify cows perfectly well — but fails when tested on pictures where cows appear outside the typical grass landscape,” Robert Geirhos and collaborators wrote in a recent paper online at arXiv.org. In that case, “grass” is the system’s shortcut indicator for “cow.”
Sometimes machines rely on texture rather than shape to identify objects. If a picture of a cat is converted to look like an embossed image in various shades of gray, a machine might think it’s an elephant.
Such shortcuts may be one reason machines can be easily fooled by adversarial efforts at deception.