Is industrial artificial intelligence destined for an “AI Winter”?
By Dr. Stefan G. Hild, head of data science at ei3 Corporation
Few areas of computer science have, over recent years, generated as much interest, promise, and disappointment as artificial intelligence. The manufacturing industry, AI's latest target application area, is now the center of intense hype around AI-driven predictive maintenance.
Will AI deliver this time or is disappointment inevitable?
In engineering, the development of AI was arguably driven by the need for automated analysis of image data from air reconnaissance (and later satellite) missions at the height of the Cold War in the 1960s. A novel class of algorithms emerged that used layered networks of weighted nodes, iteratively adjusted against training examples, to force convergence of input data toward previously undefined output clusters. For the first time, these algorithms, dubbed “neural networks,” had the ability to self-develop a decision logic based on training input, outside the control of a (human) designer.
The results were often spectacular, but occasionally spectacularly wrong: since the learned concepts could not be inspected, they could not be validated either, leaving the systems untraceable; failures could not be explained.
In the early days, the computational complexity of these algorithms often exceeded the processing power of available computer hardware, at least outside of classified government use. Applying AI to solve real problems proved difficult; virtually no progress was made for more than a decade, a period later referred to as the first “AI Winter,” presumably in analogy to the “Nuclear Winter” and in keeping with the themes of the time. Engineers were forced to wait for Moore's Law (the observation that processing power doubles roughly every 18 months, which held through much of the second half of the 20th century) to catch up with the imagination of 1960s mathematicians.
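To get a feel for what that waiting bought, a back-of-the-envelope sketch in Python (the 18-month figure is the popular rule of thumb, not Moore's original wording):

```python
# Back-of-the-envelope growth under an 18-month doubling period:
# processing power grows by a factor of 2 ** (years / 1.5).
def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    return 2 ** (years / doubling_period_years)

print(growth_factor(15))  # 2**10 = 1024x after 15 years
print(growth_factor(30))  # 2**20, roughly a million-fold after 30 years
```

Sitting out one AI Winter, in other words, delivered about three orders of magnitude in raw compute.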
It finally did and, in the 1980s, expert systems emerged that revived the concepts of AI and found some notable real-world applications, although the concept of fully autonomous learning was often replaced by explicit human-guided teaching. This alleviated some of the issues posed by algorithmic untraceability, but also erased much of the luster of this intelligence.
Temperatures again fell to winter levels—the second AI Winter.
Fast-forward another 30 years, and processing power, storage capacity, and the amount of available data have advanced to a level that might pave the way for yet another attempt at applying AI to real-world problems, based on the hypothesis that, within any given domain, more than enough data is now available to feed relatively simple clustering algorithms running on cheap and plentiful processors, and so create something of value.
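As a flavor of that hypothesis, a minimal sketch using scikit-learn's KMeans on invented two-channel machine readings (the channels, values, and cluster count are all assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical two-channel machine readings, e.g. (temperature, vibration);
# two operating regimes, generated purely for illustration.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[60.0, 0.2], scale=[2.0, 0.05], size=(500, 2))
stressed = rng.normal(loc=[85.0, 0.8], scale=[3.0, 0.10], size=(50, 2))
readings = np.vstack([normal, stressed])

# A deliberately simple clustering algorithm separates the regimes
# without any labels or domain knowledge.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
print(model.cluster_centers_)  # roughly recovers the two regime centers
```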
Industry heavyweights are betting significant resources on the promise of AI and have, without a doubt, demonstrated significant achievements: machines are winning against human contestants in televised knowledge quizzes and the most complex strategy games, and robot vehicles navigate highways with impressive success. Curiously, though, progress from these showcases to broader adoption has been spotty at best: applying quiz-show knowledge management to help doctors diagnose medical conditions appears to have failed, and taking the robot vehicle from the highway to city streets is fraught with autopilot upsets.
The list of failed attempts at AI is longer and growing faster than the list of success stories. Is the next “AI Winter” inevitable?
The fear of another winter is so pervasive in the AI research community that many avoid the two-letter acronym altogether, instead using the less loaded term machine learning, or the more general data science. Tackling the underlying issues would, of course, be preferable to avoiding the challenge at a purely linguistic level.
Managing expectations
Confronted with an AI-based project approach, clients typically react in one of two ways. The first is fear (the HAL 9000 response, in reference to the bad-mannered AI antagonist of Arthur C. Clarke's “2001: A Space Odyssey”): if not of a science-fiction-induced image of evil machines exterminating mankind, then at least of job losses due to automation replacing machine operators, service technicians, mechanics, and other shop-floor craftsmen.
The second is delusion: the belief that there exists a general-purpose machine intelligence that will solve all problems quickly and cheaply; after all, it also won that TV quiz show, right?
Both responses, while equally wrong, are induced by the same misperception: that artificial intelligence and human intelligence are the same kind of intelligence. Nothing could be further from the truth; machines fail miserably at tasks that every five-year-old child can easily master. Consider the game Jenga.
Machine intelligence leaves us in awe of the vast amounts of information it can retrieve, categorize, and serve. This works when the problem is confined to a narrow, well-defined domain. It appears clever, but it is little more than information retrieval; there is never an understanding of the data, the problem, or the question asked. Moreover, there is no creative act.
It has been proposed that it might be better to think of AI as augmented intelligence—AI as a means to extend the reach, availability or precision of human intelligence, much like glasses enhance aging human eyesight. AI assists human experts, rather than replacing—or exterminating—them!
Controlling the application domain
The absence of any creative ability implies that AI systems have to learn exclusively by example, with mathematical interpolation being the only way to fill in gaps between examples. For this to work well, the application domain must be narrow, and the training data must be both plentiful and clean.
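A toy illustration of that constraint, using invented wear measurements: between examples, interpolation yields a plausible answer; beyond them, it does not.

```python
import numpy as np

# Hypothetical training examples: bearing wear measured at a few speeds.
speeds = np.array([100.0, 200.0, 300.0, 400.0])  # rpm
wear = np.array([0.10, 0.22, 0.35, 0.51])        # mm per 1000 h (invented)

# Between examples, interpolation produces a plausible estimate...
print(np.interp(250.0, speeds, wear))  # 0.285, halfway between neighbors

# ...but outside the training range np.interp simply clamps to the
# nearest example; the system has no notion of what lies beyond.
print(np.interp(600.0, speeds, wear))  # 0.51, almost certainly wrong
```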
While the amount of data needed to learn the relationships between variables obviously depends on the complexity of those relationships, the cleanliness of the data is often harder to manage. Real-world data sets are full of noise, and most learning algorithms are extremely sensitive to false input in their training sets; many AI algorithms perform well in the lab, only to fail miserably in the real world when subjected to noisy input data.
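How sensitive? A contrived numpy sketch in which a single corrupted reading among ten visibly skews a least-squares fit:

```python
import numpy as np

# Hypothetical calibration data with a perfectly linear relationship.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0

slope_clean, _ = np.polyfit(x, y, 1)
print(slope_clean)  # 2.0, the true slope

# A single corrupted reading (a stuck sensor, say) skews the whole fit.
y_noisy = y.copy()
y_noisy[9] = 0.0    # one false input among ten samples
slope_noisy, _ = np.polyfit(x, y_noisy, 1)
print(slope_noisy)  # about 0.96, less than half the true slope
```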
Aside from measurement noise, changing environmental or operating conditions (operational noise) are another cause for concern and failure: algorithms are forced to adapt their baseline continuously, effectively re-entering the training phase whenever such an operational change occurs, as sketched below. In such cases, overfitting or collinearity induced by too much data may eventually be as detrimental to algorithm performance as too little data.
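A sketch of that baseline problem, on an invented sensor stream whose operating point shifts mid-stream: a fixed baseline floods the operator with false alarms after the shift, while an adaptive baseline re-learns and alarms only through the transition.

```python
import numpy as np

# Hypothetical sensor stream whose operating point shifts mid-stream,
# e.g. after a product changeover; all numbers are illustrative.
rng = np.random.default_rng(3)
stream = np.concatenate([
    rng.normal(60.0, 1.0, 200),  # original operating condition
    rng.normal(75.0, 1.0, 200),  # new condition after the changeover
])

def count_alarms(stream, alpha):
    """Flag samples more than 5 units from the current baseline.
    With alpha=0 the baseline stays fixed; with alpha>0 it adapts."""
    baseline, alarms = stream[0], 0
    for value in stream:
        alarms += abs(value - baseline) > 5.0
        baseline = (1 - alpha) * baseline + alpha * value
    return alarms

print(count_alarms(stream, alpha=0.0))   # fixed baseline: ~200 false alarms
print(count_alarms(stream, alpha=0.05))  # adaptive: ~20, only the transition
```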
Best results are therefore achieved for systems that are narrowly defined, stable, and well understood based on a clean data set derived from real-world operation. Results of high accuracy can be achieved for such systems, but be aware that uncertainties—as small as they may be—compound quickly to levels that render the final results useless when systems are composed of several such sub-systems.
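How quickly do such uncertainties compound? Assuming, purely for the arithmetic, that each sub-system model is right 95% of the time:

```python
# If each sub-system model is right 95% of the time, a composite of
# n such sub-systems is only right about 0.95 ** n of the time.
for n in (1, 5, 10, 20):
    print(n, round(0.95 ** n, 2))
# 1 -> 0.95, 5 -> 0.77, 10 -> 0.6, 20 -> 0.36
```

A machine model composed of twenty such sub-systems would be right only about a third of the time.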
Artists, not their tools, make art
Although the defining property of artificial-intelligence systems is that they can learn unknown concepts purely from training input, guidance by human experts greatly reduces development time, the amount of data required, and the danger of untraceable findings, while increasing accuracy. AI algorithms are a tool in the data scientist's bag; the experts drive the project, not the tool.
Like a chisel, AI algorithms are implements that will create art only in the hand of an artist.
AI projects, like any other software project, benefit greatly from an agile, iterative approach built on discussion of the algorithms' findings between data scientists and domain experts: a physicist, a design engineer, or maybe the machine repairman.
It is the skill of those domain experts on which the AI system is built; taking their input throughout the development process is as obvious as it is essential.
Let the heatwave pass
Can another AI winter be avoided?
Hype surrounding AI has pushed the industry into a heatwave. Dropping temperatures are not only normal but desirable and ultimately healthy. Reducing over-inflated expectations and focusing on winning the AI war one battle at a time will establish confidence: simple machine parts such as bearings and heating elements hold the key to successful projects. Predicting their failure is achievable, yet yields disproportionate benefits for overall machine operation.
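As a flavor of how simple such a win can be, a minimal sketch that fits a linear wear trend to invented bearing vibration readings and extrapolates to an assumed alarm threshold:

```python
import numpy as np

# Hypothetical bearing vibration readings trending upward over weeks.
rng = np.random.default_rng(4)
weeks = np.arange(12, dtype=float)
vibration = 1.0 + 0.15 * weeks + rng.normal(0.0, 0.05, weeks.size)  # mm/s

# Fit a linear wear trend and extrapolate to an assumed alarm threshold.
slope, intercept = np.polyfit(weeks, vibration, 1)
threshold = 4.0  # illustrative alarm level, not a vendor figure
weeks_to_alarm = (threshold - intercept) / slope
print(f"predicted threshold crossing around week {weeks_to_alarm:.0f}")
```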
Optimizing process variables to reduce energy consumption has a fast, measurable, positive effect on machine yield and on the operator's sustainability record. Applications such as these are great success stories for a promising and valuable technology, and they deliver real financial benefit to the users who adopt them. Ultimately, these real-world successes will ensure that temperatures drop only to seasonal norms.