Artificial intelligence for cause, effect & optimizing your manufacturing processes
By Cyrus Shaoul, CEO of Leela AI
The close link between the human brain and our eyes is an immensely powerful tool that we all use to “keep an eye on” what is going on around us. For years, manufacturers have drawn upon a similar concept with “seeing machines” that combine cameras and computers to improve product inspection. Lately, we have seen a related category of visual-AI applications designed to analyze productivity.
Like product inspection, the new process-intelligence software applies AI algorithms to camera data. Yet, instead of inspecting still images to improve quality, it inspects processes within video sequences to improve productivity.
At the highest level, process-inspection software can achieve a holistic view of production operations. Visual-intelligence software can identify and time every step in the manufacturing process and combine data from multiple cameras to create a unified analysis. The software acts like a continuous time-and-motion study, measuring and analyzing complex interactions between people, tools, machinery, robots, parts and products. The resulting insights enable manufacturers to identify bottlenecks and safety violations, compare productivity between shifts and locations, and capture and transfer best practices.
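To make the idea concrete, here is a minimal sketch (in Python) of one way timed step events from several cameras could be merged into per-shift averages and scanned for a bottleneck. It is an illustration only; the StepEvent record, its field names, and the sample numbers are assumptions for this example, not any vendor's actual data model.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

# Hypothetical record emitted by the vision system: one detected process step,
# with its camera, shift, and start/end timestamps in seconds.
@dataclass
class StepEvent:
    step: str       # e.g. "load fixture", "fasten bolts", "inspect"
    camera: str
    shift: str
    start: float
    end: float

def summarize(events):
    """Merge step timings from all cameras into per-step, per-shift averages."""
    durations = defaultdict(list)
    for ev in events:
        durations[(ev.step, ev.shift)].append(ev.end - ev.start)
    return {key: mean(vals) for key, vals in durations.items()}

def bottleneck(summary, shift):
    """The slowest average step in a shift is the first bottleneck candidate."""
    per_shift = {step: avg for (step, s), avg in summary.items() if s == shift}
    return max(per_shift, key=per_shift.get)

events = [
    StepEvent("load fixture", "cam-1", "day", 0.0, 14.2),
    StepEvent("fasten bolts", "cam-2", "day", 14.2, 52.9),
    StepEvent("inspect", "cam-3", "day", 52.9, 61.0),
]
print(bottleneck(summarize(events), "day"))   # -> "fasten bolts"
```

The same per-step summaries could just as easily be compared across shifts or sites, which is where the shift field comes in.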
While IIoT analytics focuses on equipment status, visual-process inspection fills in the gaps between the machines. This is especially useful for high-touch operations with many human workers, as well as high-mix operations with frequent process changes. Manufacturers can quickly adjust to changes in labor, supplies, and products, thereby increasing capacity and reducing downtime.
Despite the differences between product and process inspection, many process-inspection solutions are based on the traditional neural networks (NNs) used for product inspection. The drawback is that it is time-consuming and expensive to train NN algorithms to recognize complex processes.
Traditional NNs may be able to handle a workstation application with limited actions and objects, but they struggle with holistic productivity analysis. Huge amounts of training video are required, and small changes in the process require massive retraining. In many cases, the software is unable to identify complex processes at all.
The case for causal AI
To overcome these limitations, a new type of hybrid causal-NN AI has emerged that combines causal/symbolic reasoning (or common sense) with traditional NNs. Causal AI can identify relationships between cause and effect, thereby streamlining automatic process identification.
Causal AI is a new twist on early AI research, which attempted to imbue computers with symbolic reasoning. Drawing on child-development research by Jean Piaget, that early work aimed to spark self-directed learning by imitating the way humans learn.
Symbolic AI researchers failed to achieve their goals due (in part) to computing constraints. As computing power soared, the industry shifted to NNs, which excel at recognizing patterns and correlating perception and memory, but struggle with understanding cause and effect or learning new concepts.
Just as a child learns through sensorimotor experience by moving its hands and visually perceiving the results, a causal AI learns by testing out hypotheses. Yet, a child also learns by recognizing and remembering objects. An AI architecture that combines the best of both methods has more in common with the human brain and is better suited to human collaboration.
In a hybrid causal-NN AI, traditional NNs do most of the work of translating pixels into classified objects. The causal-learning network supervises the NNs, predicting the results of actions and stepping in with feedback when the NNs get stuck. When the causal agent has low confidence, the NNs can help confirm or deny its predictions, providing a feedback loop in the reverse direction.
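The sketch below outlines that division of labor under stated assumptions: nn_classify and causal_predict are stand-in stubs, and the 0.6 confidence threshold is arbitrary. It is a conceptual outline of the two-way feedback loop, not an implementation of any particular product.

```python
import random

def nn_classify(frame):
    """Perception-NN stub: returns (label, confidence) for the current frame."""
    return "torque wrench picked up", random.uniform(0.3, 1.0)

def causal_predict(state, action):
    """Causal-model stub: predicts (expected label, confidence) after an action."""
    return "torque wrench picked up", random.uniform(0.3, 1.0)

def interpret(frame, state, action, threshold=0.6):
    label, nn_conf = nn_classify(frame)
    expected, causal_conf = causal_predict(state, action)

    if nn_conf < threshold <= causal_conf:
        # The NN is stuck: the causal agent's prediction fills the gap.
        return expected
    if causal_conf < threshold <= nn_conf:
        # The causal agent is unsure: the NN's observation confirms or denies it.
        return label
    # Both confident: a disagreement is a signal worth feeding back into training.
    return label if label == expected else "review: " + label + " vs " + expected

print(interpret(frame=None, state="at workstation", action="reach for tool"))
```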
A similar hybrid architecture drives human thinking. According to Daniel Kahneman’s book ‘Thinking, Fast and Slow,’ the brain has two distinct operating systems. System 1 is designed for fast responses and handles most of our thinking. (It is also frequently wrong.) System 2 steps in occasionally to apply its more logical thinking to important decisions. In the hybrid causal-NN architecture, the NN is System 1 and the causal agent is System 2.
Faster training, more accurate analysis
Using a hybrid causal-NN architecture, manufacturers can greatly reduce AI training time. The AI can heuristically learn on its own with as little as 10% of a typical NN’s training data. A causal AI can also better explain its reasoning, making it easier to improve.
Another advantage is more accurate analysis. Much of the difficult work in process inspection involves analyzing the actions of people using techniques such as pose detection. Whereas NNs often struggle with this, a causal agent can correct the errors by applying physics models, action-inference heuristics, and temporal analysis—estimating a feasible range for human motion over time.
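As a rough illustration of that last idea, the following sketch rejects pose keypoints that imply a physically implausible wrist speed between frames. The 4 m/s limit, the 30 fps frame rate, and the simple carry-forward correction are assumptions chosen for clarity, not parameters from any real system.

```python
import math

MAX_WRIST_SPEED = 4.0  # metres per second; rough upper bound for assembly work

def plausible(prev_xy, curr_xy, dt, max_speed=MAX_WRIST_SPEED):
    """Is the implied keypoint velocity within a physically feasible range?"""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return math.hypot(dx, dy) / dt <= max_speed

def smooth_track(keypoints, dt=1 / 30):
    """Replace implausible detections with the last plausible position."""
    cleaned = [keypoints[0]]
    for curr in keypoints[1:]:
        prev = cleaned[-1]
        if plausible(prev, curr, dt):
            cleaned.append(curr)
        else:
            # Carry the last good position forward (simplest possible fix-up).
            cleaned.append(prev)
    return cleaned

track = [(0.10, 0.50), (0.12, 0.51), (2.50, 0.90), (0.15, 0.52)]  # metres
print(smooth_track(track))  # the jump to (2.50, 0.90) is rejected
```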
The causal AI comprehends concepts such as object permanence, which can stump an NN, and can apply commonsense reasoning to more quickly understand what is happening in context. With explicit training, the causal model can eventually learn the relationships between many different perception and action modalities and connect cause to effect. The result is a higher-level visual-intelligence solution that can help manufacturing operations achieve benefits such as decreased cycle times and increased capacity.
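As a closing illustration, the toy sketch below shows what object permanence can mean in tracking terms: a part that disappears behind a worker’s arm keeps its track for a short grace period instead of being forgotten. The grace period and the track structure are assumptions made for the example.

```python
OCCLUSION_GRACE_FRAMES = 45  # ~1.5 s at 30 fps

def update_tracks(tracks, detections):
    """tracks: {object_id: frames_since_seen}; detections: set of object ids."""
    updated = {}
    for obj_id, missed in tracks.items():
        if obj_id in detections:
            updated[obj_id] = 0                  # seen again: reset the counter
        elif missed + 1 <= OCCLUSION_GRACE_FRAMES:
            updated[obj_id] = missed + 1         # occluded, but presumed present
        # else: gone too long, so the track is dropped
    for obj_id in detections:
        updated.setdefault(obj_id, 0)            # brand-new object enters the scene
    return updated

tracks = {"part-17": 0}
for frame_detections in [set(), set(), {"part-17"}]:   # part briefly occluded
    tracks = update_tracks(tracks, frame_detections)
print(tracks)  # {'part-17': 0} -- the part was never "lost"
```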