Pandemic redefines data architecture for manufacturers
COVID-19 has upended traditional work structures. Business experts and analysts predict that current work-from-home scenarios could become permanent for many employees as companies realize productivity gains and improved worker satisfaction.
The increase in remote work is leading to another awakening in the manufacturing sector: the influx of data to and from offsite activities is motivating leaders to reconsider their current systems architecture. Instead of focusing on moving data from point to point, forward-thinking leaders understand the benefits of a standardized, semantic data model that scales and accelerates projects over the long term. Moving to a standard data model removes a significant integration barrier for manufacturers.
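To make the idea concrete, here is a minimal sketch of a standardized, semantic model in Python. The names (MachineStatus, from_plc_tags, the tag keys) are hypothetical, not any vendor’s API; the point is that every source system maps its raw payload into one shared structure, so downstream consumers integrate against the model rather than against each machine.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class MachineStatus:
        """One standard, semantic model shared by every source system."""
        site: str
        machine_id: str
        state: str              # normalized: "idle", "running" or "down"
        parts_per_hour: float
        sampled_at: datetime

    def from_plc_tags(tags: dict) -> MachineStatus:
        """Map one vendor's raw tag names into the shared model."""
        return MachineStatus(
            site=tags["PLANT_CODE"],
            machine_id=tags["MACH_NO"],
            state={0: "idle", 1: "running", 2: "down"}[tags["STS"]],
            parts_per_hour=float(tags["RATE_PPH"]),
            sampled_at=datetime.now(timezone.utc),
        )

Each new machine or system needs only its own small mapping function; nothing downstream has to change.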
Unfortunately, many companies have become overly reliant on custom-coded integration software to get IIoT projects up and running quickly, without regard for future technology or business needs. It’s a common problem we often refer to as “technical debt”: the fast and easy route taken in the early stages of integration must be repaid later, when it’s time to replace machines, optimize processes or adapt production lines to new products. Technical debt is becoming more apparent as manufacturers move more of their work offsite and expand their remote-monitoring capabilities.
Aiming for quick wins
The temptation to chase quick wins is understandable given the current economic environment. COVID-19 is actually speeding IoT adoption for some manufacturers. While it might seem logical that belt-tightening measures would slow or halt IoT investments, studies show that’s not always the case. Body-temperature monitoring, contactless operations and contact tracing are becoming standard safety practices in manufacturing plants around the world.
This type of vigilance is leading to more—not less—technology in the factory to monitor worker activity and automate processes. For instance, about one-fourth of manufacturers in Asia expect to fast-track automation programs to compensate for pandemic-related worker shortages, according to a report published by McKinsey & Co. in July. And 73% of IoT users said COVID-19 has prompted them to accelerate their pace of IoT adoption, according to a survey published by British telecommunications company Vodafone.
Many organizations are already accelerating adoption of solutions that enable quick wins during the pandemic, including contact tracing, enforcement of new safety guidelines and support for remote collaboration, the McKinsey study shows. However, adoption rates vary for solutions that merge IT and operational technology (OT) data, such as digital twin simulations and advanced analytics.
The technical debt trap
The McKinsey study echoes many of the same problems we’ve heard from manufacturers over the years as they try to scale their IIoT implementations. Many moved fast, hoping for immediate results, and relied on internal development teams to deploy solutions using custom scripts, code and APIs.
Custom integration coding may work for linking traditional back-office platforms like ERP and CRM systems, which have tried-and-true data models that rarely change. But OT machinery and systems are more complex. Machinery has evolved over time with proprietary interfaces and very little adoption of standard data structures. Similarly, whether it’s a scheduling, quality-management or manufacturing-execution system, vendors develop applications with a unique system-specific language and plant model, says Charlie Gifford, a senior smart-manufacturing consultant with 21st Century Manufacturing Solutions.
During a recent discussion about technical debt, Gifford noted that each system may have different elements with the same name, or a single object with multiple names across systems. For example, the term “mix,” “run” or “order” in one system may mean something else in another. The result is corrupt data, because manufacturers lack a “single version of the truth.” As good data moves from process systems into operations and business systems, an element’s content and definition can be lost through poor mapping by uninformed data scientists.
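A minimal sketch of that naming problem, and of one common fix, appears below: a per-system alias map that normalizes every synonym to a single canonical field name. The system names and aliases here are hypothetical, chosen only to mirror Gifford’s “mix/run/order” example.

    # Each source system uses its own word for the same concept.
    ALIASES = {
        "mes":       {"run": "work_order", "qty": "quantity"},
        "scheduler": {"order": "work_order", "planned_qty": "quantity"},
        "batching":  {"mix": "work_order", "batch_size": "quantity"},
    }

    def normalize(system: str, record: dict) -> dict:
        """Rename system-specific keys to canonical ones; pass the rest through."""
        mapping = ALIASES[system]
        return {mapping.get(key, key): value for key, value in record.items()}

    print(normalize("batching", {"mix": "B-1042", "batch_size": 500}))
    # -> {'work_order': 'B-1042', 'quantity': 500}

Maintaining these mappings in one governed place, rather than scattering them through point-to-point custom code, is what preserves a “single version of the truth.”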
In today’s economic climate, this is a recipe for disaster. Think about how quickly things are changing. From shutdowns to restarts to completely new business models, manufacturers must be ready to pivot. If manufacturers are forced to rewrite code every time they make a change on the factory floor, such as adding machines or new product lines, they’re losing ground to competitors.
DataOps closes the gap
Despite the challenges, the current health crisis has only strengthened the case for more digitization in manufacturing, according to the McKinsey study. To accomplish this, the report’s authors contend that:
“Many, if not most, companies will want to assess their current IT and OT systems, upgrading them to deliver the horsepower that advanced use cases in digital and analytics depend on—particularly to support the Internet of Things. A scalable, obsolescence-resistant IT stack is essential.”
Industrial DataOps is the missing piece that many manufacturers have been seeking in their quest to deliver OT data to an IT environment in a way that makes sense to key decision makers. In the early stages of the Industry 4.0 revolution, manufacturers borrowed technologies coming out of the consumer world—such as data lakes, analytics and visualization software—from the likes of Amazon, Microsoft and Google. They combined those tools with Wi-Fi, cloud computing and new sensing technologies to bring greater connectivity to their operations.
But a gap exists in the middle when it comes to providing enough context around the information so data scientists can understand it. Industrial DataOps offers a separate data-abstraction layer, or hub, to securely collect data in standard models for distribution across on-premises and cloud-based applications.
An industrial DataOps approach prevents the data degradation that can occur in traditional point-to-point configurations, because the information is no longer buried in custom code between applications; all systems connect through a single integration hub. A seamless flow of ready-to-use data is even more critical now that more employees are working offsite or on furlough while manufacturers continue to ramp up operations.
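The arithmetic behind the hub model is simple: with N sources and M consumers, point-to-point integration needs on the order of N × M custom links, while a hub needs only N + M, because each side talks to the shared model once. Below is a toy publish/subscribe sketch of that hub pattern; the DataHub class and topic names are illustrative, not any product’s implementation.

    from collections import defaultdict
    from typing import Callable

    class DataHub:
        """Toy integration hub: sources publish modeled records once,
        and any number of consumers subscribe by topic."""
        def __init__(self):
            self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, record: dict) -> None:
            for handler in self._subscribers[topic]:
                handler(record)

    hub = DataHub()
    hub.subscribe("machine_status", lambda r: print("dashboard:", r))
    hub.subscribe("machine_status", lambda r: print("historian:", r))
    hub.publish("machine_status", {"machine_id": "M-7", "state": "running"})

Adding a new consumer, say a quality-analytics model, means one new subscription, not a fresh round of custom code against every source.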
The ‘new normal’
Even before the pandemic, remote-facility visibility was important for many manufacturers, especially in process industries such as pulp and paper or chemicals. These companies often have many sites with multiple systems that perform a wide range of operations—from continuous raw-materials processing to hybrid batch and discrete packaging. Integrating data from all of these processes is a major hurdle, and finding and maintaining technical support teams at each site isn’t always practical.
To reach the level of performance analysis they need, the corporate group can define uniform data models and deploy them to the locations, which then install them in an edge-native industrial DataOps hub. Engineers at the corporate level can ingest information objects from each location into standard analytic models or visualization dashboards. This allows them to add or modify systems without costly downtime to rewrite code or rip and replace obsolete technologies.
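A sketch of that corporate-to-site pattern, with the same caveat that the model name and fields are hypothetical: the corporate group publishes one declarative model definition, and each site’s edge hub conforms local data to it before sending anything upstream.

    # Corporate-defined model, distributed to every site as plain data
    # (in practice this might be a versioned JSON or YAML document).
    OEE_MODEL = {
        "site": str,
        "line": str,
        "availability": float,
        "performance": float,
        "quality": float,
    }

    def conform(record: dict) -> dict:
        """Edge-side check: coerce each field to the corporate model,
        failing loudly if a site payload is missing anything."""
        return {field: typ(record[field]) for field, typ in OEE_MODEL.items()}

    # A site payload with local quirks still comes out uniform.
    print(conform({"site": "Plant-3", "line": "L2",
                   "availability": "0.92", "performance": 0.88, "quality": 0.99}))

Because every site emits the same shape, corporate analytics and dashboards never need to know which local systems produced the data.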
Corporate decision-makers receive uniform, high-quality data. This shortens analytics cycle times and provides the momentum and agility organizations previously lacked to pursue enterprise-level digital transformation.
The seeds of the “new normal” were planted long ago. Manufacturers knew they would need more efficient and effective ways to utilize IoT data; COVID-19 is simply hastening the inevitable. Rewriting code for every modification is no longer sustainable. An industrial DataOps solution harmonizes data so manufacturers can remain nimble amid uncertain times.
By John Harrington, co-founder & chief business officer at HighByte