
Building your edge compute infrastructure to reap the full value from your data

Feb. 25, 2022
What the heck is "data scaffolding"?

By Aric Prost, senior global director OEM with Stratus Technologies

In an era of unprecedented industrial information, much of this new data may be bottlenecked where it is generated at the operational edge. This data is at risk of being stranded due to bandwidth and connectivity constraints, aging IT systems, or latency. 

Existing infrastructure requires modernization and new efficiency to create value in the form of increased safety, equipment availability, and profit achieved through actionable insights. 

But how do you do that? 

Organizations can rapidly modernize factory-floor equipment with edge-computing platforms, whether to run HMI/SCADA applications close to critical equipment, store and analyze sensor data, or consolidate industrial software to digitalize processes or create smart machines. Properly architected edge-computing infrastructure creates the data "scaffold" needed to harness data locally for sharing across the plant floor or to the cloud. Much like a physical scaffold, data scaffolding creates a supportive framework with which to turn data into actionable insights and competitive advantages. A proper data scaffold facilitates data flow and information availability throughout an organization.

Acquire edge data for actionable insight 

According to a McKinsey report published at the outset of the pandemic, digitizing operations was a rapid step manufacturers needed to take in order to adjust to the new normal. The report noted that "greater use of advanced analytics and big data could optimize risk management." Indeed, manufacturers’ digitization efforts are often measured against how well they improve operations reliability and safety/environmental performance.

To be effective, these digitization projects must be geared toward improving analytics while laying the groundwork for predictive maintenance, artificial intelligence (AI), and machine learning (ML). Each requires large amounts of high-quality data, and manufacturing industries have struggled to make their data usable for these endeavors. 

Edge computing provides the local processing and data storage to enable these capabilities. This is especially crucial for more complex equipment, where data often flows in a closed loop, creating silos of databases and information. For digital-transformation projects to succeed, data sources need to be identified, filtered and connected.
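As a concrete sketch of that last step, an edge node might poll local data sources, filter out implausible readings before shipping them upstream, and stamp each record so data from separate silos can be correlated later. The source names, tags, and thresholds below are invented for illustration and are not from the article.

```python
from datetime import datetime, timezone

# Hypothetical raw readings from two local data sources (e.g., PLC tags).
RAW_READINGS = [
    {"source": "pump_plc", "tag": "temp_c", "value": 71.4},
    {"source": "pump_plc", "tag": "temp_c", "value": -999.0},  # sensor fault code
    {"source": "valve_plc", "tag": "pressure_bar", "value": 4.2},
]

# Plausible ranges per tag; out-of-range readings are filtered at the
# edge instead of being forwarded to the plant network or cloud.
VALID_RANGES = {"temp_c": (-40.0, 150.0), "pressure_bar": (0.0, 16.0)}

def filter_and_connect(readings):
    """Keep in-range readings and stamp each with a UTC timestamp so
    records from different sources can be correlated downstream."""
    kept = []
    for r in readings:
        lo, hi = VALID_RANGES[r["tag"]]
        if lo <= r["value"] <= hi:
            kept.append({**r, "ts": datetime.now(timezone.utc).isoformat()})
    return kept

clean = filter_and_connect(RAW_READINGS)
print(len(clean))  # the fault-coded -999.0 reading is dropped
```

In a real deployment the readings would come from a protocol client (OPC UA, Modbus, MQTT) rather than a static list, but the identify-filter-connect pattern is the same.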

Issues with data can arise from an insufficient quantity or availability of data, limited access to databases, corrupted or incomplete data, mismatched or missing formats and tags between sources, and security issues. All of this can derail the value of data acquisition. With edge computing, manufacturers can more easily organize, analyze and protect data to better understand how machines are working together, how well they fit in the process, and where improvements can be made.
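One of those issues, mismatched formats and tags between sources, can be handled with a small normalization layer at the edge that maps each source's naming and units onto one canonical schema. The tag aliases and conversion here are hypothetical examples, not part of any specific product.

```python
# Different machines often expose the same signal under different tag
# names and units; map them all to one canonical schema at the edge.
TAG_ALIASES = {
    "TT-101": "temp_c",   # transmitter naming convention, already Celsius
    "TempF": "temp_c",    # legacy HMI export, Fahrenheit
    "temp_c": "temp_c",   # already canonical
}
UNIT_CONVERSIONS = {"TempF": lambda f: (f - 32.0) * 5.0 / 9.0}

def normalize(record):
    """Return a record in the canonical schema, or None if it is
    incomplete (one form of the corrupted/missing-data problem)."""
    tag, value = record.get("tag"), record.get("value")
    if tag not in TAG_ALIASES or value is None:
        return None
    convert = UNIT_CONVERSIONS.get(tag, lambda v: v)
    return {"tag": TAG_ALIASES[tag], "value": round(convert(value), 2)}

print(normalize({"tag": "TempF", "value": 212.0}))  # {'tag': 'temp_c', 'value': 100.0}
```

Rejected records (`None`) can be logged locally for diagnosis rather than silently polluting downstream analytics.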

Edge computing enables smart machines to become smarter

Digital transformations need free-flowing data—information availability—in order to do the higher-level analytics required for AI, machine learning, predictive maintenance, and big-data projects. Unfortunately, production, operation and equipment data often exist in silos, closed off from the rest of the organization. These data sources need to be connected to see the full picture and ensure decisions are based on identification of the root cause.

In this way, data platforms or architectures are helpful scaffolding; they allow companies to collect, store, analyze and share data with verified individuals on both local and remote networks. This ensures faster information-sharing within an organization, and faster insights as a result. Data scaffolds can also help fill in missing data, which helps organizations utilize AI, ML and predictive maintenance—and paves the way for resiliency in a rapidly shifting environment. Along with this, edge-computing systems can integrate with modern architectures while adding functionality as needs change, giving manufacturers more flexibility to interpret data and make adjustments as needed.
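As a toy illustration of "filling in missing data" locally: an edge layer could linearly interpolate short gaps in a sampled time series before handing it to an analytics pipeline, so models downstream see a continuous signal. This stdlib-only sketch is illustrative only, under the assumption of evenly spaced samples; it is not a feature of any particular platform.

```python
def fill_gaps(series):
    """Linearly interpolate interior None gaps in an evenly sampled
    series. Leading/trailing gaps are left as-is (nothing to anchor
    the interpolation on that side)."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            start = i - 1          # last known value before the gap
            j = i
            while j < len(out) and out[j] is None:
                j += 1             # j = first known value after the gap
            if start >= 0 and j < len(out):
                step = (out[j] - out[start]) / (j - start)
                for k in range(i, j):
                    out[k] = out[start] + step * (k - start)
            i = j
        else:
            i += 1
    return out

print(fill_gaps([10.0, None, None, 16.0]))  # [10.0, 12.0, 14.0, 16.0]
```

Whether interpolation is appropriate depends on the signal; for alarm or state data, a last-known-value hold or an explicit "missing" flag is usually safer.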

Unlocking data at the edge

According to IDC, pandemic-induced workforce changes and operations practices will “be the dominant accelerators for 80% of edge-driven investments and business model changes in most industries” through 2023. 

Manufacturing isn’t immune to these trends. Digitization efforts connect data providers to a larger analytic framework and will help drive larger business decisions with all factors from the shop floor to the top floor. As physical devices become increasingly sophisticated (think sensors, valve actuators, motor starters, etc.), they’ll have analytics capabilities on the same level as PCs. Data from these devices—and a company’s data writ large—must be managed correctly. Any gaps in how an organization manages its data degrade its ability to make sound decisions and can affect critical processes like product changeover, planned downtime, equipment replacement and processing changes. 

But data is complex, and the relationships of how data are stored, shared and used continue to evolve. Operating and manufacturing companies have moved toward digitization and integrating data into their processes and decision-making, which is a vital step toward agility in a landscape that already depends on the data it can access. 

However, data on its own isn’t enough to stay nimble: decision-makers must make sure they have the proper scaffolding in place for their data. Data must be "unlocked" in such a way that it can flow through an organization. 

Without the proper data scaffold, digital-transformation projects can easily fall flat. By deploying a modern edge-computing architecture, manufacturers can unlock valuable data from critical equipment and gain the insight and efficiency of AI, ML, and predictive analytics.