The era of the Smart Factory and Industry 4.0 may be here, but production downtime still casts a shadow over most manufacturing facilities, oil refineries and water treatment plants. Indeed, almost every factory loses 5% of its production capacity to faulty equipment, a loss that for the automotive sector alone is worth 28 million euros daily.
The mission to tackle the challenge of downtime is ongoing. However, it has progressed significantly in recent years, driven by Big Data. We are now flooded with data from a variety of sources that tell us far more about the inner workings of assets and equipment. The challenge remains making sense of, and integrating, all the intelligence now available to enhance financial and operational performance, says Alessandro Chimera, industry consultant at TIBCO Software.
The reactive approach
Looking back at the evolution of maintenance, it’s clear that we have come a long way – from a starting point of no data insight at all. For example, for the average factory of the 1980s, mechanical wear and tear routinely brought operations to a standstill. As the production line lay idle until the fault was diagnosed and repaired, costly and frustrating downtime was incurred.
With no immediate insight into the causes of the problem, and an entirely reactive approach to maintenance, average uptime and asset reliability were just 50%, a situation that reduced productivity and had the capacity to harm brand and reputation.
Taking control: Scheduled maintenance
At the turn of the 21st century, manufacturers could no longer afford the disruption caused by random breakdowns. Greater competition and regulation demanded more reliable asset management and signalled the introduction of planned or scheduled maintenance. This meant solving a problem before it had the chance to occur or escalate, with regular scheduled services aimed at extending the lifespan of systems or equipment.
Some may say it was overzealous; regular pre-emptive checks meant production would still grind to a halt and, while expected, this remained costly and disruptive. As such, attention turned to the need to better understand the context behind machine faults to drive a more targeted and measured response to maintenance.
Data-driven proactive maintenance
As a tool to identify the root causes of failure and apply the learning to avoid a repeat, data ushered in a new era of proactive maintenance. Machine sensors measuring factors such as temperature, pressure, flow, voltage or current began to unlock the inner workings of equipment. This drove the emergence of Root Cause Analysis, a problem-solving methodology used to determine which factors were the most effective predictors of machine failure. Meanwhile, statistical process monitoring and defect elimination strategies also came into play.
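The statistical process monitoring mentioned above can be as simple as a control chart: compute limits from a baseline of healthy readings and flag anything that falls outside them. The sketch below illustrates the idea with a Shewhart-style mean ± 3 standard deviation check; the temperature readings and limits are illustrative assumptions, not real plant data.

```python
# A minimal sketch of statistical process monitoring: flag sensor
# readings that fall outside control limits derived from a baseline.
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Compute lower/upper control limits as mean +/- k standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return mu - k * sigma, mu + k * sigma

def out_of_control(readings, lower, upper):
    """Return (index, value) pairs that breach the control limits."""
    return [(i, v) for i, v in enumerate(readings) if v < lower or v > upper]

# Illustrative baseline of healthy temperature readings (degrees C)
baseline = [71.8, 72.1, 71.9, 72.3, 72.0, 71.7, 72.2, 72.1, 71.9, 72.0]
lower, upper = control_limits(baseline)

live = [72.0, 72.2, 71.8, 75.6, 72.1]   # 75.6 is an anomalous spike
alerts = out_of_control(live, lower, upper)
print(alerts)  # → [(3, 75.6)]
```

A breach of the limits does not diagnose the fault by itself, but it localises the anomaly so that root cause analysis can start from the right signal.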
In today’s fast and complex operating environment, where even the smallest delay can mean a loss of competitive advantage, we need to go a stage further. It’s why predictive maintenance has emerged as the game changer, enabling industrial companies to anticipate maintenance requirements and respond accordingly to avoid both breakdowns and maintenance delays.
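At its simplest, anticipating a maintenance requirement means projecting a degradation trend forward to estimate when a failure threshold will be crossed. The sketch below fits a least-squares line to recent vibration readings and returns an estimated remaining-life figure; the readings, units and threshold are hypothetical, and production systems would use far richer machine learning models.

```python
# A minimal predictive-maintenance sketch: fit a straight line to recent
# vibration readings and estimate how many cycles remain before a known
# failure threshold is crossed.

def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

def cycles_to_failure(ys, threshold):
    """Cycles until the fitted trend reaches the failure threshold."""
    slope, intercept = linear_fit(ys)
    if slope <= 0:
        return None  # no degradation trend detected
    x_fail = (threshold - intercept) / slope
    return max(0, round(x_fail - (len(ys) - 1)))

vibration = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]  # mm/s, rising steadily
print(cycles_to_failure(vibration, threshold=3.0))  # → 5
```

An estimate like this lets maintenance be scheduled inside the remaining window, avoiding both the breakdown and an unnecessarily early service.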
Here, machine learning and streaming analytics combine, as connected real-time data brings visibility to […]
The post Why industrial analytics is the asset for today’s production line appeared first on IoT Now – How to run an IoT enabled business.
Read more here: www.m2mnow.biz/feed/
Posted on: February 14, 2018