Why AI Still Can’t Replace Analysts: A Predictive Maintenance Example
New AI models like GPT-4, Claude 3, and Gemini can process and summarize large volumes of unstructured data, generate forecasts, and draw analytical conclusions. Generative AI is modeling proteins, optimizing logistics, and predicting consumer behavior. According to McKinsey, its economic potential could reach up to $4.4 trillion annually.
Despite these impressive achievements, AI remains significantly limited in certain areas of analytics. It still cannot produce reliable long-term economic forecasts and struggles to predict sudden market shifts. Industrial equipment data analytics is another field where AI still falls short.
I have been working in industrial analytics for over 10 years and have watched the sector being transformed by new technologies. Today, artificial intelligence can detect even the slightest signs of malfunction. But I am convinced that AI still cannot work independently: in predictive maintenance, the role of the human analyst remains critical.
How AI is involved in predictive maintenance
Predictive maintenance forecasts equipment failures by leveraging historical and real-time data from IIoT sensors, along with machine learning and artificial intelligence.
Temperature, vibration, load, and pressure: AI algorithms are trained on all of these equipment performance indicators. They analyze real-time data streams and detect the patterns that preceded past failures. AI systems can capture even the slightest deviations from normal operating conditions that would go unnoticed by humans, often at the moment a defect is only beginning to emerge.
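To make the idea concrete, here is a minimal sketch of this kind of deviation detection in Python. The sensor values, the simple z-score rule, and the threshold are illustrative assumptions of mine; real PdM systems rely on far more sophisticated models.

```python
import numpy as np

def detect_deviations(healthy, live, threshold=4.0):
    """Flag live readings that stray from the healthy baseline.

    healthy:   readings recorded while the machine was known to be in good shape
    live:      the incoming stream to monitor
    threshold: number of standard deviations treated as abnormal
    """
    mean, std = healthy.mean(), healthy.std()
    z = (live - mean) / std
    return np.where(np.abs(z) > threshold)[0], z

# Illustrative numbers only: vibration RMS in mm/s, sampled once a minute.
rng = np.random.default_rng(0)
healthy = rng.normal(2.0, 0.05, 1_000)      # a period of known-normal operation
live = rng.normal(2.0, 0.05, 120)           # two hours of new readings
live[90:] += np.linspace(0.0, 0.6, 30)      # a defect slowly starts to emerge

alerts, z = detect_deviations(healthy, live)
for i in alerts:
    print(f"minute {i}: {live[i]:.2f} mm/s (z = {z[i]:.1f})")
```

Even this toy rule flags the emerging defect well before the vibration level looks alarming to the naked eye; the hard part, as discussed below, is deciding whether such a deviation actually means trouble.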
Modern PdM systems not only analyze the causes that led to a failure but also suggest preventive actions the maintenance team can take, such as reducing the load on the equipment, replacing a part, or changing the lubricant. In this way, issues are resolved before they escalate into costly accidents. Among companies that have implemented predictive maintenance in their operations, 95% report financial benefits, and 27% achieved a return on their investment in less than a year.
However, AI systems still lack full autonomy, and engineer-analysts remain a critical part of predictive maintenance workflows. There are three main reasons why AI, for now, cannot fully replace human expertise:
- Lack of training data

We all know that AI models require vast amounts of historical (and high-quality!) data for training. With industrial equipment, the situation is more complex: even by modest estimates, there can be millions of defect cases. Yet when we need data in which the equipment type, the defect's stage of development, the operating conditions, and other parameters all match a specific situation, such data turns out to be scarce. If the equipment is new or rare, historical failure data may be entirely absent. In such cases, it is the engineer's expertise that enables well-founded decisions.

- AI lacks contextual awareness

While we are busy counting how much we saved on Black Friday deals, pleased that we bought everything we needed (and some things we didn't), fulfillment centers are just getting started. Conveyor lines run at full capacity, and even a minor defect in one of the bearings will degrade faster under that load. The result: a sudden breakdown, a line stoppage, and complaints from customers whose orders are delayed. Increased equipment load during peak periods like Black Friday is context, and AI may fail to take it into account. An AI system tracks trends and reacts to changes in equipment behavior, but it cannot always link those changes to why and how operating conditions are shifting. This complicates accurate diagnostics and makes it harder to identify the root cause of a failure. For more reliable conclusions, it needs data covering a wide range of scenarios, and there may be hundreds of them.

- Data quality issues

IIoT technologies are radically transforming the approach to maintenance, but how well they perform depends directly on the quality of the data transmitted by sensors, and here even the most advanced algorithm can fail. Production data can be noisy, incomplete, or distorted. Why does this happen? For example, vibration sensors may pick up extraneous oscillations transmitted from neighboring equipment. AI may interpret them as a sign of a malfunction and issue a false alert. If no human who knows that the neighboring machine is now running at higher power assesses the situation, the maintenance team will, at best, waste time on unnecessary checks. The long-term consequence of such incidents is that the team may lose trust in the system and start ignoring alerts. Sensor data may also be lost because of a connection failure or because the battery in a wireless sensor has run out, and an improperly installed or calibrated sensor will likewise produce false readings. An engineer-analyst can interpret such data in the context of the specific production process and distinguish a real malfunction from a measurement error (a minimal sketch of such sanity checks follows this list).
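As promised above, here is a minimal sketch of the kind of plausibility checks an engineer's knowledge often gets encoded into before readings ever reach the model. The limits, field names, and categories are hypothetical placeholders, not part of any particular product.

```python
import math

# Hypothetical plausibility limits an engineer might configure for one sensor type.
VIBRATION_LIMITS_MM_S = (0.0, 50.0)   # readings outside this range are physically implausible
MAX_GAP_SECONDS = 300                 # longer silences suggest a dead battery or lost connection

def validate_reading(value, seconds_since_last):
    """Classify a single vibration reading before it is fed to the model."""
    if value is None or (isinstance(value, float) and math.isnan(value)):
        return "missing"
    if not VIBRATION_LIMITS_MM_S[0] <= value <= VIBRATION_LIMITS_MM_S[1]:
        return "implausible"          # likely a faulty or miscalibrated sensor
    if seconds_since_last > MAX_GAP_SECONDS:
        return "stale"                # data gap: treat any trend with caution
    return "ok"

# Example: normal readings mixed with one spurious value, one long gap, and one dropout.
readings = [(2.1, 60), (2.2, 60), (250.0, 60), (2.1, 900), (float("nan"), 60)]
for value, gap in readings:
    print(f"{value} -> {validate_reading(value, gap)}")
```

Such rules catch only the obvious problems; deciding whether a plausible-looking reading reflects the machine or its neighbor still takes a person who knows the shop floor.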
How much data does AI need?
Advanced predictive maintenance systems, depending on the number of IIoT sensors they work with, can collect billions of equipment performance measurements every day. Algorithms scan this data for patterns and flag those that might indicate a defect. However, this is only a preliminary diagnosis; it must still be verified by experienced analysts.
So why can't AI guarantee 100% diagnostic accuracy yet? Let's take bearings as an example.
Bearings are present in nearly all industrial equipment, from motors to conveyors, and account for around 40% of equipment failures. Their condition is assessed through vibration data captured by IIoT sensors. These sensors transmit a signal to the PdM system, essentially an audio-like recording of the machine's mechanical hum. Using a mathematical algorithm called the Fast Fourier Transform, this signal is converted from the time domain to the frequency domain. A neural network, followed by a human engineer, then analyzes the vibration data in both the time and frequency domains to assess the condition of the bearing.
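The time-to-frequency conversion itself is standard. Here is a small sketch using NumPy's FFT; the sampling rate, the 30 Hz shaft frequency, and the 160 Hz "fault" tone are made-up illustration values, not real bearing frequencies.

```python
import numpy as np

fs = 10_000                         # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)       # one second of signal

# Toy vibration signal: shaft rotation at 30 Hz plus a weak 160 Hz tone
# standing in for a bearing fault frequency, buried in sensor noise.
signal = (np.sin(2 * np.pi * 30 * t)
          + 0.1 * np.sin(2 * np.pi * 160 * t)
          + 0.05 * np.random.default_rng(1).normal(size=t.size))

# Fast Fourier Transform: time domain -> frequency domain.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The strongest spectral lines reveal the periodic components of the vibration.
for i in np.argsort(spectrum)[-2:][::-1]:
    print(f"{freqs[i]:6.1f} Hz  amplitude {spectrum[i]:.3f}")
```

In the spectrum, the weak fault tone stands out clearly even though it is invisible in the raw waveform, which is exactly why frequency-domain analysis is the workhorse of bearing diagnostics.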
What follows are extremely approximate calculations designed to illustrate the sheer scale of the challenge facing AI developers.
Let's base our model on vibration signal components measured along three axes: X, Y, and Z. Each measurement consists of 10,000 points in the frequency spectrum (a typical example). Thus, the input vector for the neural network contains 30,000 numbers (10,000 spectral values for each of the three axes).
For tasks with a large number of input parameters, a common rule of thumb is that the minimum number of training examples should be 10 to 50 times the dimensionality of the input vector; with a 30,000-value input, that already means 300,000 to 1.5 million labeled examples. This helps prevent overfitting and ensures robustness to noise. However, this estimate does not take into account the operational context of the bearing or other important factors that affect the amount of data needed to train the model.
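As a rough sanity check, here is that estimate in code. The 10-to-50x heuristic is the same rule of thumb quoted above, not a hard guarantee, and the spectrum sizes simply follow the assumptions of this example.

```python
import numpy as np

points_per_axis = 10_000                     # spectral values per axis, as assumed above
axes = ("X", "Y", "Z")

# One training example: concatenate the three per-axis spectra into one input vector.
spectra = {axis: np.zeros(points_per_axis) for axis in axes}   # placeholder spectra
input_vector = np.concatenate([spectra[axis] for axis in axes])
input_dim = input_vector.size                                  # 30,000

# Rule-of-thumb lower bound on labeled training examples: 10-50x the input dimension.
low, high = 10 * input_dim, 50 * input_dim
print(f"input dimension: {input_dim:,}")
print(f"training examples needed: {low:,} to {high:,}")
```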
In the table, I list these factors with approximate values to illustrate how many cases the neural network might need in order to accurately recognize and classify bearing defects.
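Since the table itself is not reproduced here, the factor counts below are purely hypothetical placeholders. They only show the mechanism: if each combination of operating conditions needs its own representative examples, the baseline estimate multiplies very quickly.

```python
# Hypothetical context factors (placeholder counts, not values from the table):
context_factors = {
    "bearing types": 5,
    "defect types and stages": 8,
    "load and speed regimes": 4,
    "mounting and installation variants": 3,
}

required_cases = 300_000   # rule-of-thumb baseline from the estimate above
for name, variants in context_factors.items():
    required_cases *= variants
    print(f"x {variants:2d} {name:36s} -> {required_cases:,} cases")
```

Even with these modest placeholder counts, the requirement climbs into the hundreds of millions of labeled cases, which is exactly the kind of data no plant has and why the engineer-analyst is not going anywhere soon.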