The bearing condition monitoring industry is in the middle of an architectural transition. For the past decade, the default architecture has been straightforward: sensors capture vibration data, transmit it to a cloud platform, and the cloud performs analysis, alerting, and reporting. This architecture works — but it has limitations that become apparent in certain deployment environments and at certain scales.
Edge AI — performing machine learning inference and signal processing on local hardware (the sensor itself or an on-site gateway) rather than in the cloud — addresses several of these limitations. But it introduces its own trade-offs. Understanding when edge processing provides genuine advantages over cloud-only architectures helps engineers make informed architecture decisions rather than following marketing trends.
The Case Against Cloud-Only Architectures
Latency
In a cloud-only architecture, vibration data travels from the sensor to a local gateway, across the internet to a cloud server, through an analysis pipeline, and back to the operator as an alert. Round-trip latency for this process is typically measured in minutes to hours — acceptable for trending and maintenance planning, but inadequate for applications that require near-real-time response.
For bearing monitoring specifically, latency matters most during rapid degradation events. A bearing that transitions from “watch” to “alarm” to “danger” in under an hour needs faster response than a system that processes data in batch intervals. Edge processing reduces detection-to-alert latency from minutes to seconds by eliminating the network round trip.
Bandwidth and Data Costs
High-frequency vibration data is large. A single accelerometer sampling at 25.6 kHz generates roughly 50 KB per second of raw data — about 4.3 GB per day. For a deployment with 50 sensors, that is over 200 GB per day of raw data that must be transmitted, stored, and processed in the cloud.
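The arithmetic behind these figures is easy to verify; a quick sketch, assuming 16-bit samples:

```python
SAMPLE_RATE_HZ = 25_600    # accelerometer sampling rate
BYTES_PER_SAMPLE = 2       # assuming a 16-bit ADC
SENSORS = 50

bytes_per_second = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE       # 51,200 B/s, i.e. ~50 KB/s
gb_per_day = bytes_per_second * 86_400 / 1e9               # ~4.4 GB/day per sensor
fleet_gb_per_day = gb_per_day * SENSORS                    # ~220 GB/day for 50 sensors

print(f"{bytes_per_second / 1024:.1f} KB/s per sensor, "
      f"{gb_per_day:.1f} GB/day, fleet: {fleet_gb_per_day:.0f} GB/day")
```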
In industrial environments with ethernet connectivity, bandwidth may not be a constraint. But in remote locations — offshore platforms, mining sites, marine vessels, railway rolling stock — data transmission depends on cellular, satellite, or limited-bandwidth radio links where per-megabyte costs are measured in dollars, not fractions of cents.
Edge processing addresses this by extracting features locally — spectral peaks, defect frequency amplitudes, health scores — and transmitting only the compact results. A 10-second capture at 25.6 kHz is roughly 512 KB of raw waveform; a 200-byte feature vector summarizing it represents a compression ratio of over 2,500:1. This makes continuous monitoring feasible on bandwidth-constrained connections.
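A minimal sketch of this local feature extraction, using NumPy. The specific feature set, peak count, and windowing below are illustrative choices, not a standard:

```python
import numpy as np

def extract_features(waveform: np.ndarray, fs: float, n_peaks: int = 8) -> dict:
    """Reduce a raw vibration capture to a compact feature summary:
    overall RMS, crest factor, and the strongest spectral peaks
    (frequency, amplitude pairs) -- a few hundred bytes instead of
    hundreds of kilobytes of waveform."""
    rms = float(np.sqrt(np.mean(waveform ** 2)))
    crest = float(np.max(np.abs(waveform)) / rms)

    spectrum = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    top = np.argsort(spectrum)[-n_peaks:][::-1]        # indices of the largest bins
    peaks = [(float(freqs[i]), float(spectrum[i])) for i in top]

    return {"rms": rms, "crest": crest, "peaks": peaks}

# demo: a 10 s capture at 25.6 kHz (256,000 samples) with a 160 Hz tone
fs = 25_600
rng = np.random.default_rng(0)
t = np.arange(10 * fs) / fs
capture = np.sin(2 * np.pi * 160 * t) + 0.05 * rng.standard_normal(t.size)
features = extract_features(capture, fs)
```

Serialized, `features` is a few hundred bytes; the 256,000-sample capture it summarizes is half a megabyte.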
Connectivity Dependency
Cloud-only systems stop working when the network connection fails. For permanently installed industrial plants with redundant network infrastructure, connectivity downtime is rare. For mobile assets (railway, marine, trucking), remote sites (mining, oil and gas), and facilities with unreliable network infrastructure, connectivity gaps are routine.
An edge-processing architecture continues monitoring, detecting anomalies, and generating alerts regardless of network status. Results are cached locally and synchronized when connectivity returns — but critical alerts are generated immediately, on-site, without waiting for the cloud.
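This store-and-forward pattern can be sketched in a few lines. The `uploader` and `alert_handler` callbacks are hypothetical placeholders for the real transport and alerting code:

```python
import collections

class StoreAndForward:
    """Cache monitoring results locally; flush to the cloud when the link is up.

    Critical alerts fire immediately via the local handler, independent of
    connectivity. `uploader(result)` is assumed to return True on success."""

    def __init__(self, uploader, alert_handler, max_cached: int = 10_000):
        self.uploader = uploader
        self.alert_handler = alert_handler
        self.queue = collections.deque(maxlen=max_cached)  # oldest dropped when full

    def record(self, result: dict):
        if result.get("severity") in ("alert", "danger"):
            self.alert_handler(result)       # local alert: no network round trip
        self.queue.append(result)
        self.flush()

    def flush(self):
        while self.queue:
            if not self.uploader(self.queue[0]):
                break                        # link down; retry on the next record
            self.queue.popleft()

# demo: link starts down, the alert still fires locally, data syncs once it returns
alerts, sent = [], []
link = {"up": False}

def uploader(result):
    if link["up"]:
        sent.append(result)
        return True
    return False

saf = StoreAndForward(uploader, alerts.append)
saf.record({"severity": "danger", "bearing": "pump-3-DE"})  # alert raised, upload deferred
link["up"] = True
saf.record({"severity": "good", "bearing": "pump-3-DE"})    # both cached results sync
```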
Data Sovereignty and Security
Some organizations — military, defense contractors, certain government agencies, and companies operating under strict data governance policies — cannot send equipment operational data to third-party cloud platforms. Edge processing keeps sensitive data on-premises, transmitting only anonymized summaries or alerts to external systems if needed.
Even for organizations without strict data governance requirements, keeping raw vibration data local reduces the attack surface for data breaches and eliminates dependency on a third-party cloud provider’s security posture and business continuity.
What Edge AI Actually Does for Bearing Monitoring
The term “edge AI” covers a range of processing capabilities. For bearing condition monitoring, the relevant edge AI functions are:
Anomaly Detection
Statistical models running on the gateway learn the normal vibration signature of each monitored bearing during a baseline period. Subsequent measurements are compared against this baseline, and deviations that exceed a statistical threshold trigger an anomaly alert. This does not require knowing the bearing model or its defect frequencies — the system detects change from normal, which catches unexpected failure modes that frequency-based monitoring might miss.
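A minimal baseline detector along these lines, scoring each measurement by its worst per-feature z-score. The 3-sigma threshold is a common starting point, not a universal constant:

```python
import numpy as np

class BaselineAnomalyDetector:
    """Learn a per-bearing baseline from feature vectors (e.g. band energies,
    RMS, crest factor), then flag measurements that deviate beyond a
    statistical threshold -- no bearing geometry required."""

    def __init__(self, threshold_sigma: float = 3.0):
        self.threshold = threshold_sigma
        self.baseline = []
        self.mean = self.std = None

    def learn(self, features):
        self.baseline.append(np.asarray(features, dtype=float))

    def finalize(self):
        data = np.vstack(self.baseline)
        self.mean = data.mean(axis=0)
        self.std = data.std(axis=0) + 1e-12     # guard against zero variance

    def score(self, features) -> float:
        z = np.abs((np.asarray(features) - self.mean) / self.std)
        return float(z.max())                   # worst-offending feature

    def is_anomalous(self, features) -> bool:
        return self.score(features) > self.threshold

# demo: learn a baseline of two features, then score new measurements
rng = np.random.default_rng(42)
det = BaselineAnomalyDetector()
for _ in range(200):
    det.learn([1.0, 0.20] + 0.02 * rng.standard_normal(2))
det.finalize()
```

A measurement near the baseline scores well below threshold; a feature that has drifted far outside its learned range trips the alert, whatever the underlying failure mode.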
Bearing Defect Classification
When an anomaly is detected, defect classification algorithms analyze the spectral content to determine the likely fault type: outer race (energy at BPFO and harmonics), inner race (energy at BPFI with shaft-speed sidebands), rolling element (energy at 2× BSF), or cage (modulation at FTF). This classification runs on the gateway using the bearing geometry and current shaft speed — both stored in the local configuration database.
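The defect frequencies follow from standard kinematic formulas, and a crude classifier simply asks which of them carries the most spectral energy. The geometry below is loosely based on a common deep-groove ball bearing; a real classifier would also weigh harmonics and sidebands, as described above:

```python
import math

def defect_frequencies(shaft_hz: float, n_balls: int,
                       ball_d: float, pitch_d: float,
                       contact_deg: float = 0.0) -> dict:
    """Standard bearing defect frequencies from geometry and shaft speed.
    ball_d and pitch_d must share the same units."""
    r = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    return {
        "FTF":  0.5 * shaft_hz * (1 - r),
        "BPFO": 0.5 * n_balls * shaft_hz * (1 - r),
        "BPFI": 0.5 * n_balls * shaft_hz * (1 + r),
        "BSF":  0.5 * (pitch_d / ball_d) * shaft_hz * (1 - r * r),
    }

def classify(spectrum_energy_at, freqs: dict) -> str:
    """Pick the fault type whose characteristic frequency carries the most
    energy. `spectrum_energy_at(f)` is an assumed callback returning band
    energy around frequency f (e.g. summed FFT magnitude in a narrow window)."""
    candidates = {
        "outer race":      freqs["BPFO"],
        "inner race":      freqs["BPFI"],
        "rolling element": 2 * freqs["BSF"],
        "cage":            freqs["FTF"],
    }
    return max(candidates, key=lambda k: spectrum_energy_at(candidates[k]))

# demo: 9 balls, 7.94 mm ball diameter, 39.04 mm pitch diameter, 30 Hz shaft
freqs = defect_frequencies(30.0, 9, 7.94, 39.04)
```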
Health Scoring
Health scores condense the multi-dimensional vibration assessment into a single value (typically 0–100) that non-specialist operators can interpret. The edge gateway computes health scores from multiple indicators — overall vibration level, defect frequency amplitudes, spectral shape changes, crest factor — and provides a simple “good / watch / alert / danger” classification without requiring the operator to interpret FFT spectra.
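One possible shape for such a score, assuming each indicator has already been normalized against its alarm limit. The weights and band boundaries below are illustrative, not standardized:

```python
def health_score(indicators: dict, weights: dict) -> float:
    """Collapse normalized condition indicators into a 0-100 score.
    Each indicator is expected in [0, 1], where 0 is healthy and 1 is
    at the alarm limit (e.g. measured amplitude / alarm amplitude)."""
    total_w = sum(weights.values())
    badness = sum(weights[k] * min(indicators[k], 1.0) for k in weights) / total_w
    return round(100.0 * (1.0 - badness), 1)

def classify_health(score: float) -> str:
    """Map a score to the operator-facing state bands."""
    if score >= 80: return "good"
    if score >= 60: return "watch"
    if score >= 40: return "alert"
    return "danger"

# demo: three indicators, weighted toward vibration level and defect amplitude
score = health_score(
    {"overall_rms": 0.1, "bpfo_amp": 0.2, "crest": 0.1},
    {"overall_rms": 0.4, "bpfo_amp": 0.4, "crest": 0.2},
)
```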
Trend Analysis
The gateway maintains historical health scores and spectral summaries for each bearing, enabling local trend analysis without cloud connectivity. Trend projections — “at the current degradation rate, this bearing will reach the alarm threshold in approximately 6 weeks” — run on the gateway using locally stored data.
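A projection of this kind can be as simple as a linear fit over recent health scores. A production system might prefer robust or exponential fits, but the structure is the same:

```python
import numpy as np

def weeks_to_threshold(times_wk, scores, threshold: float = 40.0):
    """Fit a line to recent health scores and return the estimated weeks
    until the score crosses `threshold`. Returns None if the trend is
    flat or improving."""
    slope, intercept = np.polyfit(times_wk, scores, 1)
    if slope >= 0:
        return None
    t_cross = (threshold - intercept) / slope
    return max(0.0, t_cross - times_wk[-1])

# demo: score declining by 2 points/week from 90; alert threshold at 40
weeks = weeks_to_threshold([0, 1, 2, 3, 4, 5], [90, 88, 86, 84, 82, 80])
```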
When Cloud Still Makes Sense
Edge AI does not replace cloud processing for all use cases. The cloud retains advantages in several areas:
- Fleet-wide analytics. Comparing bearing performance across hundreds of machines at multiple sites — identifying which bearing models fail earliest, which operating conditions accelerate degradation, which maintenance practices correlate with longer bearing life — requires aggregating data from many locations. The cloud is the natural platform for fleet-level analytics.
- Model training. Machine learning models used for anomaly detection and classification are trained on large datasets of labeled vibration data. This training happens in the cloud (or on dedicated compute infrastructure), and the trained models are then deployed to edge devices. The edge performs inference; the cloud performs training.
- Long-term archival. Storing years of vibration data for regulatory compliance or historical analysis is more practical and cost-effective in cloud storage than on local gateway hardware.
- Remote access and reporting. Cloud dashboards provide remote visibility for engineering managers, reliability teams, and third-party service providers who need to review equipment condition without being physically on-site.
The Hybrid Architecture
The most practical architecture for most deployments is hybrid: edge processing for real-time monitoring, anomaly detection, and alerting; cloud for fleet analytics, model training, long-term storage, and remote access.
In this architecture, the edge gateway handles time-critical functions autonomously — it does not wait for the cloud to detect a bearing fault or generate an alert. The cloud receives summarized results (health scores, anomaly flags, spectral summaries) on a regular schedule and provides the broader analytical context that edge devices cannot generate in isolation.
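To make the bandwidth asymmetry concrete, a summarized edge-to-cloud payload might look like the following. The field names and values are illustrative, not a defined schema:

```python
import json

# one bearing's periodic summary: a few hundred bytes instead of
# gigabytes of raw waveform
payload = {
    "gateway_id": "gw-07",
    "bearing_id": "pump-3-DE",
    "timestamp": 1700000000,
    "health_score": 72.5,
    "state": "watch",
    "anomaly": False,
    "defect_hypothesis": None,
    "spectral_summary": [[107.5, 0.42], [162.5, 0.06]],  # (Hz, amplitude) peaks
}
encoded = json.dumps(payload).encode()
print(len(encoded), "bytes")
```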
The key design principle is that the edge must function independently. If the cloud goes down — or if the network connection is interrupted for days — the monitoring system continues to protect the equipment. The cloud enhances the system; it does not enable it.
For a deeper technical discussion of edge computing architectures for bearing monitoring, see our article on why local processing beats cloud-only architectures.
Evaluating Edge AI Claims
As edge AI has become a marketing term, it is worth asking specific questions when evaluating systems that claim edge processing capabilities:
- What exactly runs on the edge? Some systems perform FFT computation locally but transmit full spectra to the cloud for anomaly detection. Others perform complete anomaly detection and classification locally. The distinction matters for latency and connectivity dependency.
- Does the system function without cloud connectivity? This is the definitive test of edge processing. If the system cannot detect a bearing fault when the internet is down, it is not truly an edge architecture — it is cloud processing with local data buffering.
- How are models updated? Edge AI models must be updated as new failure modes are encountered and as the system learns from more data. Ask how model updates are deployed — over the air, via manual update, or through the cloud — and whether updates require system downtime.
- What hardware runs the inference? Edge AI computation requires more processing power than simple data acquisition. Ask what processor runs on the gateway, what its power consumption is, and whether it can handle the computational load of monitoring all connected sensors simultaneously.
Edge AI for bearing condition monitoring is not a binary choice between edge and cloud. It is an architectural decision about where to place computation for optimal latency, reliability, bandwidth efficiency, and cost. For deployments where connectivity is reliable and unlimited, cloud processing may be sufficient. For deployments where latency matters, connectivity is intermittent, bandwidth is expensive, or data sovereignty is required, edge processing is not optional — it is essential.