Artificial Intelligence Hears the Sound of Healthy Machines

Examples of abnormal signals: raw data, log-spectrogram, and the obtained coefficient distribution per level for two abnormal measurements of slide rail 0 at SNR 0 dB. Credit: DOI: 10.1073/pnas.2106598119

Sounds offer crucial information about how well a device is running. ETH researchers have now developed a new machine learning method that detects whether a machine is “healthy” or needs maintenance.

Whether railway wheels or generators in a power plant, whether pumps or valves: they all make sounds. To experienced ears, these noises even have a meaning: devices, machines, equipment, or rolling stock sound different when they are working correctly than when they have a defect or fault.

The sounds they make therefore give professionals useful clues as to whether a device is in good, “healthy” condition or whether it will soon need maintenance or immediate repair. Those who recognize in time that a machine sounds faulty can, depending on the situation, intervene before an expensive defect or further damage occurs.

The monitoring and analysis of sounds have been gaining importance in the operation and maintenance of technical infrastructure, especially since recording tones, noises, and acoustic signals has become relatively inexpensive with modern microphones.

To extract the required information from such sounds, proven methods of signal processing and data analysis have been established. One of them is the wavelet transform. Mathematically, tones, sounds, and noise can be represented as waves.

The wavelet transform decomposes a function into a set of wavelets: wave-like oscillations localized in time. The underlying idea is to determine how much of a given wavelet is contained in a signal over a specified scale and region. Although such approaches have been quite successful, applying them can still be time-consuming.
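To make the idea concrete, here is a minimal sketch of a classical discrete wavelet decomposition using the PyWavelets library; the library, the “db4” wavelet, and the synthetic signal are illustrative assumptions, since the ETH method makes the transform itself learnable rather than using fixed filters.

```python
# A minimal sketch of a fixed-filter discrete wavelet decomposition
# (illustrative only; the paper's approach learns the filters end to end).
import numpy as np
import pywt

# Synthetic "machine sound": a tone plus broadband noise, sampled at 16 kHz.
fs = 16_000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(fs)

# Decompose the signal into wavelet coefficients over several levels.
coeffs = pywt.wavedec(signal, wavelet="db4", level=5)

# Energy per decomposition level: a compact summary of how much of each
# time-frequency band is present in the signal.
for level, c in enumerate(coeffs):
    print(f"level {level}: {len(c)} coefficients, energy {np.sum(c**2):.2f}")
```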

Detecting faults at an early stage

Now ETH researchers have developed a machine learning technique that makes the wavelet transform entirely learnable. The new approach is particularly suited to high-frequency signals, such as sound and vibration, and makes it possible to identify immediately whether a machine sounds “healthy” or not. The method, developed by postdoctoral researchers Gabriel Michau and Gaëtan Frusque together with Olga Fink, Professor of Intelligent Maintenance Systems, and now published in the journal PNAS, combines signal processing and machine learning in an innovative way.

It enables an intelligent algorithm, i.e. a computational rule, to carry out acoustic monitoring and sound analysis automatically. Thanks to its resemblance to the classical wavelet transform, the proposed machine learning approach offers good interpretability of its results.
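To illustrate the idea of a “learnable” wavelet transform, the sketch below shows one wavelet-like level built from a pair of 1-D convolution filters in PyTorch, initialized like a Haar low-pass/high-pass pair but free to be trained. The class name, kernel size, and initialization are assumptions for illustration, not the authors’ exact architecture.

```python
# Conceptual sketch of one "learnable wavelet" level: two 1-D filters with
# stride 2, mimicking the downsampling of a discrete wavelet transform level.
import torch
import torch.nn as nn


class LearnableWaveletLevel(nn.Module):
    def __init__(self, kernel_size: int = 8):
        super().__init__()
        # Two filters: approximation (low-pass) and detail (high-pass).
        self.filters = nn.Conv1d(1, 2, kernel_size, stride=2,
                                 padding=kernel_size // 2, bias=False)
        with torch.no_grad():
            self.filters.weight.zero_()
            # Haar-like initialization on the first two taps (assumption).
            self.filters.weight[0, 0, :2] = 0.5   # low-pass
            self.filters.weight[1, 0, 0] = 0.5    # high-pass
            self.filters.weight[1, 0, 1] = -0.5

    def forward(self, x):
        # x: (batch, 1, time) -> (batch, 2, time // 2)
        return self.filters(x)


# Stacking such levels on the low-pass output yields a cascade resembling a
# wavelet decomposition, but whose filters are trained end to end.
level = LearnableWaveletLevel()
out = level(torch.randn(4, 1, 16_000))
print(out.shape)  # torch.Size([4, 2, 8001])
```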

The researchers’ goal is that professionals operating machines in industry will soon be able to use a tool that automatically monitors the equipment and alerts them in time, without requiring any special prior knowledge, whenever abnormal or “unhealthy” sounds occur. The new machine learning method applies not only to different types of machines but also to different kinds of signals, sounds, or vibrations. For example, it also recognizes frequencies that humans cannot hear naturally, such as high-frequency signals or ultrasound.

Nonetheless, the learning method does not simply treat all types of signals alike. Instead, the researchers designed it to detect the subtle differences between the various kinds of sound and to produce machine-specific findings. This is not trivial, because there are usually no faulty samples to learn from.


Focusing on healthy sounds

Collecting a large number of representative sound examples from faulty machines in real industrial applications is usually not practical, because faults occur only rarely. It is therefore not feasible to teach the algorithm what fault noises might sound like and how they differ from healthy sounds.

The researchers therefore trained the algorithms in such a way that the machine learning model learned how a machine normally sounds when it is running well, and then recognizes when a sound deviates from normal.

To do this, they used a variety of sound data from pumps, fans, valves, and slide rails and chose an “unsupervised learning” approach, in which they did not “tell” the algorithm what to learn; instead, the computer learned the relevant patterns on its own. In this way, Olga Fink and her team enabled the learning method to recognize related sounds within a given type of machine and to distinguish between certain types of faults on that basis.
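The following sketch illustrates this “healthy-only” training idea: fit a simple statistical model on features extracted from healthy recordings, then flag recordings whose features deviate too far. The chosen feature (per-level wavelet energy) and the simple Gaussian-style score are illustrative assumptions, not the paper’s actual model.

```python
# Hedged sketch of unsupervised anomaly detection trained on healthy data only.
import numpy as np
import pywt


def wavelet_energy_features(signal, level=5):
    # Log-energy per wavelet decomposition level as a compact feature vector.
    coeffs = pywt.wavedec(signal, "db4", level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])


def fit_healthy_model(healthy_signals):
    # Mean and spread of healthy features; no faulty samples are needed.
    feats = np.stack([wavelet_energy_features(s) for s in healthy_signals])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6


def anomaly_score(signal, mean, std):
    # Larger score = further from the distribution of healthy sounds.
    z = (wavelet_energy_features(signal) - mean) / std
    return float(np.sqrt(np.mean(z ** 2)))


# Usage: train on healthy recordings only, then score new measurements.
rng = np.random.default_rng(0)
healthy = [rng.standard_normal(16_000) for _ in range(20)]
mean, std = fit_healthy_model(healthy)
print(anomaly_score(rng.standard_normal(16_000), mean, std))      # low score
print(anomaly_score(5 * rng.standard_normal(16_000), mean, std))  # high score
```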

Even if a dataset with faulty samples had been available, and the authors had been able to train their algorithms on both healthy and faulty sound samples, they could never have been sure that such a labeled data collection contained all sound and fault variants.

The sample could have been incomplete, and their learning method might have missed essential fault sounds. Furthermore, the same type of machine can produce very different sounds depending on the intensity of use or the ambient conditions, so that even technically almost identical defects can sound quite different on a given machine.

Learning from bird sounds

However, the algorithm is not only applicable to sounds made by machines. The researchers also tested their algorithms on distinguishing between different bird songs, using recordings made by bird watchers. The algorithms had to learn to distinguish the songs of particular bird species, regardless of which kind of microphone the bird watchers had used: “Machine learning is intended to recognize the bird songs, not to assess the recording method,” says Gabriel Michau.

This learning effect is also important for technical infrastructure: with machines, too, the algorithms have to be agnostic to mere background noise and to the influence of the recording method if they are to identify the relevant sounds.

For a future industrial application, it is crucial that the machine learning method can identify the subtle differences between sounds: to be helpful and reliable for professionals in the field, it must neither raise too many false alarms nor miss relevant sounds. “With our study, we could demonstrate that our machine learning technique detects the anomalies among the sounds and that it is flexible enough to be applied to different types of signals and tasks,” says Olga Fink.
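One common way to balance false alarms against missed faults is to set the alarm threshold from the distribution of anomaly scores on held-out healthy recordings; the sketch below shows this idea, with the percentile choice and the simulated scores being assumptions rather than the authors’ procedure.

```python
# Choose an alarm threshold that targets a given false-alarm rate on
# held-out healthy data (illustrative sketch, not the paper's method).
import numpy as np


def choose_threshold(healthy_scores, target_false_alarm_rate=0.01):
    # e.g. 1% false-alarm rate -> 99th percentile of healthy scores.
    return float(np.quantile(healthy_scores, 1.0 - target_false_alarm_rate))


# Simulated anomaly scores from healthy recordings (hypothetical data).
healthy_scores = np.random.default_rng(1).gamma(shape=2.0, scale=1.0, size=1000)
threshold = choose_threshold(healthy_scores)
print(f"alarm if score > {threshold:.2f}")
```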

An important feature of their learning method is that it can also follow how the sounds evolve, detecting indications of possible faults from the way the sounds change over time. This opens up many interesting applications.
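As a small illustration of tracking sound evolution over time, the sketch below smooths a stream of anomaly scores with a moving average so a gradual drift toward “unhealthy” sounds becomes visible before a hard alarm threshold is crossed; the simulated scores and the seven-sample window are assumptions, not taken from the paper.

```python
# Track anomaly scores over time with a moving average (illustrative only).
import numpy as np

scores = np.concatenate([                                # simulated daily scores
    np.random.default_rng(2).normal(1.0, 0.1, 60),       # stable, healthy period
    np.linspace(1.0, 2.0, 30),                           # slow degradation
])
window = 7
trend = np.convolve(scores, np.ones(window) / window, mode="valid")
print("latest 7-day average score:", round(float(trend[-1]), 2))
```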


More information:

Gabriel Michau et al, Fully learnable deep wavelet transform for unsupervised monitoring of high-frequency time series, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2106598119

Read the original article on Techxplore.
