Acoustic Sensor Mimics Human Ear, Accurately Hears Voices From Afar

A research team at the Korea Advanced Institute of Science and Technology (KAIST) has unveiled a biomimetic resonant acoustic sensor that can accurately detect voices at great distances. The sensor employs a multi-resonant, ultrathin piezoelectric membrane that mimics the basilar membrane of the human cochlea.

In 2018, Professor Keon Jae Lee demonstrated the initial concept of a flexible piezoelectric acoustic sensor, inspired by the way humans accurately detect distant voices through the cochlea's multi-resonant, trapezoidal basilar membrane and its roughly 20,000 hair cells. Unfortunately, those first acoustic sensors were too large for use in smartphones and other mobile devices.

Figure: (a) Schematic illustration of the basilar membrane-inspired flexible piezoelectric mobile acoustic sensor; (b) real-time voice biometrics based on machine-learning algorithms; (c) the world's first commercial production of a mobile-sized acoustic sensor.

Today, the KAIST team's flexible acoustic sensor is a miniaturized version of the original and can be embedded in a variety of portable products. The team fabricated a mobile-sized acoustic sensor using ultrathin, highly sensitive piezoelectric membranes. They found that an ultrathin polymer layer beneath the inorganic piezoelectric thin film broadens the resonant bandwidth so that seven channels cover the entire voice frequency range. Based on this finding, the team demonstrated the miniaturized acoustic sensor mounted in commercial smartphones and AI speakers for machine-learning-based biometric authentication and voice processing.
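As a rough illustration of the multi-channel idea (not the team's actual design), the sketch below models seven resonant channels as a band-pass filter bank spanning an assumed voice band of 100 Hz to 4 kHz; the device's real channel edges and sampling rate are not given in the article.

```python
# Hypothetical sketch: treating the sensor's seven resonant channels as a
# band-pass filter bank over the voice band. Band edges, sampling rate,
# and filter order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000                                        # assumed sampling rate (Hz)
N_CHANNELS = 7                                     # seven channels, as described above
edges = np.geomspace(100, 4000, N_CHANNELS + 1)    # log-spaced band edges (Hz)

def split_into_channels(voice: np.ndarray) -> np.ndarray:
    """Return a (7, n_samples) array: one band-limited signal per channel."""
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
        channels.append(lfilter(b, a, voice))
    return np.stack(channels)

# Example: a 440 Hz tone excites mainly the channel whose band contains 440 Hz.
t = np.linspace(0, 1, FS, endpoint=False)
bands = split_into_channels(np.sin(2 * np.pi * 440 * t))
print([round(float(np.sqrt(np.mean(c**2))), 3) for c in bands])  # per-channel RMS
```

Each channel responds strongly only near its own resonance, which is what gives the sensor its multi-channel output.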

According to the researchers, their resonant mobile acoustic sensor offers higher sensitivity and multi-channel signals compared with conventional single-channel condenser microphones. They also claim it achieves highly accurate, far-distance speaker identification from a small amount of voice training data. Reportedly, its speaker-identification error rate was 56% lower with 150 training samples and 75% lower with 2,800 training samples than that of a MEMS condenser device.
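For illustration only, the following sketch shows how a simple classifier could be trained on per-channel energy features for speaker identification; the synthetic data, seven-channel feature vectors, and logistic-regression model are assumptions, since the article does not describe the team's actual algorithm.

```python
# Hypothetical sketch of the machine-learning step: a basic classifier trained on
# per-channel energy "fingerprints" to identify speakers. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_SPEAKERS, N_SAMPLES, N_CHANNELS = 5, 150, 7   # 150 echoes the smaller dataset size cited

# Fake per-channel energy profiles: each speaker gets a distinct spectral signature.
profiles = rng.uniform(0.2, 1.0, size=(N_SPEAKERS, N_CHANNELS))
labels = rng.integers(0, N_SPEAKERS, size=N_SAMPLES)
features = profiles[labels] + 0.05 * rng.standard_normal((N_SAMPLES, N_CHANNELS))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"speaker-ID accuracy on held-out samples: {clf.score(X_test, y_test):.2f}")
```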

Professor Lee says, “Google has been targeting the Wolverine Project on far-distant voice separation from multi-users for next-generation AI user interfaces. I expect that our multi-channel resonant acoustic sensor with abundant voice information is the best fit for this application.” He has also set up Fronics Inc. in Korea and the US to commercialize the flexible acoustic sensor. For more details, visit KAIST.
