Connect the Dots: Ultrasonic Sensors Reduce Human-IoT Interface Friction

Created: 12/20/2017 - 18:15

(Image: Chirp Microsystems)

Sensors are not only IoT’s interface to the real world; they are also the human interface to IoT. As solution providers, we are tasked with making that interface as frictionless as possible. One way to reduce friction is to use an ultrasonic sensor that can detect motion, presence, range or gestures.

Last week, we discussed the fusion of 3D LiDAR and 2D camera images to make vehicles smarter and safer. LiDAR uses the time-of-flight (ToF) of laser pulses to sense range and depth. Depending on the application, it can be accurate to the millimeter and has a range of meters to kilometers. However, while useful, it is relatively expensive, consumes a lot of power, has a narrow field of view, and can be bulky and affected by ambient light.

The CH-101 and CH-201 ultrasonic sensors announced by Chirp Microsystems, on the other hand, are small, low cost, ultra-low power and immune to ambient noise. Also, as far as ultrasonic sensors go, they are the most accurate at sensing depth, resolving to below one millimeter while consuming only microamps of current. Range is limited to 1 meter for the CH-101 and 5 meters for the CH-201.

The applications Chirp envisions for its sensors don’t require long range, but instead take advantage of the wide dispersion of ultrasonic transducers, up to 180˚. To be clear, ultrasonic sensors comprise a transducer (speaker) and a microphone (sensor) that work in tandem: the transducer emits the ultrasonic sound wave, while the sensor picks up the reflection.

Like classic sonar, ultrasonic sensors use the time between wave emission and reflection detection to determine range, but multiple sensors can also track gestures. (Image: IEEE Spectrum)
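
As a quick back-of-the-envelope illustration of that ranging principle (a generic sketch, not Chirp’s firmware), the round-trip time of an ultrasonic pulse maps to distance through the speed of sound in air:

```c
#include <stdio.h>

/* Speed of sound in air at roughly 20 °C, in m/s (assumed constant here). */
#define SPEED_OF_SOUND_M_S 343.0

/* Convert a round-trip time of flight (microseconds) to a one-way range in
 * meters: the pulse travels out and back, so distance is half of speed * time. */
static double tof_to_range_m(double tof_us)
{
    return SPEED_OF_SOUND_M_S * (tof_us / 1e6) / 2.0;
}

int main(void)
{
    /* A round trip of about 5.83 ms corresponds to roughly 1 m of range. */
    printf("range = %.3f m\n", tof_to_range_m(5830.0));
    return 0;
}
```

In practice the speed of sound varies with temperature (roughly 0.6 m/s per °C), so real designs compensate for it.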

When used in a smartphone, for example, the sensor can detect proximity to the user’s ear and turn off the screen to save power. When used with an IoT device, it can detect when someone comes close and turn on a screen for manual input, consuming only 15 µW in wait mode.
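
A minimal sketch of how such a wake-on-approach loop might look; the function names, threshold and readings here are placeholders that simulate a sensor, not Chirp’s driver API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical glue code: the real driver API is Chirp's and is not shown
 * here, so this stub simply simulates a hand approaching the device. */
static uint32_t sensor_read_range_mm(void)
{
    static uint32_t range = 1000;
    if (range > 100)
        range -= 100;           /* fake an approaching target */
    return range;
}

static void display_set_on(bool on)
{
    printf("display %s\n", on ? "ON" : "off");
}

#define WAKE_THRESHOLD_MM 300u  /* assumed wake distance: 30 cm */

/* Wake-on-approach loop: keep the screen off until something enters the
 * detection zone, mirroring the low-power wait mode described above. */
int main(void)
{
    for (int i = 0; i < 10; i++) {
        uint32_t range_mm = sensor_read_range_mm();
        display_set_on(range_mm < WAKE_THRESHOLD_MM);
    }
    return 0;
}
```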

This manual input aspect is particularly interesting. Chirp has developed a proprietary, patent-pending gesture classification library based on machine learning and neural network algorithms. Used with the sensors, the software library enables intuitive, natural, gesture-based, low-friction user interfaces for IoT devices.

To sense gestures accurately, at least three sensors are required; a proprietary IC running Chirp’s trilateration algorithm then determines the hand’s location, direction, velocity and shape in 3D space. These are used as inputs to control an IoT device, or a system to which it’s connected, such as a drone or robot. The approach is also proving useful for virtual reality (VR) or augmented reality (AR) gaming and training applications.
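
Chirp’s implementation is proprietary, but the underlying trilateration math is standard: each sensor’s range measurement defines a sphere, and the intersection of the three spheres locates the target. Below is a minimal sketch assuming the three sensors share a plane; the sensor layout and ranges are purely illustrative:

```c
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } point3;

/* Minimal three-sphere trilateration (not Chirp's proprietary algorithm).
 * Sensors are assumed to sit in one plane: sensor 1 at the origin, sensor 2
 * at (d, 0, 0), sensor 3 at (i, j, 0); r1..r3 are the measured ranges to the
 * hand, all in meters. */
static point3 trilaterate(double d, double i, double j,
                          double r1, double r2, double r3)
{
    point3 p;
    p.x = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d);
    p.y = (r1 * r1 - r3 * r3 + i * i + j * j) / (2.0 * j) - (i / j) * p.x;

    double zsq = r1 * r1 - p.x * p.x - p.y * p.y;
    p.z = zsq > 0.0 ? sqrt(zsq) : 0.0;  /* hand assumed above the sensor plane */
    return p;
}

int main(void)
{
    /* Illustrative layout: sensors 10 cm apart in an L shape (d = 0.10,
     * i = 0, j = 0.10). By symmetry, a hand at (0.05, 0.05, 0.20) m is
     * equidistant from all three sensors. */
    double r = sqrt(0.05 * 0.05 + 0.05 * 0.05 + 0.20 * 0.20);
    point3 hand = trilaterate(0.10, 0.0, 0.10, r, r, r);
    printf("hand at (%.3f, %.3f, %.3f) m\n", hand.x, hand.y, hand.z);
    return 0;
}
```

Direction and velocity then follow from differencing successive position estimates over time.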

For IoT solution providers, it should be clear that the applications can extend far wider than even Chirp can imagine, which is usually the case with any new enabling technology. The combination of gesture, presence, range and motion sensing with low cost, low power, small size (3.5 x 3.5 mm, including the processor), a simple I2C serial output and 1.8-V operation opens up the imagination.
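
On a Linux host, reading such a sensor over I2C could look like the sketch below; the 7-bit address, register offset and byte order are placeholders for illustration and would come from the part’s datasheet:

```c
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Placeholder values for illustration only -- the actual 7-bit address and
 * register map come from the part's datasheet, not from this sketch. */
#define SENSOR_I2C_ADDR  0x45
#define RANGE_DATA_REG   0x02   /* hypothetical 16-bit range register */

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, SENSOR_I2C_ADDR) < 0) { perror("ioctl"); return 1; }

    /* Write the register pointer, then read two bytes back. */
    uint8_t reg = RANGE_DATA_REG;
    uint8_t buf[2];
    if (write(fd, &reg, 1) != 1 || read(fd, buf, 2) != 2) {
        perror("i2c transfer");
        return 1;
    }

    uint16_t range_raw = (uint16_t)(buf[0] << 8) | buf[1];  /* assumed MSB first */
    printf("raw range word: %u\n", range_raw);

    close(fd);
    return 0;
}
```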

On factory floors, they can be used as proximity sensors to guide robots on the move, or to protect humans by slowing or shutting down robots or machinery that get too close. Dispersed throughout the home or office, they can detect personal identifying attributes and respond to hand motion to turn on lights or audio systems, without resorting to voice commands.

Fierce Competition and Fierce Fusion

From Chirp’s point of view, it’s been researching and refining its entry into this competitive space for many years, working out of its home at the University of California, Davis. The technology is based on the use of microelectromechanical systems (MEMS) to realize a piezoelectric micromachined ultrasonic transducer (PMUT) that converts pressure from the sound wave to electrical energy. The reverse process generates the initial sound wave: an electrical voltage deforms the PMUT membrane to produce a controlled ultrasonic sound wave.

Chirp knows it faces stiff competition from the likes of ams, Bosch, InvenSense and others that are using MEMS for sensing purposes. It also faces incumbent technologies such as light, touch, pressure, capacitive touchscreens and voice, which are already used to interact with machines and IoT devices.

From an IoT solution provider’s point of view, it’s good to have a new option to explore, both for its own inherent attributes and for its ability to fuse with and augment the other sensing options to make human-IoT device interaction intuitive and frictionless.

About the Author

Patrick Mannion
Patrick Mannion is an independent writer and content consultant who has been working in, studying, and writing about engineering and technology for over 25 years.
