My son is pretty excited about the books he’s studying this year: “Hamlet,” “Slaughterhouse-Five” and “1984.” I had to confess to not having read George Orwell’s “1984,” but I look forward to reading it with him, and comparing it to what’s happening now, partly as a result of the rise of embedded vision combined with digital signage.
As my colleague pointed out in her piece on signage for schools, the digital signage market is expanding rapidly, possibly reaching $22 billion by 2020. The attraction of digital signage is its flexibility: the same screens can show retail advertising, emergency messaging or general notifications for schools and other public areas.
However, just putting content on a screen, while useful, is only the tip of the iceberg. A revolution is underway in digital signage: the embedding of vision capability into the signs themselves. The attraction here is a display’s ability to detect when a customer or passer-by is within range. It can then scan the person’s face, see where their eyes are fixated, provide relevant messages, detect facial reactions and adjust messaging as needed to elicit a positive response.
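The sense-and-respond loop described above can be sketched in a few lines of Python. This is a toy illustration, not a real signage API: `detect_viewer`, `estimate_reaction` and `AD_VARIANTS` are hypothetical stand-ins for an actual presence detector, expression-analysis model and content library.

```python
import random

# Hypothetical placeholders for an embedded-vision signage pipeline.
AD_VARIANTS = ["variant_a", "variant_b", "variant_c"]

def detect_viewer(frame):
    """Stand-in for a presence detector.

    Here a 'frame' is just a number; a real detector would run a
    person-detection model on camera pixels."""
    return frame > 0.5  # pretend: viewer present if the signal is strong

def estimate_reaction(variant, rng):
    """Stand-in for facial-expression analysis: a score in [0, 1]."""
    return rng.random()

def choose_message(frames, rng, threshold=0.6):
    """Cycle through ad variants until one elicits a positive reaction."""
    shown = []
    for frame in frames:
        if not detect_viewer(frame):
            continue  # nobody in range; keep the default loop playing
        variant = AD_VARIANTS[len(shown) % len(AD_VARIANTS)]
        shown.append(variant)
        if estimate_reaction(variant, rng) >= threshold:
            break  # positive response detected; stick with this message
    return shown

if __name__ == "__main__":
    rng = random.Random(42)
    print(choose_message([0.1, 0.9, 0.8, 0.7, 0.95], rng))
```

The point of the sketch is the control structure: the display does nothing until a viewer is detected, then iterates on content until the measured reaction crosses a threshold.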
This may sound far-fetched, but it is already possible and being implemented to various degrees. Groups such as the Embedded Vision Alliance are hosting annual events dedicated to discussing the latest techniques and technologies for both the physical embedding of vision sensors and the algorithms and processing hardware required to make sense of the data and apply it.
Image Sensors Evolving Rapidly
In one paper given during an Embedded Vision Alliance event in May 2017, Paul Gallagher, Senior Director of Technology and Product Planning at LG Electronics, pointed out that we are way past using cameras for “pretty pictures.” Smartphones were a boon to manufacturers of low-cost CMOS-based image sensors, but that market is now saturated. While the handset sensor market is growing at less than 6% per year, prices are falling faster, dropping 9% to 12% per year. As Gallagher points out, the use of multi-cams, while beneficial, won’t offset the price reductions (Figure 1).
Figure 1. The applications driving smart cameras for the IoT include wearables, robots and home surveillance. Not shown is digital signage for retail. (Image source: LG Electronics, Courtesy of Embedded Vision Alliance)
The emphasis now lies on embedded vision, with a focus on sensing, not sensors. “The person who owns the devices isn’t necessarily the person who will be seeing the image,” Gallagher noted. These new smart camera markets include drones, surveillance, automotive, augmented reality, robots, toys and gaming. “The sensors will be focused on feeding data to a control system,” said Gallagher.
Feeding data to a control system is nothing new in industrial and manufacturing settings, but it is new to digital signage. One company that is well aware of the potential of digital signage to affect outcomes when combined with the IoT is American Megatrends Inc. (AMI). Long known for its PC BIOS technology, AMI has developed a back-end server that combines a content management system for in-store displays with IoT sensing capability that can detect customers within a certain range of a display and send advertisements or other messages to them directly. AMI’s IoT integration is still in its early stages, but it’s emerging.
The next step is detecting user status, gestures, gaze, physiology, gender, emotional state and, of course, identity. Correlating identity to preferences for retail has its security and privacy issues, but the general user has already shown a tendency to overlook privacy and identity concerns for the sake of a “good deal.”
Stars Aligned for Machine Vision for IoT
As Gallagher said in his presentation, the stars are aligned for embedded image sensors. Intel® designed its own RealSense imagers, but now boutique sensor design houses are fielding calls from other processor manufacturers looking to have image sensors overlaid on their ICs for a single-package solution. The IC will have to filter image information, as not all image data is needed for machine vision, and it’s advisable not to clog up networks with high-definition video, much of which is redundant.
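One simple form of that on-sensor filtering is change detection: transmit a frame only when it differs meaningfully from the last one sent. The sketch below is a minimal, assumption-laden illustration (frames are flat lists of pixel intensities and the threshold is arbitrary), not how any particular imager actually implements it.

```python
def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def filter_stream(frames, threshold=10.0):
    """Yield only frames that changed enough since the last transmitted one.

    Frames here are flat lists of pixel intensities; a real imager would
    work on full 2-D sensor arrays. The threshold is an illustrative
    assumption, not a standard value."""
    last_sent = None
    for frame in frames:
        if last_sent is None or mean_abs_diff(frame, last_sent) > threshold:
            last_sent = frame
            yield frame

# Example: three near-identical frames, then a scene change.
stream = [
    [100, 100, 100, 100],
    [101, 99, 100, 102],   # tiny sensor noise: suppressed
    [100, 101, 100, 100],  # suppressed
    [200, 30, 180, 25],    # real change: transmitted
]
sent = list(filter_stream(stream))
print(len(sent))  # prints 2: only 2 of 4 frames cross the network
```

Even this crude gate discards most static-scene traffic, which is exactly the kind of redundancy Gallagher warns against shipping as high-definition video.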
For IoT solution providers, getting involved with the Embedded Vision Alliance is the best way to get moving on what is fast becoming the next revolution in machine data. Ping Jeff Bier over there (he’s a friend of mine) at firstname.lastname@example.org. Mention my name; it may help, or maybe not. It’s been a while since we spoke.
Either way, embedded vision is the physical instantiation of Orwell’s vision. Big Brother really will be watching. The question now becomes, what are the social and political ramifications? We may need some new rules for this game.