Only a few weeks ago we discussed the importance of fog computing for the IoT, and just this week Intel formally announced its Intel® Atom Processor E3900 Series, its first device specifically targeted at bringing fog computing to the sensor.
Fog computing is about intelligently allocating data analysis across the network, from sensor to data center, so that the right analysis is done in the right place at the right time. This reduces latency, lowers power consumption, and cuts the total amount of data that must travel to the data center, easing the load on key parts of the network. And the less data transported, the smaller the exposure to hackers and the less time a sensor or other upstream node spends in active mode.
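One way to picture this allocation is an edge node that pre-filters raw sensor readings and forwards only a summary plus the outliers upstream. The sketch below is a hypothetical illustration of that pattern (the function and field names are invented for this example; a real fog deployment would sit on an actual messaging stack):

```python
# Hypothetical sketch of edge-side pre-filtering in a fog architecture:
# the node summarizes raw readings locally and forwards only outliers.
from statistics import mean, pstdev

def edge_filter(readings, z_threshold=2.0):
    """Return (summary, anomalies) so only a fraction of the raw data
    needs to travel upstream to the data center."""
    mu = mean(readings)
    sigma = pstdev(readings)
    anomalies = [r for r in readings
                 if sigma > 0 and abs(r - mu) / sigma > z_threshold]
    summary = {"count": len(readings), "mean": mu, "stdev": sigma}
    return summary, anomalies

# 100 routine temperature readings plus two spikes the cloud should see.
readings = [20.0 + 0.1 * (i % 5) for i in range(100)] + [45.0, -3.0]
summary, anomalies = edge_filter(readings)
print(len(readings), "readings in,", len(anomalies), "sent upstream")
```

Here 102 readings enter the edge node but only the two anomalous values (plus a fixed-size summary) go upstream, which is exactly the latency, power, and bandwidth win fog computing is after.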
This reduction in network load was one of the key points Intel made when it announced the E3900, citing Cisco's projection of 50 billion connected devices by 2020, which together will generate 44 zettabytes (44 trillion gigabytes) of data annually.
Bringing Power to the Fog
Finding an efficient approach for handling the issues raised by the expanding IoT was the impetus behind the formation of the OpenFog Consortium by Cisco, along with Intel, ARM, Dell, Microsoft and Princeton University.
Intel is helping define the underlying architecture of fog computing, but with the E3900 it is also contributing directly to providing efficient processing horsepower right at the very edge, in – or next to – the sensors themselves.
The new processor, announced at IoT Solutions World Congress, is 1.7 times more powerful than the previous generation (as measured by SPECint*_rate_base2006, 1-copy). It supports up to 8 GB of LPDDR4 memory, is built on Intel's 14 nm process, runs at up to 2.5 GHz, and comes in a low-power FCBGA package for application flexibility.
Of course, "low power" is a relative term, and depends to a large degree on the application's requirements. The E3900 series consumes 6.5 to 12 watts and integrates two to four cores.
However, what's not relative is the addition of a very important new feature: Intel® Time Coordinated Computing Technology (Intel® TCC Technology).
Intel® TCC Technology Pushes Boundaries
One of the problems with the IoT is the asynchronous, random and downright chaotic fashion in which data is generated and transported. This leads to inefficiencies and unnecessary latency as bottlenecks clear or as relays and gateways power up.
Intel® TCC Technology coordinates and synchronizes peripherals and networks of connected devices to give more determinism and predictability. For example, in robotics manufacturing, Intel® TCC Technology can synchronize the clocks of devices across networks to within 1 µs. For now, it is available only for Yocto Project-based Linux.
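Intel hasn't published TCC's internals, but network clock coordination of this kind generally rests on IEEE 1588 (PTP)-style two-way timestamp exchange. A minimal sketch of that offset-and-delay arithmetic (an illustration of the general technique, not TCC's actual algorithm):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer arithmetic (IEEE 1588 style).
    t1: request sent (master clock)   t2: request received (slave clock)
    t3: reply sent (slave clock)      t4: reply received (master clock)
    Assumes a symmetric network path in both directions."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # slave clock minus master clock
    delay = ((t4 - t1) - (t3 - t2)) / 2    # one-way path delay
    return offset, delay

# Slave clock runs 100 us ahead; one-way path delay is 5 us (all in us).
offset, delay = ptp_offset_delay(t1=0, t2=105, t3=110, t4=15)
print(offset, delay)  # the slave corrects itself by subtracting the offset
```

The tighter and more deterministic this exchange, the closer devices can hold to a shared timebase, which is what makes microsecond-level coordination across a factory network possible.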
Other features of the new chip include support for 4K video at 60 fps and three graphics pipelines driving three displays. With clear applications in industrial automation, the E3900 ships with a developer kit supporting computer-vision kernels and libraries based on the OpenCL and OpenVX standards. Performance-wise, the 12 to 18 Gen9 graphics execution units reach between 106 and 187 GFLOPS.
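Those GFLOPS figures are consistent with Gen9's architecture: each execution unit can retire 16 single-precision FLOPs per cycle (two SIMD-4 FPUs issuing fused multiply-adds), so peak throughput is EUs × 16 × clock. The burst clocks below (roughly 550 and 650 MHz) are assumptions chosen to match the quoted range, not published E3900 specs:

```python
# Peak single-precision throughput for Intel Gen9 graphics:
# each EU does 16 FLOP/cycle (2 FPUs x SIMD-4 x 2 for FMA).
def gen9_peak_gflops(eus, ghz):
    return eus * 16 * ghz

low = gen9_peak_gflops(12, 0.55)   # assumed low-end burst clock
high = gen9_peak_gflops(18, 0.65)  # assumed high-end burst clock
print(round(low), round(high))     # ~106 and ~187, matching the quoted range
```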
The E3900 is sampling now and has already gained broad support from the likes of Delphi, Neusoft, Hikvision, FAW and others.
“The IoT renaissance is now, and I am eager to see the new applications yet to come,” said Ken Caviasca, vice president in the Internet of Things Group and general manager of platform engineering and development at Intel Corporation, upon making the announcement.
For developers of IoT solutions, this is all good news. Being able to put processing horsepower closer to the data, where analytics can run faster and more efficiently, is a great calling card for your next client visit or design-team meeting.