Intel’s CEO Brian Krzanich recently elaborated on the core tenets of the company’s strategy going forward. Along with IoT, he discussed new memory and architectures, Moore’s Law, greater connectivity, data centers, automotive, and silicon photonics, while FPGAs kept coming up as a key enabling technology, and for good reason.
FPGAs, or field-programmable gate arrays, are exactly that: massive arrays of logic elements (LEs) that can execute a series of processing functions very efficiently and in parallel, then be reconfigured via software to do something else – after they have been deployed in the field (Figure 1). Without an FPGA, that new function would have to be executed using newly installed hardware.
The Stratix 10 FPGA was announced by Altera (now part of Intel) last year and represents the cutting edge of FPGA design, with its new HyperFlex architecture, Intel’s 14-nm Tri-Gate process, and a 2x core logic frequency improvement over competing devices.
Historically, FPGAs were considered clunky, power hungry, costly and hard to program. They were perfect – but really only for prototyping and early-stage implementations, before all the “kinks” of the design were worked out. Over time, however, much has changed with respect to FPGAs themselves, as well as market needs, particularly with respect to the data centers that form the IoT cloud, and more recently, automobiles.
Fast-forwarding through the ’90s and 2000s shows a shift from discussing processor gigahertz to discussing how many cores a processor had and how efficiently those cores were used by the operating system and the applications they were running. This fundamental shift toward multicore processing came about as a result of the need for lower power consumption: faster processors burned more power, got hotter, and required more physical cooling and better system – and building – thermal management.
Instead of going faster, it became clear that more could be done, more efficiently, if the data processing load was shared across multiple cores. This sharing across multiple similar processing cores is known as a homogeneous architecture, and it works well, but the thinking has now moved toward achieving as much functionality per watt as possible. This means optimizing the processing of data and the execution of code even further, by breaking them down by the type of processing architecture best suited to performing specific functions most efficiently.
It turns out that microprocessors (with their classic von Neumann architecture) are really good at directing traffic and managing a system’s I/O, but graphics processing units (GPUs), digital signal processors and FPGAs can execute certain types of data processing functions much more efficiently using dedicated hardware (gates) structured for that purpose. Now, the goal is to combine these multiple architectures in what’s called a “heterogeneous” approach.
This is where FPGAs come in: they are flexible, configurable, and highly parallel, and they process data in hardware rather than software, so under the right circumstances they are faster and more efficient. FPGAs have also come a long way in terms of ease of use. Where once arcane hardware description languages (HDLs) were required to program them, the adoption of OpenCL now allows a heterogeneous architecture with a microprocessor, GPU, and FPGA to be programmed and compiled within a single environment, versus needing C or C++ for the microprocessor and then an HDL for the FPGA.
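To make the contrast concrete, here is a minimal sketch of what such device code can look like. This is a hypothetical OpenCL C kernel (the `vec_add` name and its parameters are illustrative, not from the article); the same source can, in principle, be compiled by a vendor toolchain for a CPU, GPU, or FPGA target, whereas an HDL version would have to describe the hardware structure itself:

```c
// Hypothetical OpenCL C kernel: element-wise vector addition.
// Each work-item handles one array element, so the computation
// is expressed as data-parallel from the start.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *result)
{
    int i = get_global_id(0);  // index of this work-item
    result[i] = a[i] + b[i];
}
```

On an FPGA target, an offline compiler turns a kernel like this into a dedicated hardware pipeline rather than into instructions executed on fixed cores.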
In addition, both the cost and the power consumption of FPGAs have fallen dramatically, to the point that many portable consumer devices now include smaller FPGAs to execute processing-intensive interface functions, such as PCIe interfaces and wireless baseband processing.
Now the massive power consumption of the data centers upon which we rely to process and analyze the rising flood of petabytes of unstructured IoT data has created an enormous heat and power management problem.
This has put the onus upon server designers to rethink architectures and employ FPGAs to execute repeatable functions, such as I/O translations, really efficiently.
FPGAs Keep Automotive and IoT in Fast Lane
This takes advantage of the FPGA’s performance capabilities. But what about its reconfigurability? For sure, upgrades to data centers and their servers will be required and FPGAs will enable that, but Krzanich’s mention of automotive was no accident.
Automotive designs and changes used to take place slowly, over six to 10 years, but now they are happening almost in real time, especially with respect to in-cabin entertainment and to system sensing, data gathering, and exchange. Still, automotive designers need to be sure that what they design into a vehicle works, and stays working, regardless of changes. It’s harder to fix or replace hardware than software, so a system based on a low-power FPGA can now be updated with new functionality years after it has been deployed in the field, without the need for a costly hardware change. The update can be done at the service station or even over the air with connected vehicles. This means the latest features can be added to a car long after it has left the manufacturing line.
Krzanich said it well from Intel’s point of view, “We see tremendous opportunity in the growth of this virtuous cycle – the cloud and data center, the Internet of Things, memory and FPGAs all bound together by connectivity and enhanced by the economics of Moore’s Law – which will provide a strong and dynamic future for Intel.”
For providers of current state-of-the-art IoT systems being deployed across various industries and segments, it’s an exciting time, as those systems will soon be able to take advantage of the efficiency, raw performance, and flexibility this new era of low-cost heterogeneous processing will provide.