In days of old, computing power was king in the land of embedded systems. Processors embedded in products such as copying machines, airborne radar, and MRI machines had to crank through massive amounts of data in real time. The power consumption of these compute engines was a concern only insofar as enough heat had to be pulled out of the chip to keep it from melting down.
To be sure, embedded processors also did yeoman’s service in less compute-intensive products such as power tools and radios. But most of those settings were close to ac power outlets, so energy efficiency was more a nice-to-have than a priority.
This scenario has changed in recent years with the increasingly wide use of battery-powered appliances and the proliferation of Energy Star-type efficiency mandates. In addition, the evolution of what once would have been called portable platforms into ‘mobile’ applications has put more emphasis on power efficiency. Even the venerable PC is on its way to becoming an embedded system as computing platforms evolve into tablets, phones, and exotic systems such as a fighter-jet-style heads-up display superimposed on an ordinary pair of glasses. All in all, embedded designs that target mass-market applications now list power consumption among their primary constraints.
The reason, of course, is that many of these embedded systems operate from batteries. The ultimate aim is to develop systems that consume so little power that they can operate mainly from the microwatt levels that characterize energy that can be harvested from sources such as vibration, waste heat, and a few square centimeters of incident light. Battery technology is advancing, but battery size is still an issue in an era when many embedded designs put a premium on svelte looks and long battery life.
The earliest approach to keeping a lid on power consumption was simply to shut off circuits when they weren’t needed, putting processors into various stages of a sleep mode. Like a hypnotic trance, sleep modes range from deep, in which the processor has little awareness of its surroundings, to light, for cases where processor circuits must spring back quickly to a fully functioning state.
An early example of low-power processing is the Atom processor from Intel Corp. Developed a few years ago for automotive infotainment applications, it has since been upgraded for greater performance and lower power consumption. Intel now offers a version of the device, the Atom Z510, for solar-powered embedded computing. It consumes just 2 W in active mode and as little as 100 mW in a deep-sleep mode.
Intel has continued to innovate in low-power embedded processors. It recently demonstrated an experimental IA (Intel Architecture) microprocessor core capable of unprecedented low-power operation. Code-named Claremont, it can run at a near-threshold voltage (NTV) low enough to draw power from a small solar cell. Intel Chief Technology Officer Justin Rattner showed the chip last September at the Intel Developer Forum as an example of the company’s direction in energy-efficient NTV designs.
Claremont, designed with high-performance computing in mind, is a heat-sink-free processor core that can be placed in an NTV mode dissipating less than 10 mW in its minimum energy state. Intel says it is five times more energy-efficient than comparable processors today. It also provides a wide dynamic operating range and can run at higher frequencies when performance is needed: it can idle at 3 MHz from 280 mV, but ramp up to 915 MHz at 1.2 V. “It might not be a commercial product, but the research could be integrated into future processors and other circuitry,” says Rattner.
On another front, ARM Ltd.’s Cortex-M series of microprocessor cores is a shining example of high performance at low energy from a small chip. Just about every major MCU manufacturer has introduced a low-end microcontroller based on one of the Cortex-M cores; the list includes nearly 20 MCU makers.
One of the more notable cores is the ARM Cortex-M0+, the smallest of all the ARM processors. It draws just 9 µA/MHz during operation and consumes as little as 11.2 µW/MHz in a silicon area of fewer than 12,000 gates. It is a full 32-bit processor with the same 56-instruction Thumb ISA as other Cortex-M processors, but it has only a two-stage pipeline, compared with the three-stage pipeline of its predecessor, thus reducing transistor count and energy consumption.
The Cortex-M0+ will show up in domestic appliances, portable medical systems, smart meters, lighting, power, and motor-control systems. It was recently licensed to Freescale Semiconductor for use in Freescale’s Kinetis L series of MCUs, as well as in MCUs made by NXP Semiconductors.
Both Texas Instruments (TI) and Atmel employ the ARM Cortex-M3 in their MCU platforms. TI also fields its own 16-bit MSP430 platform, which finds wide use in consumer, health-monitoring and fitness, home-appliance, medical, and industrial applications where extremely low power consumption is a must.
TI recently introduced an MSP430 platform that uses ferroelectric RAM (FRAM) technology to provide a real-time clock that draws a measly 360 nA and, TI claims, delivers more than double the battery life of any other microcontroller available today. It consumes less than 100 µA/MHz in active mode.
TI thinks that low-power energy harvesting applications such as wireless sensors will be good candidates for these processors. “Low-power MCUs with FRAM eliminate power consumption and write-endurance barriers to energy harvesting, allowing developers to make the world ‘smarter’ with more cost-efficient and simpler designs,” says Jacob Borgeson, TI’s MSP430 Group marketing manager.
Other processors targeting low-power embedded applications include Atmel’s AVR MCUs, which employ the company’s picoPower technology. Atmel claims the MCUs have the lowest power consumption in the industry, drawing 500 nA at 1.8 V with the real-time clock running and 9 nA in power-down sleep mode.
Microchip Technology has long been supplying controllers with an emphasis on low power consumption via its eXtreme low-power MCUs that utilize the company’s nanoWatt XLP technology. Products using this technology are said to offer the industry’s lowest current consumption for run and sleep modes. This is important given the industry consensus that battery-operated applications spend more than 90% of their time in sleep modes.
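That 90% figure makes the arithmetic behind battery life simple to sketch. As a rough illustration (all currents, the duty cycle, and the battery capacity below are assumed round numbers, not any vendor’s specifications), the average supply current of a duty-cycled MCU is a time-weighted blend of its active and sleep currents:

```python
# Back-of-the-envelope battery-life estimate for a duty-cycled MCU.
# All figures are illustrative assumptions, not vendor specifications.

ACTIVE_CURRENT_A = 800e-6   # assumed: 100 uA/MHz at an 8-MHz clock
SLEEP_CURRENT_A  = 40e-9    # deep-sleep draw in the tens of nanoamps
DUTY_CYCLE       = 0.01     # active 1% of the time, asleep the rest
BATTERY_CAP_AH   = 0.220    # CR2032-class coin cell, roughly 220 mAh

def average_current(active_a, sleep_a, duty):
    """Time-weighted average supply current over one duty cycle."""
    return duty * active_a + (1.0 - duty) * sleep_a

avg = average_current(ACTIVE_CURRENT_A, SLEEP_CURRENT_A, DUTY_CYCLE)
hours = BATTERY_CAP_AH / avg
print(f"average current: {avg * 1e6:.2f} uA")
print(f"estimated life : {hours / 8766:.1f} years")
```

With these assumed numbers the active 1% dominates the average: cutting sleep current another tenfold would stretch battery life by well under 1%, which is why duty cycle and active current usually get the first optimization pass.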
Members of the eXtreme low-power nanoWatt XLP technology group include the PIC24FJ128GA310 16-bit MCU family, with multiple power-management options for extreme power reduction. In deep-sleep mode, these devices draw just 40 nA at 3.3 V.
How low can we go?
How much lower can IC operating voltages scale? Line geometries keep shrinking, and Moore’s Law keeps squeezing more functions into each chunk of silicon. But basic physics is making it tougher to scale down chip operating voltages.
Remember the days when the operating voltage of most ICs was 5 V? It dropped to 3.3 V, then to 3.0 V, then 2.5 V, and now sits near 1 V. During the golden age of CMOS technology in the early 1980s, it was relatively easy to scale down IC line geometries because power rose only linearly with faster circuit speed, and heat sinking for chips running at higher speeds was manageable. So for decades, power reduction got lip service and not much thought.
However, it is a different world when a transistor operates at near-threshold voltage (NTV), around 400 to 500 mV, close to the threshold voltage at which transistors turn on and begin to conduct current. The only way to realize chips that operate at these levels, at least today, is to lower the operating frequency to a few megahertz. Even sub-threshold operation, counterintuitive as it may seem, has been demonstrated to be possible.
One problem with CMOS ICs operated at 400 to 500 mV is that gains in power efficiency begin to be offset by the slower clock rates the transistors are forced to run at: they stay on, and thus consume energy, for longer periods. Moreover, scaling to sub-threshold levels is hampered by minute variations in IC wafer processing that are hard to overcome.
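That trade-off can be made concrete with the classic CMOS dynamic-power relation, P = C·V²·f. Using the Claremont operating points quoted earlier (280 mV at 3 MHz, 1.2 V at 915 MHz) and an arbitrary assumed switched capacitance, a quick sketch shows why NTV slashes energy per clock cycle, leakage ignored:

```python
# Why near-threshold operation saves energy: dynamic (switching) power
# is P = C * V^2 * f, so dynamic energy per clock cycle is C * V^2.
# The two operating points are the Claremont figures quoted in the
# text; the switched capacitance C is an arbitrary assumption, so
# only the ratios below are meaningful.  Leakage is ignored here.

C = 1e-9  # assumed effective switched capacitance, farads

def dynamic_power(c, v, f):
    """Classic CMOS dynamic power: P = C * V^2 * f."""
    return c * v**2 * f

def energy_per_cycle(c, v):
    """Dynamic energy to clock the logic once: E = C * V^2."""
    return c * v**2

p_ntv  = dynamic_power(C, 0.28, 3e6)     # 280 mV at 3 MHz
p_full = dynamic_power(C, 1.20, 915e6)   # 1.2 V at 915 MHz
e_ratio = energy_per_cycle(C, 1.20) / energy_per_cycle(C, 0.28)

print(f"power ratio (full speed / NTV): {p_full / p_ntv:.0f}x")
print(f"energy-per-cycle ratio:         {e_ratio:.1f}x")
```

The ideal 18x-per-cycle saving this sketch predicts shrinks toward the 5x efficiency gain Intel reports for Claremont precisely because, at the slow NTV clock, leakage current flows for the whole of each much-longer cycle.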
Nevertheless, chip manufacturers including Intel, IBM, Samsung, and the Taiwan Semiconductor Manufacturing Co. (TSMC) foundry have been experimenting with ways to switch on transistors at sub-threshold levels. Intel has demonstrated a FinFET structure with a tri-gate transistor architecture that can operate at 0.7 V and below. The ‘fin’ moniker stems from the thin vertical finger of silicon, termed a fin, on which the device is formed.
Many companies have investigated FinFET architectures over the years. Intel’s version wraps the metal gate around three sides of the transistor’s channel in 3D. The transistors exhibit a steep sub-threshold turn-on/turn-off slope, allowing them to shut off quickly with tiny leakage currents in the off state and to turn on at a lower threshold voltage.
One firm working on an alternative to the FinFET is SuVolta Inc., which has designed a low-power CMOS platform called PowerShrink. Through a structure called a deeply depleted channel (DDC) transistor, SuVolta says it can reduce the variation transistors experience in their threshold voltages. That variation reduces performance, boosts power consumption, and limits how far the chip’s supply voltage can practically be scaled down.
By reducing the variation in threshold voltage, SuVolta’s technology helps limit the amount of current leaking through the transistor because it eliminates the worst-case tail of threshold voltage distribution that causes most leakage. SuVolta says it’s seen threshold voltages of 0.6 V and lower with the technique.
Others, such as GlobalFoundries Inc. (GloFo), the former manufacturing arm of AMD, are still working on 2D planar transistor structures at the 22- and 20-nm nodes. GloFo and other members of the Joint Development Alliance (JDA), a group of companies collaborating on chip R&D, say they’ll move to the FinFET architecture when planar technology can no longer provide the horsepower necessary at smaller process nodes.
That said, a GloFo spokesperson at a recent technology forum said the firm has worked on FinFET development for the past ten years, focusing its efforts on a 14-nm process optimized for mobile system-on-a-chip ICs (SoCs).
It isn’t hardware technology alone that leads to embedded systems with low energy consumption. Designers are learning that a systems approach can matter more than ICs with rock-bottom power requirements.
The word used to describe this systems approach is “co-design.” In a nutshell, it means predicting power consumption during the first part of the design effort, before much hardware or software has been defined. Analytical tools that aid this effort are becoming more widely used, and the metrics coming out of such up-front analysis give designers targets to shoot for. Co-design also helps lower power consumption by giving engineers a chance to optimize the packaging, power sources, and ancillary circuitry during the initial phase of a project, when decisions made in the interest of power consumption are likely to have less impact on performance and cost.
TI claims it pioneered the co-design approach. It attacks power consumption issues by optimizing individual subsystems through, for example, process technologies that balance off-mode leakage effects with active current performance. “The first step is in setting the goals of a product’s power and performance parameters. Once they’re determined, the process can be designed to provide the required performance without exceeding the device’s power budget,” explains TI 28-nm platform manager Randy Hollingsworth.
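A first pass at that kind of up-front budgeting can be nothing more than a table of per-subsystem average-power allocations checked against the system target, long before any RTL exists. A minimal sketch, with every subsystem name and number invented purely for illustration:

```python
# Sketch of a pre-RTL power budget of the kind co-design encourages:
# each subsystem gets an assumed average-power allocation, and the
# script reports whether the sum fits the system-level target.
# All subsystem names and figures are invented for illustration.

BUDGET_MW = 5.0  # assumed whole-system average-power target, mW

subsystems = {                  # assumed average power, milliwatts
    "MCU core":              1.2,
    "radio":                 2.0,
    "sensor front end":      0.8,
    "power-conversion loss": 0.6,
}

total = sum(subsystems.values())
margin = BUDGET_MW - total

# List the biggest consumers first, so optimization effort goes
# where it buys the most.
for name, mw in sorted(subsystems.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {mw:4.1f} mW ({100 * mw / BUDGET_MW:4.1f}% of budget)")
print(f"total {total:.1f} mW, margin {margin:+.1f} mW")
```

Crude as it is, a table like this forces the packaging, power-source, and ancillary-circuitry questions onto the agenda while they are still cheap to answer.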
Power estimation at TI starts when a chip is first conceived, beginning at the register-transfer level (RTL). Specialized co-design software tools are available for making power estimates at the RTL stage. These go well beyond ordinary EDA tools, which often focus on optimizing chip real estate for a given fabrication process and tend to produce power estimates that are inaccurate until relatively late in the design of the chip.
Sensor networks loom
One power-sensitive application looming large on the horizon is wireless sensing. The uses envisioned for this technology typically put sensor nodes in locations far from ac lines, where battery power and energy harvesting are the only options.
If power consumption becomes less of an issue, wireless sensing is likely to show up in some unexpected places. Take, for example, the Nest Learning Thermostat (Nest Inc.), which might be called an intelligent wall thermostat for people who don’t like to program intelligent wall thermostats. Its 32-bit microprocessor includes a TCP/IP stack and Wi-Fi capability, and it is powered by a rechargeable lithium-ion battery.
Nest’s thermostat is designed to automatically control a home’s heating and cooling by learning the occupants’ heating and cooling comfort habits. It needs only three pieces of information to get started: the home’s zip code, whether it is in heating or cooling mode, and temperature settings when occupants are away.
Nest says programmable thermostats are widely installed, but only about 11% of them are properly programmed to save energy. The new thermostat targets the 20 to 30% savings in home energy costs available by actually dialing back heating and cooling when they aren’t truly needed.