Turning up the heat on chip temperatures

Intel Australia Pty Ltd
Wednesday, 04 October, 2006

The demand for ever more powerful computers, which by necessity draw ever more power and generate ever more heat, poses a major challenge for chip manufacturers such as Intel. Here the company outlines some of the problems and points the way to solutions as the shadow of Moore's Law hangs over the future.

More than a quarter of a century ago, Intel co-founder Gordon Moore predicted that transistor density on integrated circuits doubles every two years.

As computing and communications converge, the demand increases for devices with more functionality, faster operation and lower costs. Keeping pace with Moore's Law is essential to help computing and communications industries deliver chips that meet these needs.

However, as more transistors are integrated onto a chip to enable more functions, and as higher frequencies are used to increase performance, total power consumption rises and generates more heat.

As more transistors are packed into a smaller area, power density also increases. Consumers' desire for mobile, multifunctional, small form factor devices adds further challenges, such as efficient battery operation.
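The trend described above can be illustrated with the classic textbook CMOS dynamic power relation, P = αCV²f. The figures below are arbitrary assumptions for illustration, not Intel data:

```python
# Illustrative sketch (textbook model, arbitrary numbers): CMOS dynamic
# power P = alpha * C * V^2 * f shows why more switching transistors at
# higher frequency mean more power, and more heat in a smaller area.

def dynamic_power_watts(alpha, c_farads, v_volts, f_hz):
    """Switching power: activity factor, capacitance, voltage, frequency."""
    return alpha * c_farads * v_volts**2 * f_hz

def power_density_w_per_cm2(power_w, die_area_cm2):
    """Heat that must be removed per unit of die area."""
    return power_w / die_area_cm2

# Doubling frequency at the same voltage doubles dynamic power.
base = dynamic_power_watts(0.2, 50e-9, 1.2, 2e9)   # ~28.8 W
fast = dynamic_power_watts(0.2, 50e-9, 1.2, 4e9)   # ~57.6 W
```

The quadratic voltage term is why lowering supply voltage, as later sections describe, is such an effective power lever.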

Efficient power and thermal management are vital as systems become smaller or more capable with every generation of Moore's Law. Power must be delivered and used efficiently by the chips, wiring and display, while effectively dissipating heat from the system - economically, of course.

Addressing the power challenge allows a continuation of the trend towards smaller, faster chips and devices. As a result, futuristic applications that require more powerful processors may be realised.

Increasing capability and density of computing and communications chips may be sustained. Comprehensive power and thermal management techniques are a fundamental part of continuing to receive the benefits of Moore's Law.

The company's researchers, scientists and engineers take a holistic approach to power and thermal management. Engineers examine every aspect of the design, manufacture and use of computing devices, looking for variables that could influence the power equation.

New process technologies, breakthrough transistor materials and structure, innovative circuit and microarchitectural designs, novel packaging materials and techniques, improvements to system components and software optimisation techniques that provide power efficiencies, are all being explored.

Back in the 1980s, Intel switched its process technology from NMOS to CMOS to better manage power requirements.

Now that power has become a key differentiator for devices of all sizes and types, the company is applying experience to address the full range of computerised systems, from mobile phones to servers.

Raw processing performance is only one of several key vectors that will define future innovation in micro-architectures and circuit design.

The next decade will see a number of architectural changes at every level, from transistor structure to integration of entire systems, which will continue to drive to a key goal: to maximise power efficiency at every phase of the design.

In August 2004, two years after 90 nm technology was first introduced, Intel showcased its next-generation 65 nm (65 billionths of a metre) technology.

This allows the printing of individual circuit lines smaller than a virus and the creation of transistor gates just 35 nm across.

These extremely small and fast transistors are the building blocks of high-speed microprocessors. However, as transistors shrink, leakage current can increase and managing that heat is crucial for reliable high-speed operation. This is becoming an increasingly important factor in chip design.

In the 90 nm process, strained silicon was introduced. 'Uni-axial' strained silicon reduced leakage current by five times or more without diminishing performance (ie, on-current).

Similar techniques result in greater savings in leakage current as the company scales to the 65 nm process, as Intel's second-generation strained silicon reduces leakage current by another four times or more.

Reduced leakage current means better power efficiency and less heat dissipation per transistor.

Copper interconnects were introduced in the 130 nm process to lower resistance. Carbon-doped-oxide (CDO) was first introduced in the 90 nm node to lower interconnect capacitance.

Lower resistance and capacitance both decrease power requirements and the company has developed a second-generation CDO for its 65 nm node with even lower capacitance.

The 65 nm process has an extra metal interconnect layer that improves density and performance, enhancing power and clock distribution as well as all the signals on the chip.

Power-efficient transistor materials and structures that might be used in future process technologies are also being studied.

Transistors act as silicon-based switches that process the ones and zeros of the digital world. A gate electrode turns the transistor on and off; beneath it lies the gate dielectric, an insulating layer.

The voltage applied to the gate electrode controls the flow of electric current across the transistor. In a 65 nm transistor the gate dielectric (currently made of silicon dioxide) is just 1.2 nm thick, a thickness of about five atomic layers.

Silicon dioxide has been used as a dielectric for almost three decades.

However, as silicon dioxide gets thinner, electric leakage current through the gate dielectric increases and leads to wasted current and unnecessary heat.

To keep electrons flowing in the proper location and greatly reduce this critical source of heat, Intel plans to replace the current material with a thicker so-called "high-k" material, significantly reducing current leakage.

The high-k material has 100 times less leakage than silicon dioxide. Since this new gate dielectric is not compatible with today's transistor gate electrode material, the company has developed a new metal gate technology, making the process suitable for high-volume manufacturing.

These discoveries are being integrated into an economical and high-volume manufacturing process to address the power and heat increases in smaller nanostructures.

A three-dimensional design has been developed that allows the manufacture of transistors that scale and perform well while addressing the leakage problem seen in smaller-dimension planar transistors.

Tri-gate fully-depleted substrate transistors have a raised plateau-like gate structure with two vertical walls and a horizontal wall of gate electrode.

This three-dimensional structure improves the drive current while the depleted substrate reduces the leakage current when the transistor is in the 'off' state.

Reducing the leakage current in the off state not only helps control heat at the circuit level but also translates to increased battery life in mobile devices.

Researchers are developing low-power circuit design techniques for microprocessors as feature sizes fall below 100 nm. As feature sizes decrease, transistors leak current even when they are switched off.

Controlling power leakage is an important design consideration in future generations of microprocessors.

A number of these techniques have been tested on a prototype arithmetic logical unit (ALU). By dynamically adjusting the voltage applied to the body of a transistor (bias), researchers can manipulate the transistor's threshold voltage - the voltage at which the transistor turns on.

Increasing the threshold voltage reduces the leakage but also reduces performance. Having localised control of the bias voltage enables the designer to make real-time tradeoffs between the circuit performance and power it consumes.

This capability can be used to reduce leakage during periods of inactivity or to increase performance during peak use.
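The tradeoff described above can be sketched with the standard subthreshold leakage model, in which leakage falls exponentially as the threshold voltage rises. The slope factor and voltages below are textbook assumptions, not measurements of Intel's circuits:

```python
import math

# Illustrative sketch (textbook model, not Intel's actual circuits):
# subthreshold leakage scales roughly as exp(-Vth / (n * kT/q)), so a
# small body-bias increase in threshold voltage cuts leakage sharply,
# at the cost of slower switching.

THERMAL_VOLTAGE = 0.026   # kT/q at room temperature, volts
N = 1.5                   # subthreshold slope factor (assumed)

def leakage_ratio(delta_vth_volts):
    """Factor by which leakage drops when Vth is raised by delta_vth."""
    return math.exp(delta_vth_volts / (N * THERMAL_VOLTAGE))

# Raising Vth by 100 mV cuts leakage by roughly an order of magnitude.
reduction = leakage_ratio(0.1)
```

This exponential sensitivity is why even modest bias adjustments during idle periods are worthwhile.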

The dynamic sleep transistor is another technique that adds a transistor in series with the power supply that can be turned off when a block of logic circuitry is in idle mode, thus reducing leakage.

This power-saving feature is expected to be introduced into 65 nm technology products.

For example, a large portion of 65 nm microprocessors is occupied by static random access memory (SRAM), which is used to cache data and instructions.

The sleep transistors can shut off large blocks of idle SRAM to eliminate wasted power.
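The sleep-transistor idea can be modelled in a few lines: a series switch gates a block's supply when the block is idle, so the block contributes leakage only while awake. This is a toy software model of the concept, not Intel's circuit:

```python
# Toy model of power gating with a sleep transistor: when the series
# switch is off, the gated logic block draws (approximately) no
# leakage power; when it is on, the block leaks normally.

class LogicBlock:
    def __init__(self, leakage_w):
        self.leakage_w = leakage_w   # leakage while powered (assumed)
        self.asleep = False

    def sleep(self):
        """Turn the sleep transistor off, gating the supply."""
        self.asleep = True

    def wake(self):
        """Turn the sleep transistor on, restoring the supply."""
        self.asleep = False

    def leakage(self):
        return 0.0 if self.asleep else self.leakage_w

# A large idle SRAM block stops wasting leakage power once gated.
cache = LogicBlock(leakage_w=1.5)
cache.sleep()
```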

Power efficient larger-sized caches could improve performance by increasing the data bandwidth and reducing latency.

Intel's microarchitecture researchers are exploring new architectures that are 'conscious' of power and thermal challenges and able to manage them dynamically while running applications.

As a result, the company is investigating multi-core-based microprocessors, clustered microarchitectures, and other power-optimised microarchitectures.

Work continues on dual-core microarchitectures and on clustered microarchitecture concepts that increase overall performance while managing power more effectively.

This approach focuses on CPU cores and clusters that perform optimised load balancing through a combination of software and hardware mechanisms that examine the use, priority and thermal characteristics of a workload.

As a result, a multi-core, multithreaded architecture can increase performance without a significant increase in either the baseline operating frequency or the net power consumption of the device. This technique also provides the flexibility for each core to run at a different frequency or voltage.

Core hopping mechanisms can be used to spread power dissipation over a greater area, and some cores could be dedicated to special-purpose operation to achieve a power-efficient performance increase.

Microprocessors include innovations and instruction set extensions that provide more performance for less power by squeezing more useful activity into each machine instruction. Efforts to eliminate redundancy at the microarchitecture level by identifying frequent instruction sequences, extensively optimising them and storing them for later reuse are continuing.

The NetBurst microarchitecture has an advanced form of instruction cache called the execution trace cache, which stores already decoded machine instructions, or micro-ops, for future reuse.

Hyper-threading, available in the Intel mobile and server processors, increases performance without significantly impacting the power envelope.

The Enhanced Intel SpeedStep Technology dynamically scales frequency and voltage according to how much processing power is needed, significantly extending battery life in mobile systems and reducing usage power in desktop and server systems.
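The scaling policy just described can be sketched as a table of operating points: pick the lowest frequency/voltage pair that still meets demand. The operating points below are hypothetical, not actual SpeedStep states:

```python
# Minimal sketch of demand-based frequency/voltage scaling in the
# spirit of SpeedStep. The (frequency, voltage) pairs are hypothetical
# examples, listed in ascending order of power.

OPERATING_POINTS = [  # (frequency GHz, voltage V)
    (0.8, 0.95),
    (1.2, 1.05),
    (1.6, 1.15),
    (2.0, 1.30),
]

def select_operating_point(demand_ghz):
    """Return the lowest-power point that satisfies the demand."""
    for freq, volt in OPERATING_POINTS:
        if freq >= demand_ghz:
            return freq, volt
    return OPERATING_POINTS[-1]   # saturate at the top point

# A light workload runs at 0.8 GHz / 0.95 V instead of full speed,
# saving power quadratically through the lower voltage.
point = select_operating_point(0.5)
```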

Demand-based switching is a server version of this technology and is being deployed in the Xeon and Itanium 2 processors.

The possibility of extending this technology to other system components is being explored to further reduce overall power consumption.

Dual-core and multi-core architectures

Today, Moore's Law continues to benefit function, power, performance and cost. Intel's industry leading process technology produces chips that integrate billions of transistors.

These transistors are used to provide consumers with increased functionality and flexibility through power-efficient platforms like Centrino Mobile Technology and power-efficient technologies that improve performance such as hyper-threading technology (HTT).

HTT, integrated into Xeon processor family server products and Pentium 4 desktop products, enables two independent threads that allow one physical processor to appear and behave as two virtual processors to the operating system.

HTT allows more applications to run smoothly together, such as burning a CD, streaming video and scanning for viruses simultaneously, all with uncompromised performance and power efficiency.

HTT was conceived as a seed vehicle for a future vision of multi-everywhere computing architectures.

Dual Core microprocessors continue this vision.

Improved process technologies like 65 nm are enabling larger numbers of transistors to be integrated in a given die size. Enabling processes and techniques that allow integration of additional transistors into existing power envelopes allows creative decisions on what to do with the additional transistors.

Key directions in which Intel could spend this increased transistor budget include new low-power architectures such as multi-core, where multiple microprocessor cores are integrated onto a single chip, and innovative new features such as virtualisation and security capabilities, which allow higher levels of trust and robustness in computers.

Considerable resources are put into packaging-related research and development to move heat away from the silicon surface itself.

One innovation being explored involves eliminating the bumps of solder that make the connections between the package and the chip. This small change would reduce the packaging thickness, thereby enabling the processor to run at lower voltage and allowing for thinner form-factor devices.

Targeting hotspots and efficiently removing heat from the die are goals being pursued.

Intel systematically studies the components of all types of systems, from mobile phones to servers, to identify where and how power can be reduced or managed more effectively to improve the overall power performance of the system.

For notebook users, the company developed Intel Mobile Voltage Positioning (IMVP) to optimise voltage regulation for the Mobile Pentium 4 processor. It also created dynamic voltage management to enable developers of components for microprocessors, as well as wireless and handheld products, to scale frequency and voltage dynamically, adjusting performance to application needs.

In addition to improving their own designs, the company advises power component suppliers on how to design more efficient voltage regulators and also strives to move the industry towards lower voltages on commodity components, such as flash and DRAM.

The company works with the manufacturers of LCDs to improve the efficiency of the electronics that drive the backlight. Optics experts also provide recommendations to the industry on how to design for greater power efficiency in passing light to the front of the display.

For wireless and handheld devices, the company is helping to drive to an industry standard for very low signal swing serial interface for displays, which potentially could reduce power loss across the interface by an order of magnitude.

To improve heat dissipation, Intel enables new heat sink technologies for its processors as well as focusing on other system components that must be cooled, such as graphics controllers and chipsets.

New chassis designs are enabled by building system models and running airflow analysis on them to improve airflow and cooling. In small form-factor platforms, thermal profiling identifies power hotspots in the system design early on when it is more cost effective to make changes to address thermal challenges.

Design ideas are provided to original equipment manufacturers and original design manufacturers as well as other enclosure manufacturers to help them achieve more effective thermal design and engineering.

The company is working with the Natural Resources Defense Council (NRDC) and the US Environmental Protection Agency (EPA) to institute industry-wide acceptance of new power supply guidelines intended to reduce the overall power consumption of desktop PCs.

Power management has two vectors: reducing active, or 'in use', consumption and reducing consumption when a device is at idle.

Idle-state power management keeps the vital functions of a small form factor device active while the rest of the device is in 'sleep' mode, thereby reducing overall power consumption.

Researchers are developing a low-power state called hypernate, which will achieve power savings close to those of the Advanced Configuration and Power Interface (ACPI) 'suspend to disk' mode but with much lower resume latency.

As utility costs increase and higher density platforms become available, it becomes more and more important to develop technologies and operational strategies that improve power and thermal control at the rack and data centre levels.

Two concepts called Pconfig and power supply management interface (PSMI) that give the IT industry ways to better manage data centre rack density are being developed.

Using Pconfig, the maximum power load based on the configurations of the systems in the rack can be calculated. PSMI allows tracking of actual power consumption using a console manager.

By using power supplies with monitoring capabilities, IT can track the server usage to provide an additional safety margin with Pconfig, and also interact with the heating, ventilation and air conditioning systems to achieve more effective thermal efficiency within the data centres.
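The Pconfig and PSMI ideas above amount to simple bookkeeping: sum worst-case power from each server's configuration, then compare against measured draw and the rack budget. The component wattages and helper names below are assumptions for illustration, not Intel's actual tools:

```python
# Hypothetical sketch of the Pconfig/PSMI idea: estimate a rack's
# worst-case load from each server's configuration, then compare it
# with the rack budget and with measured draw (as PSMI-style power
# supplies might report). All wattages are assumed example values.

COMPONENT_MAX_WATTS = {
    "cpu": 110, "dimm": 10, "disk": 15, "board": 40,
}

def server_max_power(config):
    """Worst-case watts for one server, e.g. {'cpu': 2, 'dimm': 4}."""
    return sum(COMPONENT_MAX_WATTS[part] * count
               for part, count in config.items())

def rack_summary(configs, measured_watts, budget_watts):
    """Configured worst case, measured total and remaining headroom."""
    worst_case = sum(server_max_power(c) for c in configs)
    return {
        "worst_case": worst_case,
        "measured": sum(measured_watts),
        "headroom": budget_watts - worst_case,
    }

# Ten identical servers against a 4 kW rack budget.
servers = [{"cpu": 2, "dimm": 4, "disk": 2, "board": 1}] * 10
summary = rack_summary(servers, measured_watts=[180] * 10,
                       budget_watts=4000)
```

Because measured draw is typically well below the configured worst case, monitoring lets IT pack racks more densely without exceeding the budget.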

To help OEMs create power-efficient systems, Intel builds a variety of reference designs. These hardware models include recommendations for where to locate the processor, memory, heat dissipation technology and other components to optimise the power efficiency of the system.

The holistic approach to power challenges also extends to software.

By offering developers tools such as the VTune performance analyser, which optimises software code so that it takes less time to execute a given task, overall power consumption is lowered.
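The energy benefit of faster code can be seen with back-of-envelope arithmetic: finishing sooner lets the system drop to low idle power for the rest of the interval. The wattages and task times below are assumptions, not measurements:

```python
# Back-of-envelope sketch of why optimised code saves energy: over a
# fixed window, the system is at active power while working and at
# idle power afterwards, so shorter tasks mean less total energy.
# All numbers are assumed example values.

def energy_joules(active_w, idle_w, task_s, window_s):
    """Energy over a window: active while working, idle afterwards."""
    return active_w * task_s + idle_w * (window_s - task_s)

slow = energy_joules(30.0, 2.0, 8.0, 10.0)   # unoptimised: 8 s of work
fast = energy_joules(30.0, 2.0, 5.0, 10.0)   # optimised: 5 s of work
```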

Recommendations are made to component manufacturers on how to avoid common programming errors that prevent the operating system from managing power effectively. Application software developers are also encouraged to modify their programs so that users can trade off application qualities in favour of saving power.

Mobility software-enabling collateral shows developers how to optimise their applications to improve user experience and extend battery life.
