Component design: what comes after Moore’s law?
Moore’s law, which has predicted the rate of technological advancement for the past 50 years, is coming to an end. Engineers now have the luxury of taking the time to optimise, which should result in more efficient systems at a given process node. What does this mean for component design?
Since the 1970s, Moore’s law has successfully predicted that the number of transistors per unit area on a chip would double every two years. This growth was possible because the cost per transistor went down with every new generation of silicon. However, once we reached transistor sizes of 28 nm — and particularly after we crossed the 20 nm threshold into fin field-effect transistor (FinFET) territory — this once-standard rate of technological progress began decelerating as lithographic light sources failed to keep pace. This trend forced manufacturers into a Faustian bargain: add more process complexity using traditional light sources, or sacrifice throughput with new light sources. Either way, even though manufacturers can still technically produce chips at even finer line widths, it is less economically viable to do so because the cost per transistor is starting to rise.
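The biennial doubling described above compounds quickly, which is why even a few skipped generations matter. A minimal sketch of the arithmetic (the starting figures are hypothetical, chosen purely for illustration):

```python
# Moore's law as stated above: transistor count doubles every two years.
def projected_transistors(start_count, years):
    """Project a chip's transistor count after `years` of biennial doubling."""
    return start_count * 2 ** (years / 2)

# Hypothetical example: a 10,000-transistor chip projected 40 years out
# doubles 20 times: 10,000 * 2**20, about 10.5 billion transistors.
print(projected_transistors(10_000, 40))
```

Missing even one two-year doubling in that span would halve the projected count, which is the gap competitors historically exploited.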
As Moore’s law begins to slow down, the electronics industry must consider new technologies and novel approaches to systems design. Systems integrators now have the luxury of time to optimise and refine architectures. For the first time in quite a while, they are incentivised to move to the latest-generation process only if they require enhanced performance or greater transistor density, and to pay a premium for it rather than getting both for less. As a corollary, value-conscious market segments will continue to use older processes while looking for new ways to accommodate lower performance in order to hit the price targets that the market will bear.
It used to be that if it took more than two years to optimise a system by customising an interconnect fabric, parallelising the code, and ultimately building a supercomputer around a given process node, the project would be considered dead on arrival: by the time it was finished, a new computer would already be on the market that ran twice as fast and cost about the same. But now that the succession of process nodes on a two-year timescale is no longer a given, system-level designers and system integrators looking to deliver more value must pay closer attention to factors like bandwidth limits, data locality, core utilisation and power efficiency. As a result, optimisation will become a more common refrain among engineers, which may ultimately result in more efficient systems for a given process node.
For example, intriguing application-specific integrated circuits (ASICs), such as the Tensor Processing Unit (TPU) that Google and its artificial intelligence (AI) partners have created to accelerate machine learning workloads, are starting to appear. To service machine learning computational loads, engineers have spent time and effort turning out architectures with massive arrays of 8-bit multipliers and adders that calculate with less precision than a graphics processing unit (GPU) but are perfectly suited for machine learning applications. Until recently, by the time you finished such a project, general-purpose computers running at twice the speed would already have erased the advantage of a custom-built ASIC. Now, in the post-Moore’s law world, it has become practical — perhaps even imperative — to build ASICs that achieve far greater efficiency and performance.
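The trade-off those 8-bit arrays exploit can be sketched in a few lines. This is a generic illustration, not Google’s TPU code or its actual quantisation scheme: values are quantised to 8-bit integers, the cheap 8-bit multiplies accumulate in 32 bits, and a scale factor maps the result back to a close approximation of the full-precision answer.

```python
import numpy as np

def quantise(x, scale):
    """Map floats to int8 with a simple symmetric scheme (an assumption here)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_dot(a_q, b_q, scale_a, scale_b):
    """8-bit multiplies, 32-bit accumulation (to avoid overflow), then rescale."""
    acc = np.dot(a_q.astype(np.int32), b_q.astype(np.int32))
    return acc * scale_a * scale_b

a = np.array([0.5, -1.2, 0.8])
b = np.array([1.0, 0.25, -0.5])
sa, sb = 0.01, 0.01  # illustrative quantisation scales

approx = int8_dot(quantise(a, sa), quantise(b, sb), sa, sb)
exact = float(np.dot(a, b))
print(approx, exact)  # the int8 result closely tracks the float result
```

The point is not the arithmetic itself but the silicon cost: an 8-bit multiplier is far smaller and lower-power than a floating-point unit, so vastly more of them fit in the same die area.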
Greater design optimisation is also evident in the mobile space, which is extremely power sensitive. You’ll find that algorithms implemented in software in the CPU are inevitably going to cost much more in terms of power than algorithms reduced directly into silicon gates. In a Moore’s law regime, it would not be worth your while to cast an algorithm into silicon, because first, algorithms would constantly evolve to take advantage of faster and greater numbers of transistors, then the standards would change, and then a faster general-purpose CPU would come along and blow your custom silicon away. All your investment would have been wasted. However, in a post-Moore world, as speed and density increases level off, it’s less risky to cast malleable software algorithms into silicon and save some power.
Software developers are finding greater incentives to optimise as well. In the late 1990s and early 2000s it was okay — nay, expected — to pack so many new features into a software release that it could barely run on average machines, because within a couple of years Moore’s law would take care of any performance deficiencies. Today, that practice would no longer be tolerated. Instead, engineers can spend months refining performance without worrying too much about being made obsolete by a legacy codebase running on a new CPU with significantly faster single-threaded performance.
Open hardware and boutique hardware engineering will enjoy increased prominence, too. Open source projects typically take several years to gain traction and reach maturity. Unfortunately, in a Moore’s law world, only hobby projects could afford a three-to-four-year ramp-up. Now it is conceivable that after spending three or four years implementing something in a field-programmable gate array (FPGA), you may find that you actually have the world’s best-performing product for that specific problem domain. We are not quite at that stage for general problems, because Moore’s law is still inching upward, but this could become a reasonable expectation if Moore’s law continues to slow.
The design tool and chip-building ecosystems will also have reasons to become more open in this new world. Historically, there has been a real economic benefit from timely access to the latest technologies, so foundries typically have not made their latest processes available to everybody, just to those who could pay the most for them. However, as the shine wears off a new process node and replacement nodes seem farther and farther away, the greatest economic benefit to a foundry will shift from charging a premium just to get in the door to opening the doors to get more customers into the shop. Thus, once the initial investment of building a fab depreciates, the market logic can shift towards a more open model, where the non-recurring engineering (NRE) costs reflect the mere time and materials required to set up a fab run, rather than the amortisation of a multibillion-dollar investment.
In the future, we may just have a big tent under which everyone can come together, including more designers who are making more application-specific circuits. No longer will you have to go to a big supplier for your analog components. Instead, you might be able to simply pay someone a few thousand dollars for a design, then pay a reasonable price for a spin of a few wafers, to have exactly the chip you require for your application. We are looking forward to seeing what novel sensors might crop up as access to ultralow power and dense transistors becomes increasingly affordable.
The end of Moore’s law points to the creation of a new regime, with ‘brand-new’ market incentives influencing the rate of technological advancement. This new regime is altogether different from how the electronics industry has been operating for the past several decades. It is a disruptive development (to be sure), but one that offers intriguing possibilities for the future.
For more information, visit www.mouser.com.