# Nanotechnology and the Future of Computation, Storage and Perception

## Authors

### Navakanta Bhat

Professor and Chair, Centre for Nano Science and Engineering, Indian Institute of Science, Bangalore

## 1 Historical Perspective and Current Status

The continued miniaturization of devices into the nanoscale regime, and the capability to manipulate matter at these dimensions, is expected to revolutionize systems for computation, storage and perception in the coming decades. Nanotechnology is not just a natural evolution of the miniaturization trend from the sub-100 micrometer scale to the sub-100 nanometer scale. The emergence of quantum effects at the nanoscale, with a significant departure from the continuum approximation of physical, chemical and biological processes, brings exciting new possibilities with nanotechnology. In the next few decades, we will go beyond conventional charge-based, digital silicon CMOS technology and incorporate several emerging technologies that exploit nanoscale phenomena, to realize extremely powerful machines for high-performance computation with augmented perception, mimicking the human brain and sensory organs.

Figure 1 depicts the key milestones in the evolution of compute engines. The bulky and power-hungry vacuum tubes used in one of the early digital computers, ENIAC, resulted in rudimentary computation capabilities, with the computer weighing 30 tons and consuming 200 kW of power. This was certainly not a scalable technology. The invention of the semiconductor transistor in 1947 was an inflexion point in the history of miniaturization. This was followed by the invention of the first integrated circuit (IC) a decade later, in 1958. However, most of the early ICs were only memory chips, and the community was concerned about what one would do with all those storage devices. Then the first microprocessor IC, invented in 1971, changed the landscape completely. The Intel 4004, a 4-bit microprocessor, was realized in 10 $\mu\mbox{m}$ PMOS technology, with a chip size of 12 $\text{mm}^2$ and a power consumption of 1 W. This was soon followed by migration to NMOS technology (Intel 8080 in 1974) and CMOS technology (Intel 80386 in 1985). As exemplified by the famous Moore’s law, the miniaturization trend has continued with CMOS technology scaling, resulting in a new generation of manufacturing technology introduced every 2 to 3 years. This technology scaling, coupled with several innovations in system and circuit architectures, has fuelled the growth of ever more powerful compute engines over the years. For instance, in 2013, CMOS technology went through another big change with the introduction of three-dimensional (3D) FinFETs, departing from the conventional planar MOSFETs (Figure 2). By this time, CMOS technology was also enabled by innovations in nanomaterials such as strained silicon-germanium channels and $\mbox{HfO}_2$ high-k gate dielectrics with atomically engineered interfaces. On the architectural front, the introduction of multi-core processors brought unprecedented computing capability, even to hand-held devices.

The Everest chip from XILINX, built on 7 nm CMOS technology with a 3D System-on-Chip fabric, packs 50 billion transistors on a single chip, illustrating an amazing technological achievement. It should be recognized that, in conjunction with miniaturization, the integration of an exponentially larger number of transistors is primarily responsible for today’s high-performance compute and storage chips. As shown in Figure 3, over the last 5 decades, while the “feature size” of transistors has scaled down by 3 orders of magnitude $(10^{-3})$, the number of components on a chip has increased by 7 orders of magnitude $(10^7)$. In conjunction with the migration from microtechnology to nanotechnology, we have also moved from Small Scale Integration (SSI) to Very Large Scale Integration (VLSI) and Giga Scale Integration (GSI).
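The scaling figures above can be checked with a short back-of-the-envelope calculation. The sketch below compares the Intel 4004 with the 50-billion-transistor Everest chip mentioned above; the 4004's transistor count (about 2,300) and the Everest's introduction year (taken here as 2019) are assumptions not stated in this article, used only to illustrate the Moore's-law doubling cadence of roughly two years.

```python
import math

# Transistor counts and years for two chips discussed in the text.
# The 4004 count (~2,300) and the Everest year (2019) are assumed values.
t0_count, t0_year = 2_300, 1971            # Intel 4004
t1_count, t1_year = 50_000_000_000, 2019   # Xilinx Everest (50 billion, per the article)

ratio = t1_count / t0_count
orders_of_magnitude = math.log10(ratio)    # growth in powers of 10
doublings = math.log2(ratio)               # growth in powers of 2
years_per_doubling = (t1_year - t0_year) / doublings

print(f"growth: {orders_of_magnitude:.1f} orders of magnitude "
      f"over {t1_year - t0_year} years")
print(f"implied doubling period: {years_per_doubling:.1f} years")
```

The result, roughly one doubling every two years, is consistent with the "new generation every 2 to 3 years" cadence described earlier.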

