A QUARTERLY PUBLICATION OF ACCS
Integrating the Physical, Cyber and Biological Worlds: Interview with Prof. Jan Rabaey


Debabrata Das, Professor at IIIT-Bengaluru (IIIT-B)

Prof. Debabrata Das of IIIT-Bangalore engages in a conversation with Prof. Jan Rabaey, Professor of EECS at the University of California, Berkeley, in an interview recorded during Prof. Rabaey’s recent visit to India. The two discuss the future of wireless research and emerging opportunities.

Prof. Jan Rabaey is the founding director of the Berkeley Wireless Research Center (BWRC) and the Berkeley Ubiquitous SwarmLab. He has made significant contributions in advanced wireless networks, low-power integrated circuits, sensor networks and ubiquitous computing.

Excerpts from the interview.

Prof. Rabaey on his research path

I’ve been on the faculty at Berkeley for more than 30 years, and my area of interest has always been the impact of wireless, in general, on how we as a society connect with one another. So let me go back a little bit in time to the early 90s, when it became clear that wireless technology would become very important and ubiquitous. A colleague and I set out to answer the question: suppose you have ubiquitous wireless connectivity, with wireless everywhere, how would you build a computer? Would you still build a computer? And the answer we came up with was no, you don’t build computers anymore; you build user-interface devices. Computing will be done somewhere else, and you carry the device with you, a portable device that allows you to interact with the computing in the background. We called that project InfoPad. It was a pad-like device with pen and voice input. Remember, this was 1990, a time when you had no cellular phones yet. The best wireless technology available was 802.11, which gave us a data rate of about 1 megabit per second. We built this device to show that it was indeed plausible, and as always, technology takes time to pick up before it comes to the marketplace. But obviously, 10 years later we got iPads and smartphones and all those kinds of things. My quest has always been whether I can make these things smaller, more effective, more ubiquitous.

From there on, I moved to the domain of what we now call wireless sensor networks. If I can make my radio small, I can build a node that has a computer, a wireless interface and some sensors, and connect them all together in a large network. We had a set of projects showing that you can indeed build something like that at a cubic centimeter, and then later scale it down to smaller and smaller sizes. This led to things like the Internet of Things or Industry 4.0: the idea that we are going to sprinkle the world with connected sensors and connect information, linking the cyber world to the physical world around us. That was the next step.

Then around the mid-2000s, we said, okay, what happens if we go from a cubic centimeter to a cubic millimeter? What could I do at that point in time? That brought me into the whole sphere where I can have electronics talk directly to biological cells, whether neurons or electrical fields in your body, and to wearable devices of all styles. So it’s kind of been a progression of going to smaller and smaller, lower- and lower-power devices, but ultimately extending the application scope from pure communication or data gathering to cyber-physical systems, and ultimately to cyber-biological systems. That’s kind of the road that I’ve been following over the years.

 

Integrated systems of wireless devices: the seamless integration of the physical, cyber and biological worlds

In the past, we were always thinking about individual devices with a particular function, and the function was in the device. With wireless technology, it is possible to have a broad range of devices scattered around in the environment form a network, and the function is no longer in the devices but in the global network itself. The network provides the function; it provides the capability of gathering data of different types and natures, and from different places; of combining it together, either in a centralized or a distributed fashion; of reasoning about it, thinking about it and acting on it. So we are actually starting to make intelligent networks.

This is where we’re going, for instance with 5G as the next step. 5G promises two things. On one side, it’s going to give us higher data rates: remember the one megabit per second of the 1990s; now we’re talking about 10 gigabits per second. The amount of information you can present and transmit is going to be so humongously large that I can start thinking about immersive worlds, real augmented reality; all those types of things are going to be possible. On the other side, the capability of connecting millions to billions of devices allows you to think about a world that is completely filled with connected devices, giving us a direct link between the physical world around us and the cyber world, which used to be separate; they become one thing; they become indistinguishable. People talk about digital twins; they become possible.

This gigantic network of devices and its ubiquity are giving us new functionality and new capabilities, and you can go one step beyond, to the biological world. With sensors on our body or inside our body, we should be able to get a precise view of how our body is operating, helping us to correct things that are wrong. It will give us functionality and capabilities that we don’t have today; you could call it augmentation. This will happen as a result of this gigantic wirelessly connected network. We’re not there yet; we have to think 10 to 15 years forward, but to me this is something that is clearly on the roadmap.

On brain-machine interfaces

The first set of sensors that people have been building was about parametric behavior: measuring heart rate and other things, and from that you figure out how things are going. If I can make my electronics small enough, about the size of a biological cell, I should also have the capability of creating a direct interface between a neuron and an electronic device. I can have neurons in our brain talk to electronic devices, transmit the information out and use it for certain functions that I can describe in a minute. The idea of a brain-machine interface is to create a channel to the biological computer that we have in our head, a direct channel to read information out or to write information back. There is a whole bunch of medical reasons that make this important, such as motor capabilities lost because of accidents or spinal cord injuries, which sever the communication between the limbs and the brain. If I can read the signals in the motor cortex and transmit them out, either to stimulate the limbs or to drive a prosthetic device, I can give motion back to people who have lost it. The same is true, for instance, for people who have Lou Gehrig’s disease, where the ends of the nerves die off and the brain is basically left disconnected from the rest of the body. Prof. Stephen Hawking suffered from it. We could have had an interface that latched on to the speech region, the motor region of the brain, captured those signals and drove them into speech synthesizers; we can make people with those types of diseases speak again and communicate with the rest of the world. The same is true for mental diseases, post-traumatic depression; all those types of things could be addressed if we have effective brain-machine interfaces.

We have demonstrated, on primates [in the lab] as well as on humans, that this technology is indeed feasible. However, the technology today is too bulky and it doesn’t last very long, and we don’t want to have a cable coming out of the head while the person walks around on the street. We have to make sure that it is a technology we can implant, that can last for at least 10 years, that is not powered by a battery, and that has a wireless interface. All those factors together require another step in form factor: shrinkage, reliability, effectiveness. These things need to happen before we have effective brain-machine interfaces that humans can carry with them for tens of years. Again, there is plenty of research still to be done, but the roadmap is clear.

On the evolutionary cycle of hardware

Things go in cycles. Hardware is actually on the upswing again. It’s getting more important, and the main reason is that things like IoT, biomechanical interfaces, self-driving cars, all those types of things are built on innovative hardware platforms. One can argue that Moore’s law is ending and that transistor scaling does not matter anymore. Ultimately, what we’re going to be doing over the next couple of decades is build more sophisticated devices: devices that have more functionality, better sensing capability, computing, communication, energy harvesting, storage and translation, all in smaller and smaller form factors and at higher efficiency. We might see devices that directly communicate with biological tissue, where you interweave biology and physical elements together.

There’s no question in my mind that hardware design capabilities are going to keep advancing very strongly over the next couple of decades. It will be different. It will not be the same thing as building digital circuits by putting gates together. It’s broader: innovation will reign in how we bring things together, going into the third dimension, say, for example, stacking memory on [top of] logic [circuits] and sensors, and interweaving them together. These are the types of things we need to think about, and that’s going to require new methodologies, new ways of production, manufacturing, testing and verification. All those types of things are going to be challenged, so there are plenty of things to be done. I’m a strong believer that hardware will shine in the next decades.

On low-power devices

One way I’ve described my research over the years is “how low can you go”, which is sometimes not the right way to think about it, but there’s still plenty of opportunity. Today we know what the limits are, how deep you can go, and we are still 4 to 5 orders of magnitude away from where we could be. Nature has found some ways to get closer. If I look at biology, some of our cells and the synapses in the brain operate within about two orders of magnitude of the absolute limit, which means we should be able to get close to that model. There’s plenty of opportunity to rethink computing so that we actually get more efficient; there’s plenty of room to go, but it’s not going to be with our traditional, deterministic, instruction-set style of computing; it’s going to be more like machine-learning devices. It’s not going to be easy.

You can watch the video on the ACCS YouTube channel: https://youtu.be/7GPeXB1XhVo