What happens when computers stop shrinking

This article is a condensed excerpt from Michio Kaku’s new book, “The Physics of the Future.”

I remember vividly sitting in Mark Weiser’s office in Silicon Valley almost twenty years ago as he explained to me his vision of the future. Gesturing with his hands, he excitedly told me a new revolution was about to happen that would change the world. Weiser was part of the computer elite, working at Xerox PARC (the Palo Alto Research Center, which pioneered the personal computer, the laser printer, and the windows-style graphical user interface), but he was a maverick, an iconoclast shattering conventional wisdom, and also a member of a wild rock band.

Back then (it seems like a lifetime ago), personal computers were new, just beginning to penetrate people’s lives, as they slowly warmed up to the idea of buying large, bulky desktop computers in order to do spreadsheet analysis and a little bit of word processing. The Internet was still largely the isolated province of scientists like me, cranking out equations to fellow scientists in an arcane language.

There were raging debates about whether this box sitting on your desk would dehumanize civilization with its cold, unforgiving stare. Even political analyst William F. Buckley had to defend the word processor against intellectuals who railed against it, refused ever to touch a computer, and called it an instrument of the philistines.

It was in this era of controversy that Weiser coined the expression “ubiquitous computing.” Seeing far past the personal computer, he predicted that chips would one day become so cheap and plentiful that they would be scattered throughout the environment – in our clothing, our furniture, the walls, even our bodies. And they would all be connected to the Internet, sharing data, making our lives more pleasant, monitoring all our wishes. Everywhere we moved, chips would be there to silently carry out our desires. The environment would be alive.

For its time, Weiser’s dream was outlandish, even preposterous. Most personal computers were still expensive and not even connected to the Internet. The idea that billions of tiny chips would one day be as cheap as running water was considered lunacy.

And then I asked him why he felt so sure about this revolution. He calmly replied that computer power was growing exponentially, with no end in sight. Do the math, he implied. It was only a matter of time. (Sadly, Weiser did not live long enough to see his revolution come true, dying of cancer in 1999.)

The driving force behind Weiser’s prophetic dreams is something called Moore’s law, a rule of thumb that has driven the computer industry for more than fifty years, setting the pace for modern civilization like clockwork. Moore’s law simply says that computer power doubles about every eighteen months. According to Moore’s law, every Christmas your new computer games are almost twice as powerful (in terms of the number of transistors) as those from the previous year. Furthermore, as the years pass, this incremental gain becomes monumental.

For example, when you receive a birthday card in the mail, it often has a chip that sings “Happy Birthday” to you. Remarkably, that chip has more computer power than all the Allied forces of 1945. Hitler, Churchill, or Roosevelt might have killed to get that chip. But what do we do with it? After the birthday, we throw the card and chip away. Today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon.
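To see how quickly an eighteen-month doubling compounds, here is a quick back-of-the-envelope check (an illustration added here, not part of Kaku’s text) in a few lines of Python:

    # Compounding of Moore's law: computer power doubling every 18 months.
    DOUBLING_PERIOD_MONTHS = 18

    def growth_factor(years: float) -> float:
        """Multiplicative gain in computing power after `years`."""
        return 2 ** (years * 12 / DOUBLING_PERIOD_MONTHS)

    for years in (1.5, 10, 25, 50):
        print(f"{years:>4} years -> {growth_factor(years):,.0f}x")

    # 1.5 years -> 2x (one doubling); 10 years -> about 100x;
    # 50 years -> roughly a ten-billion-fold increase, the
    # "monumental" gain described above.

A decade of steady doublings buys a hundredfold gain, and half a century buys billions – which is how a throwaway greeting-card chip comes to outclass the computing resources of entire wartime governments.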

