Futurist Ray Kurzweil made a thought-provoking presentation at the recent MD&M in Anaheim, a trade show for medical device companies. His presentation dealt with the accelerating rate of technical progress, predicting the future, and the human life span. Let’s start with one of his discoveries: that the technical future is predictable with good precision, a rule Kurzweil calls the Law of Accelerating Returns.
Kurzweil began with the comment that technical developments form predictable trajectories, and those trajectories are exponential. Consider the progress of the computing industry, and a graph that starts with the 1890 American census and progresses to 1980. The calculations possible per second versus year formed a remarkably smooth curve. “It was doubly exponential. That is, the 1980 census hardware delivered many times the price/performance of the 1890 census equipment. The progress went up through thick and thin, war and peace,” said Kurzweil.
“These are some of the implications of what I call the Law of Accelerating Returns. For medical designers, the technology is getting smaller, and that’s another exponential trend. So, this computer here (referring to his cell phone) is several billion times more powerful per dollar than the computer I used as an undergraduate at MIT. I went to MIT because it was so advanced that it actually had a computer in the late ’60s. It took up the floor of a building. Still, this cell phone is thousands of times more powerful, and one millionth the cost. That’s a several-billion-fold increase in price/performance. The phone is also a tiny fraction of the 1960s computer’s size,” he said.
“Both of those things (smaller and more powerful) will happen again over the next 25 years. This will produce products a hundred thousand times smaller and a billion times more powerful in price/performance, which gives you some idea of what will be feasible.”
Both scales in the accompanying graph are logarithmic. “As you go up the Y axis, each labeled level is a hundred thousand times greater than the level below it. We’re not adding to what we’re measuring, we’re adding zeros: calculations per second per constant dollar. People look at this and say, ‘Ah, Moore’s Law,’ but Moore’s Law is just the part on the right. It was the fifth paradigm to deliver exponential growth in computing, and Moore was not the first to recognize that growth,” said Kurzweil.
For instance, vacuum tubes were shrinking in the 1950s; every year, industry made them smaller to keep this exponential growth going. That hit a wall in 1959. “We could not shrink the vacuum tubes anymore and keep the vacuum. That was the end of tubes, but it was not the end of the exponential growth of computing. We just went to the fourth paradigm, transistors, and then to microprocessors.”
Exponential growth started decades before Gordon Moore was even born. “It’s a doubly exponential curve because a straight line on a logarithmic scale is exponential growth. People have been saying for a long time, ‘Moore’s Law is going to come to an end.’ It will, around 2020, because key feature sizes will then be five nanometers, the width of twenty carbon atoms, and we won’t be able to shrink them anymore,” he said.
However, enter the sixth paradigm: computing in three dimensions. “Our brains are organized in three dimensions, which is really one key source of their power. The most interesting thing about this is: Where’s the slowdown in World War I, World War II, the Cold War, or the Great Depression? People say, well, it must have slowed down during the recent recession. I’m sure this group (the audience) realizes that’s not the case. It has a mind of its own. But it’s really the empirical evidence that’s the most persuasive. Just look at how clear a curve this is. I have data from 1980 projected out to 2050. We’re now at 2016, thirty-five years later, and it’s exactly where it should be. This aspect of the future is remarkably predictable.”
Time published a cover story on Kurzweil’s Law of Accelerating Returns. “Its editors wanted to put my computer graph in the magazine, with a particular computer the magazine had covered as the last point. That point landed right on the curve (referencing the accompanying curve), which I had laid out thirty years earlier. This aspect of the future is just amazingly predictable,” he said.
It’s not just Moore’s Law and it’s not just computing. It pertains to any information technology. For instance, you could buy a transistor for $1 in 1968. Today, you can buy ten billion for a dollar, and they’re better because they’re smaller, so electrons have less distance to travel and they’re faster. The cost of a transistor cycle, which is a measure of the price/performance of all of electronics, halves every year. “You can get the same computation, communication, or genetic sequencing you could last year for half the price this year,” said Kurzweil.
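Those two price points imply a specific number of halvings. A quick back-of-the-envelope check (a sketch based on the figures above, not a calculation from the talk):

```python
import math

# Going from one transistor per dollar (1968) to ten billion per
# dollar today is a 1e10-fold improvement in price/performance.
# Count how many doublings that represents.
doublings = math.log2(1e10)
print(round(doublings, 1))  # about 33 doublings over ~48 years
```

Roughly 33 doublings in 48 years works out to a doubling every year and a half, consistent with the exponential trend the article describes.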
Economists actually worry about deflation, or falling prices, and the curve does represent a 50% deflation rate. “But it’s just for the part of the economy having to do with information, and that part of the economy is gradually expanding. All of health and medicine is now information technology. That was not the case 15 years ago. An enabling factor was the Genome Project, and it was a perfect example of exponential progress. We’ve doubled the amount of genetic sequencing each year,” he said.
Seven years into this 15-year genome project, participants announced they had finished 1% of the genome. Mainstream critics said, “I told you this wasn’t going to work. Here you are, seven years in and only 1% done. It’s going to take seven hundred years, just like we said.” That was linear thinking.
Kurzweil’s reaction at the time was: they are wrong. “We finished 1%, so we’re almost done, because 1% is only seven doublings from 100%.” Indeed, he says, the rate continued to double every year and the project was finished seven years later. That progress has continued since the end of the project. Consider: the first genome cost $1 billion. We’re now down to a few thousand dollars per genome.
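The “almost done at 1%” arithmetic is easy to verify; a minimal sketch of the doubling argument:

```python
# If annual sequencing output doubles every year, count the years
# needed to go from 1% of the genome complete to 100%.
done = 1.0  # percent complete
years = 0
while done < 100.0:
    done *= 2
    years += 1
print(years)  # 7: 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128 percent
```

Seven doublings overshoot 100%, which is why the project could finish seven years after the 1% milestone.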
It’s not just collecting this object code of life, or sequencing, that is doubling every year and growing exponentially. “Our ability to understand this data, to model it, to simulate, and to reprogram it, to change this ancient software, is also growing exponentially. These two have been doubling every year. These technologies are therefore a thousand times more powerful than they were a decade ago when the genome project was completed. They’ll be another thousand times more powerful in 10 years, a billion times more powerful in 20 years.”
This suggests a grand transformation of medicine. You can, for example, turn off the fat insulin receptor gene, one of the 23,000 little software programs we have inside us called genes. The fat insulin receptor gene says, “Hold on to every calorie, because the next hunting season may not work out so well.” That was a good idea ten thousand years ago. You worked all day to get a few calories. There were no refrigerators, so you stored them in the fat cells of your body. Today, that underlies an epidemic of obesity and type II diabetes. “I would like to tell my fat insulin receptor gene, ‘You don’t need to do that anymore. I’m confident the next hunting season will be good’ … at the supermarket.”
In their day, our ancestors ate ravenously and remained slim, and they got the benefits of being slim, such as living longer. A diabetes center Kurzweil mentioned is working with a drug company to bring that effect to human patients. “I’m working with a company whose patients have pulmonary hypertension, a terminal disease caused by a missing gene. The treatment goes like this: A doctor scrapes lung cells from the throat, adds the gene the patient is missing, replicates the cells into several million copies, and injects those millions of cells, now carrying the missing gene, back into the patient. This has actually cured this terminal disease,” he said.
Another example comes from a girl who benefited from four or five different exponentially growing information technologies. She had a damaged windpipe and would not have survived without this development. Doctors scanned her throat with non-invasive imaging (the spatial resolution of which is doubling every year), modeled a new windpipe for her in a CAD system, and printed a scaffold with a 3D printer using biodegradable materials. “Then they added to that scaffolding, with the same 3D printer, modified stem cells that grew into a new windpipe on the scaffolding. That was surgically installed. It worked fine.”
Kurzweil said there are many examples of similar procedures at the edge of clinical practice. “It’s been a decade since the completion of the enabling factor, the genome project. You can now fix a heart broken by a heart attack. (One broken by romance may take some development in virtual reality, he jokes.) My father, for example, had a low ejection fraction after a heart attack. That was the ’60s, when we didn’t know anything about stem cells. Now it is possible to modify stem cells to rejuvenate the heart. The procedure is not yet available in the U.S., but you can get it in Israel and some other countries. Healing hearts is now just a trickle of the eventual clinical impact, but it will be a flood over the next decade. It’s right on the edge of clinical practice.”
He returned to the topic of deflation. “The world economy suffered from massive deflation during the Depression in the 1930s, along with the collapse of consumer confidence. But if you cut the price of something in half, most people will buy more of it. That’s economics 101. But will people actually double their consumption year after year to keep up with this 50% deflation rate? If they don’t, the size of the economy will shrink … not in bits, bytes, or base pairs, but in terms of currency. For a number of good reasons, that would be a bad thing. The good news is: that’s actually not what happens.”
We have more than doubled our consumption of computer memory. “There has been 18% growth in constant currency in every information technology each year for the last 50 years, despite the fact that you can get twice as much of it each year for the same price. What’s the reason for that? It’s what you’re all involved in: innovation. Creating new capabilities as improving price/performance makes them feasible.”
Consider: Why were there no social networks ten years ago? Was it because Mark Zuckerberg was still a junior in high school? No. The reason: It was not feasible. The price/performance wasn’t there. “There were attempts to do it, and then arguments such as: ‘Can we afford to let our users download a picture?’ It just couldn’t be done. Six or seven years ago it became cost effective and took off.”
In the early ’80s Kurzweil says he noticed that the ARPANET (the Advanced Research Projects Agency Network), created by the Department of Defense, connected a thousand scientists. The number of users doubled every year. “I did the math and thought, ‘Whoa, this is going to be a worldwide web connecting hundreds of millions of people to each other and to vast knowledge resources by the late ’90s.’ Others thought that was ridiculous because the Department of Defense could only tie together one or two thousand scientists in a year. But the impact of exponential growth took over and it did happen.”
Kurzweil said he also saw a need for search engines, because you couldn’t find anything, and that the computational and communication resources to create an effective search engine would be in place by the late 1990s. “Now, what I could not predict was that among the 50 different projects to create an effective search engine in the late ’90s would be a couple of kids in a Stanford dorm who would take over the world of search. But the fact that search engines would be needed, and would be feasible, was predictable.”
The internet keeps data traffic doubling every year. On the right (referring to a slide) is the number of bits we move around wirelessly in the world over the last century, starting with Morse code, through AM radio, up to today’s 4G networks. It has been a trillion-fold increase, but again, look at how predictable a phenomenon that is. Here’s that graph of the ARPANET in the early ’80s, which shows the exponential growth. That’s a logarithmic graph representing a billion-fold increase since the early ’80s. On the right is the same data on a linear scale, which is how we experience it. To the casual observer, the worldwide web looked like a new thing, but by the late 1990s you could see it coming.
“I’ll point out a difference between linear and exponential growth. Linear growth is what we have hard-wired in our brains. If you wonder why we have a brain, it’s to predict the future so we can anticipate the consequences of our actions or inaction. But the kind of challenges we had when our brains were evolving tens of thousands of years ago were linear. At that time you might look up and say, ‘Whoa, that big animal is going that way, I’m going a similar direction on this path, and we’re going to meet at that rock. That’s not a good idea. I’m going to take a different path.’ That turned out to be good for survival. That became hard-wired in our brains. We made a linear prediction that worked very well. Predicting linearly is our intuition,” said Kurzweil.
“The primary difference between my critics and me is that we look at the same reality, but they apply linear expectations and linear intuition to the future. The reality of information technology, however, is exponential.”
For instance, a linear projection counts one, two, three, four. An exponential trajectory, which is the reality of information technology, goes one, two, four, eight. It doesn’t sound that different, but by step 30 the linear prediction is at 30 while the exponential trajectory is at a billion. At step 40 it’s at a trillion. It’s not a matter of speculation about the future.
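The gap is plain arithmetic: after n steps a linear count sits at about n, while a doubling sequence sits at 2^n. A quick check:

```python
# After n steps, linear counting has reached n while a doubling
# sequence has reached 2**n.
for n in (30, 40):
    print(f"step {n}: linear = {n}, exponential = {2 ** n:,}")
# 2**30 is just over a billion; 2**40 is just over a trillion.
```

Thirty indistinguishable-looking early steps end up eight orders of magnitude apart, which is why linear intuition fails so badly on exponential processes.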
“We’re also seeing a revolution in medical devices. You can now connect a computer into your brain if you are a Parkinson’s patient. That work has progressed exponentially. The first devices, introduced well over a decade ago, were about the size of my cell phone and required major surgery to insert. Each connected to one point in the brain and ran fairly simple software,” said Kurzweil.
Now the devices are the size of a pea and can be implanted with minimally invasive surgery, connect to many dozens of points, and run fairly sophisticated software. Today it is possible to communicate wirelessly with the device and download new software to the neural implant from outside the patient. That’s today. “We’re shrinking these technologies at an exponential rate. They will be the size of blood cells in the 2030s,” he said.
Supercomputers are another example of how the trend marches along at an exponential pace. Consider this: functionally simulating the human brain will require about 10^14 calculations per second. “That is my estimate, but it also matches a number of independent estimates. We passed that rate with supercomputers a decade ago. We’ll pass it with a personal computer by the early 2020s. The software, however, will take a little longer. I’ve been consistent in saying the software to pass a valid Turing test, to emulate human intelligence, particularly with phenomena like language, will be available around 2029.”
3D printing should be of interest to medical designers because there are valuable niche applications, particularly in printing organs and tissues. “A company I am familiar with is now actually 3D printing scaffolds for hearts, kidneys, and lungs using biodegradable materials, populating them with stem cells, successfully installing these manufactured organs in animals, and gearing up for human trials. This will be a mainstream technology within a decade. I think the golden era of 3D printing is going to start around 2020,” he said.
“I mentioned that by the 2030s we’ll have devices the size of blood cells that can go inside the bloodstream. Precursors of those exist today, but we don’t yet have effective blood-cell-sized devices with computation, sensors, actuators, and storage, that is, sophisticated nanorobots,” said Kurzweil. That will be a 2030s phenomenon.
There will be three applications for nanorobots. The first is to augment our immune system. Intelligent devices are already inside our bloodstream keeping us healthy: our natural T-cells. But they evolved tens of thousands of years ago, when conditions were different. It was not in the interest of the human species for us to live very long. Human life expectancy was 19 years. After 25, you had raised your kids and were just using up the limited food and resources of the tribe. Even by 1800, human life expectancy was just 37.
The immune system was not selected for long life. It doesn’t work against cancer, it doesn’t work on retroviruses, and it has all kinds of limitations. “We can finish that job by creating nanorobots (there have been very detailed analyses of these) that will be able to go against every pathogen: cancer cells, cancer stem cells, viruses, prions, bacteria, amoebas, all kinds of pathogens. You’ll merely download new software from the internet when a new pathogen emerges,” said Kurzweil.
It will also be possible to deal with metabolic disorders. Think about what our body organs do, aside from the heart and the brain: they either put things into the bloodstream or take things out of it. Lungs put in oxygen and take out carbon dioxide. The kidneys take out toxins. The entire digestive tract puts in nutrients. A lot of disorders have to do with those levels going awry, such as not enough insulin or too much glucose. Nanorobots will monitor the bloodstream for all of these substances and put in or take out substances to augment or even replace the function of the organs. There’s a scenario for almost every disease and aging process. It won’t come all at once, but it will be a powerful new tool to finish the job we don’t get done with biotechnology. That’s what will happen in the 2030s.
Another application is virtual and augmented reality from within the nervous system. We’re about to embark on a revolution in virtual and augmented reality with devices we put over our eyes. “I’ve seen some devices you put on your hands for tactile virtual reality, so you can actually hug someone from around the world. Eventually this will be done from within the nervous system, to provide highly realistic, full-immersion virtual and augmented reality.”
The most important application will be to directly connect our brain to the cloud. “We’ve already done that in some limited cases, such as Parkinson’s disease. Once we have devices the size of blood cells, we can go inside the brain through the capillaries non-invasively. This will be a mainstream technology,” he added.
A connection to cloud computers
Our brain is already connected to the cloud indirectly. “I’ve got to use my fingers, my eyes, and ears for this device, but it really is a brain extender. A kid in Africa can access all of human knowledge with a few keystrokes. They’re making us more productive and more intelligent. This will go directly to our brain in the 2030s. It won’t just be a direct connection to search engines in the cloud. We’ll literally expand the scope and capability of our brain,” he said.
Kurzweil said he finally took his first job three years ago, and it’s been great implementing the idea of creating a synthetic neocortex in the cloud. “At first we’ll access it in the old-fashioned way, through more intelligent search engines and being able to talk things over with a personal assistant, but ultimately this will be a direct connection to the cloud.”
These neural modules have been counted. “We have 300 million of them in our neocortex, which is the outer layer of the brain. That’s where we do our thinking. It looks like this (showing a slide). It’s very convoluted and has all these curvatures, basically to increase its surface area. The modules are really all pretty much the same. Our ancestors added additional neocortex two million years ago when we became hominids and developed these large foreheads, basically a larger enclosure to hold more neocortex.
“As we go up the hierarchy of the neocortex, the properties and features it handles get more abstract. The bottom of the hierarchy lets me recognize, for example, that the stage is straight. At the top of the hierarchy, one recognizes, ‘That’s funny. That’s ironic. She’s pretty.’ You might think those modules are more sophisticated. It’s actually the hierarchy below them that’s more sophisticated.”
The addition of neocortex two million years ago at the top of the hierarchy let our ancestors invent language, the first human invention, and later art, science, and medical devices. No other species does any of those things.
“What’s more, we’re going to do it again by connecting the top of our neocortical hierarchy to the cloud. We will add a sophisticated synthetic neocortex in the cloud. I believe we understand how these modules work.” As for predictions, Kurzweil suggested that we’ll create more beautiful music, have more insight into science, be funnier, and so on.
This is what one of the 300 million modules looks like, Kurzweil said, motioning to a slide. “They’re organized in a hierarchy. This is a simple example. At the lowest level, a module might recognize the crossbar of a capital A; at the next higher level, the capital A itself; at the next higher level, the word ‘Apple.’ Five levels higher you’ve got a module where the hierarchy stretches across the different senses. That module may get signals that we’ve seen a pattern in fabric, heard a certain voice quality, and smelled a certain perfume, and conclude that my wife has entered the room. Go up another ten levels and you have modules that recognize humor and realize, ‘Oh, that was ironic,’ and so on.”
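The crossbar-to-A-to-“Apple” idea can be caricatured in a few lines. This toy hierarchy (the class, names, and matching rule are all illustrative assumptions, not Kurzweil’s actual model) shows a higher-level module firing only when the lower-level patterns it expects appear in order:

```python
class Recognizer:
    """A toy pattern-recognition module: fires when the lower-level
    patterns it expects appear, in order, in the observed stream."""

    def __init__(self, name, expected):
        self.name = name
        self.expected = expected  # names of lower-level patterns

    def fires(self, observed):
        it = iter(observed)  # consume observations left to right
        return all(any(pat == seen for seen in it) for pat in self.expected)

# A word-level module sits above letter-level patterns, as in the
# crossbar -> 'A' -> 'Apple' example.
word = Recognizer("APPLE", ["A", "P", "P", "L", "E"])
print(word.fires(list("APPLE")))  # True: all expected patterns seen
print(word.fires(list("APLE")))   # False: one expected 'P' never appears
```

Real cortical modules handle noisy, probabilistic input rather than exact matches, but the structural point survives: each level recognizes a pattern made of the patterns below it.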
While this girl (in a slide) was having brain surgery, she was talking to the surgeons, who wanted her reactions to different things. You can do that because there are no pain receptors in the brain. When they stimulated a particular location, she would start to laugh. The surgeons thought they were triggering some kind of laugh reflex, but they quickly realized they were triggering the actual perception of humor. She found everything hilarious whenever they stimulated those points. “You guys are so funny just standing there,” was a typical comment. They weren’t funny. Not while doing surgery. But they had found points where the neocortex detects humor.
A computer gets the joke
We’re applying this now to language. “This example comes from IBM. Watson (IBM’s more advanced version of Apple’s Siri) recently won the televised Jeopardy tournament against the best two players in the world. Watson got this query correct in the rhyme category. The answer: ‘A long, tiresome speech delivered by a frothy pie topping.’ Watson quickly responded with the question: ‘What is a meringue harangue?’ That’s pretty good. The two human contestants didn’t get it. In fact, Watson got a higher score than the two of them combined. What’s not widely appreciated is that Watson got its knowledge not by being programmed by engineers but by reading Wikipedia and other encyclopedias: 200 million pages of natural-language documents.”
Watson actually doesn’t read pages as well as you would. “It might read one page and conclude, ‘There’s a 56% chance that Barack Obama is the President of the United States.’ If you read the same page, even if you didn’t happen to know who was president in the first place, you could conclude there’s a 98% chance. You would do a better job reading that page than Watson,” said Kurzweil. So how is it that Watson does such a good job, better than the best human players in the world, at this language and knowledge game?
It makes up for its weak reading by reading more pages. Watson has read two hundred million pages, probably including a hundred thousand pages bearing on whether or not Obama is president. Combining all of those probabilities with good Bayesian reasoning, it concludes that overall there’s a 99.9% chance that he’s president. And it can do all of these inferences within the three-second Jeopardy time limit.
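The pooling Kurzweil describes can be sketched as naive-Bayes combination in log-odds space. This is a simplification, not IBM’s actual pipeline: it assumes the per-page readings are independent, and the 56% figure is just the illustration from the talk. Even so, it shows how many weak estimates compound into near-certainty:

```python
import math

def combine(page_probs, prior=0.5):
    """Pool independent per-page probability estimates via log-odds
    (naive Bayes); returns the combined posterior probability."""
    log_odds = math.log(prior / (1 - prior))
    for p in page_probs:
        log_odds += math.log(p / (1 - p))  # add each page's evidence
    return 1 / (1 + math.exp(-log_odds))

# Fifty pages, each read with only 56% confidence, pool to
# well above 99.9% confidence.
print(combine([0.56] * 50))
```

Each page shifts the log-odds by the same small amount, so confidence grows exponentially with the number of agreeing pages; in practice the independence assumption is what limits how far this trick goes.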