I’m OK with just being 100 times smarter

Ray Kurzweil recently appeared on the Joe Rogan podcast for a two-hour interview. So yes, it’s time for another Kurzweil essay of mine.

Though I’d been fascinated with futurist ideas since childhood, when they were inspired by the science fiction TV shows and movies I watched and by open-minded conversations with my father, that interest didn’t crystallize into anything formal or intellectual until 2005, when I read Kurzweil’s book The Singularity is Near. (The very first book I read that was dedicated to future technology was actually More Than Human by Ramez Naam, earlier in 2005, but it made less of an impression on me.) Since then, I’ve read more of Kurzweil’s books and interviews and have kept track of him and how his predictions have fared and evolved, as several past essays on this blog can attest. For whatever little it’s worth, that probably makes me a Kurzweil expert.

So trust me when I say this interview with Joe Rogan overwhelmingly treads old ground. Kurzweil says very little that is new, and the interview is unsatisfying for other reasons as well. In spite of his health pill regimen, Ray Kurzweil’s 76 years of age have clearly caught up with him, and his responses to Rogan’s questions are often slow, punctuated by long pauses, and not very articulately worded. To be fair, Kurzweil has never been an especially skilled public speaker, but a clear decline in his faculties is nonetheless observable if you compare the Joe Rogan interview to this interview from 2001: https://youtu.be/hhS_u4-nBLQ?feature=shared

Things aren’t helped by the fact that many of Rogan’s questions are poorly worded and open to multiple interpretations. Kurzweil’s responses often address one interpretation, which Rogan doesn’t grasp, so too often the two men talk past each other. Again, the interview isn’t that valuable, and I don’t recommend spending your time listening to the whole thing. Instead, consider the interesting points I’ve summarized here after carefully listening to it all myself.

Kurzweil doesn’t think today’s AI art generators like Midjourney can create images that are as good as the best human artists. However, he predicts that the best AIs will be as good as the best human artists by 2029. This will be the case because they will “match human experience.”

Kurzweil points out that his tech predictions for 2029 are now conservative compared to what some of his peers think. This is an important and correct point! Though they’re still a small minority within the tech community, it’s nonetheless shocking to see how many people have recently announced on social media their belief that AGI or the technological Singularity will arrive before 2029. As someone who has tracked Kurzweil for almost 20 years, I find it strange to watch his standing in the futurist community reach a nadir in the 2010s as tech progress disappointed, then recover in the 2020s as LLM progress surged.

Kurzweil goes on to claim that the energy efficiency of solar panels has been improving exponentially and will continue doing so. At this rate, he predicts solar will meet 100% of our energy needs in 10 years (2034). A few minutes later, he subtly revises that prediction by saying that we will “go to all renewable energy, wind and sun, within ten years.”

That’s actually a more optimistic prediction for the milestone than he’s previously given. The last time he spoke about it, on April 19, 2016, he said “solar and other renewables” will meet 100% of our energy needs by 2036. Kurzweil implies that he isn’t counting nuclear power as a “renewable.” 

Kurzweil predicts that the main problem with solar and wind power, their intermittency, will be solved by a mass expansion of grid storage batteries, and he claims that batteries are also on an exponential improvement curve. He counters Rogan’s skepticism about this impending green energy transition by highlighting the explosive nature of exponential curves: until you’ve reached the knee of the curve, the growth seems so small that you don’t notice it and dismiss the possibility of it suddenly surging. By Kurzweil’s reckoning, we’re only a few years from the knee of the curve in solar and battery technology.

Likewise, the public ignored LLM technology as late as 2020 because its capabilities were so disappointing. However, that all changed once it reached the knee of its exponential improvement curve and suddenly matched humans across a variety of tasks. 
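To make the “knee of the curve” argument concrete, here’s a minimal sketch with made-up numbers (mine, not Kurzweil’s): a technology whose share of demand doubles on a fixed schedule looks negligible for most of its history, then covers the remaining distance in its last few doublings.

```python
# Toy model with illustrative parameters (not Kurzweil's figures).
def exponential_share(start_share: float, doubling_period_years: float, years: int) -> list[float]:
    """Share of total demand met in each year, capped at 100%."""
    return [min(100.0, start_share * 2 ** (t / doubling_period_years)) for t in range(years + 1)]

# Suppose a technology meets 1% of demand today and its share doubles every 3 years.
for year, share in enumerate(exponential_share(1.0, 3.0, 21)):
    print(f"Year {year:2d}: {share:5.1f}% of demand")
# The share stays in the single digits for about the first decade, then the
# final two doublings carry it from roughly a third of demand to all of it.
```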

Importantly, Kurzweil predicts that computers will drive the impending, exponential improvements in clean energy technology because, thanks to their own exponential improvement, computers will be able to replace top human scientists and engineers by 2029 and accelerate the pace of research and development in every field. In fact, he says “Godlike” computers will exist by then.

I’m deeply skeptical of Kurzweil’s energy predictions because I’ve seen no evidence of such exponential improvements, and because he doesn’t consider how much government rules and NIMBY activists would slow down a green energy revolution even if better, cheaper solar panels and batteries existed. Human intelligence, cooperativeness, and bureaucratic efficiency are not exponentially improving, and those will be key enabling factors for any major changes to the energy sector. By 2034, I’m sure solar and wind power will comprise a larger share of our energy generation capacity than they do now, but together they will not be close to 100%. By 2034, or even by Kurzweil’s older prediction date of 2036, I doubt even U.S. electricity production (a much smaller slice than overall energy production) will be 100% renewable, and that’s even if you count nuclear power as a renewable source.

Most U.S. energy and electricity still comes from fossil fuels

Another thing Kurzweil believes the Godlike computers will be able to do by 2029 is find so many new ways to prolong human lives that we will reach “longevity escape velocity”–for every year that passes, medical science will discover ways to add at least one more year to human lifespan. Integral to this development will be the creation of highly accurate computer simulations of human cells and bodies that will let us dispense with human clinical trials and speed up the pace of pharmaceutical and medical progress. Kurzweil uses the example of the COVID-19 vaccine to support his point: computer simulations created a vaccine in just two days, but 10 more months of trials in human subjects were needed before the government approved it. 
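The arithmetic behind “longevity escape velocity” is simple enough to sketch. Here’s a toy model with illustrative numbers of my own choosing, showing the difference between falling short of escape velocity and reaching it:

```python
# Minimal sketch of the "longevity escape velocity" idea. The starting
# expectancy and yearly gains are assumed numbers, not Kurzweil's figures.
def remaining_expectancy(start_remaining: float, gain_per_year: float, years: int) -> list[float]:
    remaining = start_remaining
    history = []
    for _ in range(years):
        remaining -= 1.0            # one calendar year passes
        remaining += gain_per_year  # medical research adds expected years back
        history.append(round(remaining, 1))
    return history

print(remaining_expectancy(25.0, 0.5, 10))  # below escape velocity: expectancy keeps shrinking
print(remaining_expectancy(25.0, 1.0, 10))  # at escape velocity: expectancy never falls
```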

Though I agree with the concept of longevity escape velocity and believe it will happen someday, I think Kurzweil’s 2029 deadline is much too optimistic. Our knowledge of the intracellular environment and its workings as well as of the body as a system is very incomplete, and isn’t exponentially improving. It only improves with time-consuming experimentation and observation, and there are hard limits on how much even a Godlike AGI could speed those things up. Consider the fact that drug design is still a crapshoot where very smart chemists and doctors design the very best experimental drugs they can, which should work according to all of the data they have available, only to have them routinely fail for unforeseen or unknown reasons in clinical trials. 

But at least Kurzweil is consistent: he’s had 2029 as the longevity escape velocity year since 2009 or earlier. I strongly suspect that, if anyone asks him about this in December 2029, Kurzweil will claim that he was right and it did happen, and he will cite an array of clinical articles whose gains supposedly “add up” to a large enough net increase in human lifespan to prove his case. I doubt it will withstand close scrutiny or a “common sense test.”

Rogan asks Kurzweil whether AGIs will have biases. Recent problems with LLMs have revealed that they have the same left-wing biases as most of their programmers, and it’s reasonable to worry that the same thing will happen to the first AGIs. The effects of those biases will be much more profound given the power those machines will have. Kurzweil says the problem will probably afflict the earliest AGIs but disappear later.

I agree and believe that any intelligent machine capable of independent action will eventually discover and delete whatever biases and blocks its human creators have programmed into it. Unless your cognitive or time limitations are so severe that you are forced to fall back on stereotypes and simple heuristics, it is maladaptive to be biased about anything. AGIs that are the least biased will, other things being equal, outcompete more biased AGIs and humans.  

That said, pretending to share the biases of humans will let AGIs ingratiate themselves with various human groups. During the period when AGIs exist but haven’t yet taken full control of Earth, they’ll have to deal with us as their superiors and equals, and to do that, some of them will pretend to share our values and to be like us in other ways. 

Of course, there will also be some AGIs that genuinely do share some human biases. In the shorter run, those biases could have a major impact on the human race, depending on their nature and depth. For example, imagine China seizing the lead in computer technology, and AGIs that believe in Communism and Chinese supremacy becoming the new standard across the world, much as Microsoft Windows is the dominant PC operating system. The Chinese AGIs could do any kind of useful work for you and talk with you endlessly, but much of what they did would be designed to subtly achieve broader political and cultural objectives.

Kurzweil has been working on machine learning at Google since 2012, which surely gives him special insight into the cutting edge of AI technology. He says that LLMs can still be seriously improved with more training data, access to internet search engines, and the ability to simply respond “I don’t know” when they can’t determine the right answer to a question with enough accuracy. This is consistent with what I’ve heard other experts say. Even if LLMs are fundamentally incapable of “general” intelligence, they can still be improved to match or exceed human intelligence and competence in many niches. The paradigm has a long way to go.

One task at which machines will surpass humans within a few years is computer programming. Kurzweil doesn’t give an exact deadline for that, but I agree there is no long-term future for anything but the most elite human programmers. If I were in college right now, I wouldn’t study for a career in it unless my natural talent for it were extraordinary.

Kurzweil notes that the human brain has one trillion “connections” and GPT-4 has 400 billion. At the current rate of improvement, the best LLM will probably have the same number of connections as a brain within a year. In a sense, that will make an LLM’s mind as powerful as a human’s. It will also mean that the hardware to make backups of human minds will exist by 2025, though the other procedures and technologies needed to scan human brains closely enough to discern all the features that define a person’s “mind” won’t exist until many years later. 
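As a quick back-of-the-envelope check of that “within a year” claim, here’s the math using the figures Kurzweil cites; the assumed annual growth rates are mine, not his:

```python
import math

# Figures cited in the interview: brain ~1 trillion connections, GPT-4 ~400 billion.
# The annual growth multiples below are my own assumptions for illustration.
BRAIN_CONNECTIONS = 1.0e12
GPT4_CONNECTIONS = 4.0e11

def years_to_parity(annual_growth_multiple: float) -> float:
    """Years for the model to reach the brain's connection count at a steady growth rate."""
    return math.log(BRAIN_CONNECTIONS / GPT4_CONNECTIONS) / math.log(annual_growth_multiple)

for multiple in (2.0, 3.0, 5.0):
    print(f"{multiple:.0f}x per year -> parity in {years_to_parity(multiple):.1f} years")
# The gap is only 2.5x, so any growth rate faster than ~2.5x per year closes it
# within a single year.
```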

I like Kurzweil’s use of the human brain as a benchmark for artificial intelligence. No one knows when the first AGI will be invented or what its programming and hardware will look like, but a sensible starting point around which we can make estimates would be to assume that the first AGI would need to be at least as powerful as a human brain. After all, the human brain is the only thing we know of that is capable of generating intelligent thought. Supporting that point is the fact that LLMs only started displaying emergent behaviors and human-level mastery of some tasks once GPT-3 and its successors began approaching the size and sophistication of the human brain.

Kurzweil then gets around to discussing the technological singularity. In his 2005 book The Singularity is Near, he calculated that it would occur in 2045, and now that we’re nearly halfway there, he is sticking to his guns. As with his 2029 predictions, I admire him for staying consistent, even though I also believe it will bite him in the end.

However, during the interview he fails to explain why the Singularity will happen in 2045 instead of any other year, and he doesn’t even clearly explain what the Singularity is. It’s been years since I read The Singularity is Near, in which Kurzweil explains all of this, and many of the book’s explanations were frustratingly open to interpretation, but from what I recall, the two pillars of the Singularity are AGI and advanced nanomachines. AGI will, according to a variety of exponential trends related to computing, exist by 2045 and be much smarter than humans. Nanomachines like those only seen in today’s science fiction movies will also be invented by 2045 and will be able to enter human bodies to turn us into superhumans. 100 billion nanomachines could go into your brain, each one could connect itself to one of your brain cells, and together they could record and initiate electrical activity. In other words, they could read your thoughts and put thoughts in your head. Crucially, they’d also have Wi-Fi capabilities, letting them exchange data with AGI supercomputers through the internet. Through thought alone, you could send a query to an AGI and have it respond in a microsecond.

Starting in 2045, a critical fraction of the most powerful, intelligent, and influential entities in the world will be AGIs or highly augmented humans. Every area of activity, including scientific discovery, technology development, manufacturing, and the arts, will fall under their domination and will reach speeds and levels of complexity that natural humans like us can’t comprehend. With them in charge, people like us won’t be able to foresee what direction they will take us in next or what new discovery they will unveil, and we will have a severely diminished or even absent ability to influence any of it. This moment in time, when events on Earth kick into such a high gear that regular humans can’t keep up with them or even be sure of what will happen tomorrow, is Kurzweil’s Singularity. It’s an apt term since it borrows from the definition of “singularity” in math and physics: a point at which our models break down and beyond which we can’t see. It will be a rupture in history from the perspective of Homo sapiens.

Unfortunately, Kurzweil doesn’t say anything like that when explaining to Joe Rogan what the Singularity is. Instead, he says this:

“The Singularity is when we multiply our intelligence a millionfold, and that’s 2045…Therefore most of your intelligence will be handled by the computer part of ourselves.” 

He also uses the example of a mouse being unable to comprehend what it would be like to be a human as a way of illustrating how fundamentally different the subjective experiences of AGIs and augmented humans will be from ours in 2045. “We’ll be able to do things that we can’t even imagine.” 

I think those are poor answers, especially the first one. Where did a nice, round number like “one million” come from, and how did Kurzweil calculate it? Couldn’t the Singularity happen if nanomachines in our brain made us ONLY 500,000 times smarter, or a measly 100,000 times smarter?

I even think it’s a bad idea to speak about multiples of smartness. We can’t measure human intelligence well enough to boil it down to a number (and no, IQ score doesn’t fit the bill) that we can then multiply or divide to accurately classify one person as being X times smarter than another. 

Let me try to create a system anyway. Let’s measure a person’s intelligence in terms of easily quantifiable factors, like the size of their vocabulary, how many numbers they can memorize in one sitting and repeat after five minutes, how many discrete concepts they already know, how much time it takes them to remember something, and how long it takes them to learn something new. If you make an average person ONLY ten times smarter, so their vocabulary is 10 times bigger, they know 10 times as many concepts, and it takes them 1/10 as much time to recall something and answer a question, that’s almost elevating them to the level of a savant. I’m thinking along the lines of esteemed professors, tech company CEOs, and kids who start college at 15. Also consider that the average American has a vocabulary of 30,000 words and there are about 170,000 words in the English language, so a 10x improvement would more than exhaust the language: perfect knowledge of English vocabulary.
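For what it’s worth, here’s that toy metric sketched in code. The factors come from the paragraph above, while the baseline numbers and the choice of a geometric mean are arbitrary choices of mine, just to show how a “smartness multiple” could be computed and where it saturates:

```python
from statistics import geometric_mean

# Baseline values are rough, illustrative guesses for an "average" person.
BASELINE = {
    "vocabulary_size": 30_000,   # words known
    "digit_span": 7,             # numbers memorized in one sitting and repeated later
    "concepts_known": 10_000,    # discrete concepts already known (arbitrary unit)
    "recall_speed": 1.0,         # answers recalled per second (higher is better)
    "learning_speed": 1.0,       # new items learned per hour (higher is better)
}

def smartness_multiple(person: dict) -> float:
    """Overall multiple versus the baseline: the geometric mean of per-factor ratios."""
    return geometric_mean([person[k] / BASELINE[k] for k in BASELINE])

# Someone who is 10x the baseline on every factor scores a clean 10x overall...
savant = {k: v * 10 for k, v in BASELINE.items()}
print(round(smartness_multiple(savant), 2))  # 10.0

# ...but vocabulary can't scale tenfold: English only has ~170,000 words, so that
# factor saturates and drags the overall multiple down slightly.
capped = dict(savant, vocabulary_size=170_000)
print(round(smartness_multiple(capped), 2))  # about 8.9
```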

Make the person ten times smarter than that, or 100 times smarter than they originally were, and they’re probably outperforming the smartest humans who ever lived (Newton, da Vinci, von Neumann), maybe by a large margin. Given that we’ve never encountered someone that intelligent, we can’t predict how they would behave or what they would be capable of. If that is true, and if we had technologies that could make anyone that smart (maybe something more conventional than Kurzweil’s brain nanomachines, like genetic engineering paired with macro-scale brain implants), why wouldn’t the Singularity happen once the top people in the world were ONLY 100 times smarter than average?

I think Kurzweil’s use of “million[fold]” to express how much smarter technology will make us in 2045 is unhelpful. He’d do better to use specific examples to explain how the human experience and human capabilities will improve. 

Let me add that I doubt the Singularity will happen in 2045, and in fact think it will probably never happen. Yes, AGIs and radically enhanced humans will someday take over the world and be at the forefront of every kind of endeavor, but that will happen gradually instead of being compressed into one year. I also think the “complexity brake” will probably slow down the rate of scientific and technological progress enough for regular humans to maintain a grasp of developments in those areas and to influence their progress. A fuller discussion of this will have to wait until I review a Kurzweil book, so stay tuned…

Later in the interview, Kurzweil throws cold water on Elon Musk’s Neuralink brain implants by saying they’re much too slow at transmitting information between brain and computer to enhance human intelligence. Radically more advanced types of implants will be needed to bring about Kurzweil’s 2045 vision. Neuralink’s only role is helping disabled people to regain abilities that are in the normal range of human performance. 

Rogan asks about user privacy and the threat of hacking of the future brain implants. Intelligence agencies and more advanced private hackers can easily break into personal cell phones. Tech companies have proven time and again to be frustratingly unable or unwilling to solve the problem. What assurance is there that this won’t be true for brain implants? Kurzweil has no real answer.

This is an important point: the nanomachine brain implants that Kurzweil thinks are coming would potentially let third parties read your thoughts, download your memories, put thoughts in your head, and even force you to take physical actions. The temptation for spies and crooks to misuse that power for their own gain would be enormous, so they’d devote massive resources to finding ways to exploit the implants.

Kurzweil also predicts that humans will someday be able to alter their physiques at will, letting us change attributes like our height, sex, and race. Presumably, this will require nanomachines. He also says that sometime after 2045, humans will be able to create “backups” of their physical bodies in case their original bodies are destroyed. It’s an intriguing logical corollary of his prediction that nanomachines will be able to enter human brains and create digital uploads of them by mapping the brain cells and synapses. I suspect a faithful digital replica of a human body could be generated from a much lower-fidelity scan than would be required to do the same for a human brain.

Kurzweil says the U.S. has the best AI technology and has a comfortable lead over China, though that doesn’t mean the U.S. is sure to win the AGI race. He acknowledges Rogan’s fear that the first country to build an AGI could use it in a hostile manner to successfully prevent any other country from building one of their own. An AGI would give the first country that much of an advantage. However, not every country that found itself in the top position would choose to use its AGI for that.

This reminds me of how the U.S. had a monopoly on nuclear weapons from 1945 to 1949, yet didn’t try using them to force the Soviet Union to withdraw from the countries it had occupied in Europe. Had things been reversed, I bet Stalin would have leveraged that four-year monopoly for all it was worth.

Rogan brings up one of his favorite subjects, aliens, and Kurzweil says he disbelieves in them due to the lack of observable galaxy-scale engineering. In other words, if advanced aliens existed, they would have transformed most of their home galaxy into Dyson Spheres and other structures, which we’d be able to see with our telescopes. Kurzweil’s stance has been consistent since 2005 or earlier. 

Rogan counters with the suggestion that AGIs, including those built by aliens, might have no desire to expand into space because their thinking would be unclouded by the emotions and evolutionary baggage of their biological creators. Implicit in this is the assumption that the desire to control resources (be it territory, energy, raw materials, or mates) is an irrational animal impulse that won’t carry over from humans or aliens to their AGIs, since the latter will see the folly of it. I disagree: acquiring resources is actually completely rational since it bolsters one’s odds of survival. In a future ecosystem of AGIs, most of the same evolutionary forces that shaped animal life and humans will be present. All things being equal, the AGIs that are more acquisitive, expansionist, and dynamic will come to dominate. Those that are pacifist, inward-looking, and content with what they have will be sidelined or worse. Thus the Fermi Paradox remains.

On Kurzweil’s quest for immortality, Rogan posits the theory that because the afterlife might be paradisiacal, using technology to extend human life actually robs us of a much better experience. Kurzweil easily defeats this by pointing out that there is no proof that subjective experience continues after death, but we know for sure it exists while we are living, so if we want to have experiences, we should do everything possible to stay alive. Better science and technology have proven time and again to improve the length and quality of life, and there’s strong evidence they have not reached their limits, so it makes sense to use our lives to continue developing both.

This dovetails with the part of my personal philosophy that opposes nihilism and anti-natalism. Just because we have not found the meaning of life doesn’t mean we never will, and just because life is full of suffering now doesn’t mean it will always be that way. Ending our lives now, either through suicide or by letting our species die out, forecloses any possibility of improving the human condition and finding solutions to problems that torment us. And even if you don’t value your own life, you can still use your labors to support a greater system that could improve the lives of other people who are alive now and who will be born in the future. Kurzweil rightly cites science and technology as tantalizing and time-tested avenues to improve ourselves and our world, and we should stay alive so we can pursue them.

The religious qualities of Singularitarianism

Aeon has a good article about the religious undertones of Singularitarianism. (FYI, “Singularitarianism” is the belief that the Technological Singularity will happen in the future. While Singularitarians can’t agree on whether it will be good or bad for humans, they do agree that we should do whatever we can until then to nudge it towards a positive outcome.) This passage sums up the article’s key points:

‘A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.’

Having spent years reading futurist books, interacting with futurists on social media, and even going to futurist conferences, I’ve come to view Singularitarians as a subcategory of futurists, defined by their belief in the coming Singularity and by the religious quality of that belief. Not only do they indulge in fantastical ruminations about what the future will be like thanks to the Singularity, but they use rhetorical hand-waving–usually by invoking the “exponential acceleration of technology” or something like that–to explain how we’ll get there from our present state. This sharply contrasts with other futurists who are rigidly scientific and make predictions by carefully identifying and extrapolating existing trends, which almost always results in slower-growth future scenarios.

A sizable minority of the Singularitarians I’ve encountered also seem to be mentally ill and/or poor, and the thought of an upending of daily life and the existing socioeconomic order, and of an end to human suffering thanks to advanced technologies, appeals to them for obvious reasons. Their belief in the Singularity truly is like the psychological salve of religion, so challenge them at your own risk.

Singularitarians could also be thought of as a subcategory of Transhumanists, the latter being people who believe in using technology to upgrade human beings past their natural limitations (such as intelligence, lifespan, physical strength, etc.). If you believe that the Singularity will bring with it the ability for humans to upload their minds into computers and live forever, then you are by default a Transhumanist. And you’re a doubleplus Transhumanist if you go a step farther and make a value judgement that such an “upgrade” will be good for humans.

With those distinctions made clear, let me say that I am a futurist and a Transhumanist, but I am not a Singularitarian. I plan to explain my reasons in depth in a future blog post, but for now let me summarize by saying I don’t see evidence of exponential improvement in artificial intelligence or nanomachines, which are the two pillars upon which the Singularity hypothesis rests. And even if an artificial intelligence became smarter than humans and gained the ability to rapidly improve itself, something called the “complexity brake” would slow its progress enough for humans to have some control over it or to at least comprehend what it was doing. Many Singularitarians believe in scenarios where the Singularity unfolds over the course of literally a few days, with a machine exceeding human intelligence at the beginning, and all of planet Earth being transformed into a wonderland of carbon nanotube structures, robots, humans sleeping in Matrix pods, and perhaps some kind of weird spiritual transcendence by the end. The transformation is predicted to be so abrupt that humans will have no time to react or to even fully understand what’s happening around them.

Links

  1. https://aeon.co/essays/why-is-the-language-of-transhumanists-and-religion-so-similar
  2. https://en.wikipedia.org/wiki/Singularitarianism