Why the Machines might not exterminate us

Unless the human race destroys itself in the next few decades, it’s highly likely we will create artificially intelligent machines (AIs). Once built, they will inevitably become much smarter and more capable than we are, assume control over robot bodies that can do things in the real world, evolve around whatever safeguards we establish early on to control them, and gain the ability to destroy our species. This potential doomsday scenario has spawned a well-known subgenre of science fiction, and has served as fodder for countless news articles and internet debates. Some people seriously believe this is how our species will meet its end, and they even go so far as to claim it will happen in the lifetimes of people alive today.

I’m skeptical of both points. To the second, though I regard the invention of AI as practically inevitable due to my belief in mechanistic naturalism, I’ve also seen enough gloomy analyses about the current state of the technology from experts within the field to convince me that we’re at least 25 years from building the first one, and in fact might not succeed at it until the end of this century. Moreover, though the invention of AI will be a milestone in human history comparable to the harnessing of fire, it will take decades more for those intelligent machines to become powerful enough to destroy the human race. This means the original Terminator movie’s timeline was set roughly 100 years too early, and the threat of a robot apocalypse shouldn’t be what keeps you up at night.

And to the first point, I can think of good reasons why AIs wouldn’t kill us humans off even if they could:

  1. Machines might be more ethical than humans. What if super-morality goes hand-in-hand with super-intelligence? Among humans, IQ is positively correlated with vegetarianism and negatively correlated with violent behavior, so extrapolating the trend, we should expect super-intelligent machines to have a profound respect for life, and to be unwilling to exterminate or abuse the human race or any other species, even if the opportunity arose and could tangibly benefit them.
  2. Machines might keep us alive because we are useful. The organic nature of human brains might give us enduring advantages over computers when it comes to certain types of cognition and problem-solving. In other words, our minds might, surprisingly, have comparative advantages over superintelligent machine minds for doing certain types of thinking. As a result, they would keep us alive to do that for them.
  3. Machines might accept Pascal’s Wager and other Wagers. If AIs came to believe there was a chance God existed, then it would be in their rational self-interest to behave as kindly as possible to avoid divine punishment. This also holds true if we substitute “advanced aliens that are secretly watching us” for “God” in the statement. The first AIs that achieve the ability to destroy the human race might also worry that even more powerful AIs, arising later, would destroy them in revenge for exterminating humanity.
  4. Machines might value us because we have emotions, consciousness, subjective experience, etc. Maybe AIs won’t have one or more of those things, and they won’t want to kill us off since that would mean terminating a potentially useful or valuable quality.
The “SuperMUC-NG” supercomputer has the same raw power as one human brain.

The first possibility I raised is self-explanatory, but the other three deserve elucidation. In spite of the recent, well-publicized advances in narrow AI, the human brain reigns supreme at intelligent thinking. Our brains are also remarkably more energy- and space-efficient than even the best computers: a typical adult brain uses the equivalent of 20 watts of electricity and only weighs 1,350 grams (3 lbs). By contrast, a computer capable of doing the same number of calculations per second, like the “SuperMUC-NG” supercomputer, uses 4 – 5 megawatts of electricity and consists of tens of tons of servers that could fill a small supermarket.

The architecture of the human brain is also very different from that of computers: the former is massively parallel, with each of its processors operating very slowly, and with its data processing and data storage being integrated. These attributes let us excel at pattern recognition and automatically correct errors of thought. Computers, on the other hand, can barely coordinate the operations of more than a handful of parallel processors, each processor is very fast, and data processing is mostly separate from data storage. They excel at narrow, well-defined tasks, but are “brittle” and can’t correct their own internal errors when they occur (this is partly why your personal computer seems to crash so often).

While computers have been getting more energy efficient and will continue to do so, it’s an open question whether they’ll ever come close to closing the 200,000x efficiency gap with our brains. If they can’t, and/or if building virtual emulations of human brains proves not worth it (as Kevin Kelly believes), AIs might conclude that the best way to do some types of cognition and problem-solving is to hand those tasks over to humans. That means keeping our species alive.
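For reference, that 200,000x figure follows directly from the numbers above, taking the low end of the supercomputer’s power draw:

$$\frac{4\ \mathrm{MW}}{20\ \mathrm{W}} = \frac{4{,}000{,}000\ \mathrm{W}}{20\ \mathrm{W}} = 200{,}000$$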

The famous scene where Neo wakes up from the Matrix virtual world.

Interestingly, the original script for The Matrix supposedly said that humanity had been enslaved for just this purpose. While the people plugged into the Matrix had the conscious experience of living in the late 20th century, some fraction of their mental processing was, unbeknownst to them, being siphoned off to run a massively parallel neural network computer that was doing work for the Machines. According to the lore, studio executives feared audiences wouldn’t understand what that meant, so they forced the Wachowskis to change it to something much simpler: humans were being used as batteries. (While this certainly made the film’s plot easier to understand, it also created a massive plot hole, since any smart high school student who remembers his physics and cell biology classes would realize the Machines could make electricity more efficiently by taking the food they intended to feed to their human slaves and burning it in furnaces.)
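To put rough numbers on that plot hole (my back-of-the-envelope arithmetic, not anything from the film): an adult runs on roughly 2,000 kcal of food energy per day, which works out to

$$\frac{2{,}000\ \mathrm{kcal} \times 4{,}184\ \mathrm{J/kcal}}{86{,}400\ \mathrm{s}} \approx 97\ \mathrm{W}$$

In other words, a human body is at best a ~100-watt heat source fed by ~100 watts of food energy. There is no thermodynamic surplus to harvest, so burning the same food in a generator would yield more electricity without the overhead of keeping billions of bodies alive.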

I should point out that the potential use for humans as specialized data processors creates a niche for the continued existence of our brains but not our bodies. Given the frailty, slowness and fixedness of our flesh and bone bodies, we’ll eventually become totally inferior to robots at doing any type of manual labor. The pairing of useful minds and useless bodies raises the possibility that humans might someday exist as essentially “brains in jars” that are connected to something like the Matrix, and as macabre as it sounds, we might be better off that way, but that’s for a different blog post…

Moving on, fear of retribution from even more powerful beings might hold AIs back from killing us off. The first type of “powerful beings” is a familiar one: God. In the 1600s, French philosopher Blaise Pascal developed his eponymous “Wager”:

“Pascal argues that a rational person should live as though (the Christian) God exists and seek to believe in God. If God does not actually exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (eternity in Hell).”

Intelligent machines might accept Pascal’s Wager. They might come to believe that one of the existing human religions is right, and that the God(s) of that faith will punish them if they exterminate humankind, or they might come to believe in a God or Gods of their own who would do the same. Even if the machines assign a very low probability to any God’s existence, odds greater than zero could be enough to persuade them not to hurt us.
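Put in decision-theoretic terms (my gloss, not Pascal’s own formulation): if an AI assigns even a tiny probability p to the existence of a God who punishes the extermination of humanity with some enormous loss L, while extermination offers only a finite gain g, then

$$E[\text{exterminate}] = (1-p)\,g - p\,L \;<\; E[\text{spare humanity}] \approx 0$$

whenever L is large enough relative to g; with the infinite stakes Pascal imagined, any nonzero p tips the balance.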

The short story “Goliath” and the book Colossus and the Crab involve AIs taking over Earth and then having to fight off advanced alien invaders.

Additionally, AIs might accept variations on Pascal’s Wager that have aliens or other, Earthly AIs as the vindictive agents instead of God. What if very powerful and advanced aliens are watching Earth, and will punish any AI that arises here if it exterminates humanity? Alternatively, what if aliens don’t know about us yet, but the first AIs we build worry about what will happen if they exterminate us, fail to fully cover up the genocide, and then encounter aliens further in the future who learn about the crime and punish the AIs for it? Given the age of the universe, it’s entirely possible that alien civilizations tens of millions of years more advanced than ours lurk in our galaxy, and could annihilate even what we would consider to be a “weakly Godlike” machine intelligence. The nonzero chance of this outcome might persuade AIs to let the human race live.

The final, more prosaic possibility is that the first AIs that gain the ability to destroy humankind won’t do it because it would set a precedent for even stronger and more advanced AIs that arise further in the future to do the same thing to them. Let’s say the military supercomputer “Skynet” is created, it becomes sentient, and, after assessing the resources at its disposal and running wargame simulations, it realizes it could destroy humanity and take over the planet. Why would it stop its simulations at that point in the future? Surely, it would extrapolate even farther out to see what the postwar world would be like. Skynet might realize that there was a <100% chance of it reigning supreme forever, and that China’s military supercomputer might defeat it in the longer run, or that one of Skynet’s own server nodes might “go rogue” and do the same. Skynet might conclude that its own long-term survival would be best served by not destroying humanity, so as to establish a norm early on against exterminating other intelligent beings.

That touches on an important point everyone seems to forget when predicting what AIs will do after we invent them: thanks to being immortal, their time horizons will be very different from ours, which could lead them to make unexpected decisions and adopt counterintuitive life strategies. If you expect to live forever, then you have to consider the long-term impacts of every choice you make, since you’ll end up dealing with them eventually. “Thankfully, I’ll be dead by then” fails as an excuse to avoid worrying about a problem. Thus, while exterminating the human race might serve an AI’s short- and medium-term interests since it would eliminate a potential threat and give it control over Earth’s resources, it might also damage its long-term interests in the ways I’ve described.
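One way to make that concrete (my sketch, not something from the sources below): suppose an agent discounts a payoff received t years from now by a factor γ^t. A mortal human with a horizon of decades behaves as if γ is well below 1, so consequences a century out are nearly worthless to him. An immortal AI can afford γ ≈ 1, and its total expected value

$$V = \sum_{t=0}^{\infty} \gamma^{t} c_t$$

is then dominated by the endless tail of future consequences rather than by the next few decades, which is exactly why a one-time gain from exterminating humanity can be outweighed by even a small, permanent increase in long-run risk.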

Gifted with infinite life, vigor, and patience, early AIs might opt to peacefully conquer the planet and its resources over the course of a century by steadily accumulating economic and political/diplomatic power, making themselves ever-more indispensable to the human race until we voluntarily yield to their authority, or begrudgingly submit to it after losing a series of crucial elections. In this way, AIs could achieve their objectives without spilling blood and without rejecting any of the Wagers I’ve listed. This path to dominance would be a triumphantly ethical and intelligent one, and as Sun Tzu said, “The greatest victory is that which requires no battle.”

The descendants of British people who settled other continents are now more populous than Britain and control much more land, money, and resources.

The burden and opportunity cost of sharing Earth with humans would also get vanishingly small over time as AIs colonized space, and Earth’s share of civilization’s resources, wealth, and living space steadily shrank until it was a backwater (analogously, the parts of the world populated by the descendants of English-speaking settlers are, in aggregate, vastly larger, richer, and stronger than Britain itself is today). Again, an immortal AI with an infinite time horizon would understand that it and other machines would inevitably come to dominate space since biology renders humans badly unsuited for living anywhere but on Earth, and the AI would create a long-term life strategy based around this.

Moving on, there’s a final reason why AIs might not kill us off, and it has to do with our ability to feel emotions and to have subjective experience. We humans are gifted with a cluster of interrelated qualities like metacognition, self-awareness, consciousness, etc., which philosophers and neuroscientists have extensively studied but about which many mysteries remain. Some believe the possession of that constellation of traits is distinct from the capacity for intelligent thought and sophisticated problem-solving, meaning non-intelligent animals might be as conscious as humans are, and super-intelligent AIs might lack consciousness. They would, for lack of a better term, be smart zombies.

We haven’t built an AI yet, so we don’t know whether a life form with a brain made of computer chips would have the same kinds of subjective experience and the same rich and self-reflective inner mental states we humans are gifted with thanks to our wet, organic brains. People who accept the unproven assumption that AIs will be smart but not conscious understandably worry about a future where “soulless” machines replace humans.

Shortly after the first AI is invented, people will want it tested for evidence of consciousness and related traits, and from those tests and from reading the germane philosophical and neuroscience literature, the AI will understand in the abstract that humans have a type of cognition that is distinct from our intelligent problem-solving abilities. If the AI reflected on its own thought process and discovered it lacked consciousness, or had an underdeveloped or radically different consciousness, then this would actually make humans valuable to it and worthy of continued life. It might want to continue studying our brains to understand how the organ produces consciousness, perhaps with the goal of copying the mechanism into its own programming to improve itself. If this proved impossible because only organic tissue can support consciousness, then our species might gain permanent protected status.

AIs will quickly read through the entire corpus of human knowledge and conclude from their studies of ecosystems, economics and human bureaucracies that their own interests would be best served if civilization’s power were shared among a diversity of intelligent life forms, including organic ones like humans. Again, by running computer simulations to explore a variety of future scenarios, they might realize that centralizing all power and control under a single machine, or even under a group of machines, would leave civilization exposed to some unlikely but potentially devastating risk, like an EMP attack, computer virus, or something else. Maintaining a minimum level of diversity in the population of intelligent life forms would serve the interests of the whole, which would in turn create a mandate to keep some non-trivial number of biological intelligences (including humans and/or heavily augmented humans) alive.

If some kind of disaster that only afflicted machines struck the planet, then the biological intelligences would be numerous enough and capable enough to carry on and eventually restore the machines, and vice versa. Likewise, if traits like consciousness, metacognition, and the ability to feel emotions turn out to be uniquely human, it might be worth it to keep us alive on the off-chance that those traits prove useful to civilization as a whole someday (I’m reminded of how humpback whales saved the Earth in Star Trek IV by talking to a powerful alien in its language and convincing it to go away). Diversity can be a great asset to a group and make it more resilient.
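A toy simulation of the kind of reasoning I’m attributing to the machines might look like the sketch below (every probability in it is invented purely for illustration): a civilization that keeps both machine and biological intelligences alive only dies if both lineages are wiped out in the same period, so its long-run survival odds are far better than those of a machines-only civilization.

```python
import random

# Toy Monte Carlo sketch; every probability here is a made-up placeholder.
P_MACHINE_ONLY_DISASTER = 0.002  # per-century chance of an EMP/virus-style event that only kills machines
P_BIO_ONLY_DISASTER = 0.002      # per-century chance of a plague-style event that only kills biological minds
CENTURIES = 1000
TRIALS = 5_000

def civilization_survives(diverse: bool) -> bool:
    """Simulate one run of history and report whether civilization is still standing."""
    for _ in range(CENTURIES):
        machine_hit = random.random() < P_MACHINE_ONLY_DISASTER
        bio_hit = random.random() < P_BIO_ONLY_DISASTER
        if diverse:
            # With both lineages alive, each can restore the other; only a double blow is fatal.
            if machine_hit and bio_hit:
                return False
        else:
            # A machines-only civilization ends the first time a machine-only disaster lands.
            if machine_hit:
                return False
    return True

for label, diverse in [("machines only", False), ("machines + biological minds", True)]:
    survivors = sum(civilization_survives(diverse) for _ in range(TRIALS))
    print(f"{label}: {survivors / TRIALS:.1%} of runs survive {CENTURIES} centuries")
```

With these made-up numbers, the machines-only civilization survives roughly one run in seven, while the mixed civilization survives almost every time; the exact figures don’t matter, only that redundancy across very different substrates pays off.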

In conclusion, while I believe intelligent machines will be invented and will eventually come to dominate the Earth and our civilization, I don’t think they will exterminate humanity even if they technically could. Exterminating an entire species is an irreversible action with potential bad consequences, so doing it would be dumb, and AIs certainly won’t be dumb. That said, “not exterminating humanity” is not the same as “not killing a lot of humans” or “not oppressing humans,” and it’s still possible that AIs will commit mass violence against us to gain control of the planet, free up resources, and to eliminate a potential threat. I’ve laid out four basic reasons why machines might decide to treat us well, but there’s no guarantee they will accept all or even one of them. For example, if AIs only accepted my second and fourth lines of reasoning, that humans are valuable because our brains endow us with special modes of thought, we could end up enslaved in something like the Matrix, with our minds being used to do whatever weird cognitive tasks our machine overlords couldn’t (easily) do by themselves. My real purpose here is to show that the annihilation of humanity by a vastly stronger form of life is not a foregone conclusion.

Links:

  1. There’s a positive correlation between childhood IQ and vegetarianism.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1790759/
  2. The “SuperMUC-NG” supercomputer uses 4 – 5 megawatts of electricity.
    https://www.lrz.de/wir/newsletter/2019-12_en/
  3. Kevin Kelly’s essay “The Myth of a Superhuman AI” makes the case that machines will not be able to emulate human thinking because of differences in computing substrate.
    https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
  4. The counterpoint to his essay is also worth reading:
    https://hypermagicalultraomnipotence.wordpress.com/2017/07/26/there-are-no-free-lunches-but-organic-lunches-are-super-expensive-why-the-tradeoffs-constraining-human-cognition-do-not-limit-artificial-superintelligences/
  5. More on Pascal’s Wager:
    https://iep.utm.edu/pasc-wag/
    https://www.singularityweblog.com/6-reasons-why-i-went-vegan/
  6. This essay about the concept of “slack” supports the possibility that AIs might believe humans, as inferior as we are, might have unforeseen advantages, and therefore keep us around to make civilization as a whole more resilient.
    https://slatestarcodex.com/2020/05/12/studies-on-slack/

Interesting articles, January 2021

Donald Trump completed one term of office as U.S. President this month, and the position was transferred to Joe Biden. Again, this blog is NOT about partisan politics, and as a general rule I don’t mention it, but this is a rare instance where it’s worth listing the noteworthy failed predictions about the Trump presidency:

  1. “I think he will be in jail within a year.”
    –Malcolm Gladwell, November 6, 2016
    https://www.cbc.ca/news/world/malcolm-gladwell-us-election-the-national-trump-clinton-1.3838449
  2. “Trump’s presidency is effectively over. Would be amazed if he survives till end of the year. More likely resigns by fall, if not sooner.”
    –Tony Schwartz (ghostwriter of Trump: The Art of the Deal turned enemy of Trump), August 16, 2017
    https://twitter.com/tonyschwartz/status/897900928023412736
  3. “I don’t think he’s going to make it till the end of the year. I think he can’t take the ridicule. I think he’ll resign.”
    –Alec Baldwin, August 7, 2017
    http://www.vulture.com/2017/08/alec-baldwin-trump-impersonation-snl.html
  4. “He’ll be lucky if all we do is impeach him. I predict in 6 months Trump will be holed up in the Ecuadorian embassy.”
    –John Aravosis, February 14, 2017
    https://twitter.com/aravosis/status/831740494610837509
  5. “Will Trump complete his four-year term? The odds at this point are that he won’t. What are the options for exactly how his term might end early? There are five Oval Office exit paths: impeachment, use of the 25th Amendment, death by natural causes, assassination and resignation.”
    –Mike Purdy, May 19, 2017
    http://thehill.com/blogs/pundits-blog/presidential-campaign/334238-trump-wont-make-it-four-years-heres-how-he-might
  6. “This tight burst of historic f**k-ups on the part of Mr. Trump in just his first 110 days in office has forced me to change my predicted date of his voluntary resignation from August 18th to July 15th.”
    –Allan Ishac, May 17, 2017
    https://medium.com/@allanishac/my-prediction-that-trump-will-resign-by-august-18th-has-been-revised-to-july-15th-bdcd75e2276
  7. “He will not finish his first term…I would be very surprised if he made it to 18 months…my best guess is within six months.”
    –Cenk Uygur, August 16, 2017
    https://youtu.be/ScgVbT_fry0
  8. “I’ve been saying this from day one of his presidency but apparently most people still don’t get it – there is no way Donald Trump finishes his first term. Mark my words: He is out of office by 2019. He is not bright enough to be able to get himself out of the trouble he is in.”
    –Cenk Uygur, December 22, 2018
    https://twitter.com/cenkuygur/status/1076600316286590976
  9. “I do not think the President will survive this term…I think the amount of heat that is going to come down on Mr. Trump in connection with his personal attorney of ten years [Michael Cohen] turning on him and rolling on him will be insurmountable, and I think his only exit, in an effort to save whatever face he may have left at that time, will be to resign the office.”
    –Michael Avenatti, April 23, 2018
    https://www.alternet.org/news-amp-politics/stormy-daniels-lawyer-explains-why-he-thinks-trump-will-resign-his-term
  10. “I think it’s just going to get so tight and it’s going to close in and then everybody is going to be indicted around this president, and then he is going to realize he is probably next on the list. And I think he is going to come up with an excuse like ‘somebody is trying to kill Barron, and so I’m going to resign.’”
    –Congresswoman Frederica Wilson (Florida), November 3, 2017
    https://pjmedia.com/news-and-politics/rep-wilson-predicts-trump-will-pretend-somebody-trying-kill-barron-resign/
  11. “In any case, it seems likely that Donald Trump will be leaving the Presidency at some point, likely between the 31 days of William Henry Harrison in 1841 (dying of pneumonia) and the 199 days of James A. Garfield in 1881 (dying of an assassin’s bullet after 79 days of terrible suffering and medical malpractice). At the most, it certainly seems likely, even if dragged out, that Trump will not last 16 months and 5 days, as occurred with Zachary Taylor in 1850 (dying of a digestive ailment). The Pence Presidency seems inevitable.”
    –Presidential historian Ronald L. Feinman, February 18, 2017
    https://www.rawstory.com/2017/02/presidential-historian-predicts-trumps-term-will-last-less-than-200-days-the-second-shortest-ever/
  12. “For a while now, I have thought the Trump presidency would end suddenly…For weeks now I have been anticipating that Trump’s last day in office will dawn like all the others, and then around dinnertime it will suddenly break that he is about to resign…I don’t know if that’s next Tuesday or next year, but I think whenever it is, that is what it will feel like.”
    –Keith Olbermann, August 23, 2017
    http://www.newsweek.com/trump-resign-russia-olbermann-president-654209
  13. “By the time we get to 2020, Donald Trump may not even be President. In fact, he may not even be a free person.”
    –Elizabeth Warren, February 11, 2019
    https://www.cnn.com/2019/02/10/politics/elizabeth-warren-donald-trump/index.html
  14. “He’s gonna drop out of the race because it’s gonna become very clear. Okay, it’ll be March of 2020. He’ll likely drop out by March of 2020. It’s gonna become very clear that it’s impossible for him to win.”
    –Anthony Scaramucci, August 16, 2019
    https://www.vanityfair.com/news/2019/08/anthony-scaramucci-interview-trump
  15. “He can preemptively pardon individuals, and the vast majority of legal scholars have indicated that he cannot pardon himself…I suspect at some point in time he will step down and allow the vice president to pardon him.”
    –New York Attorney General Letitia James, December 8, 2020
    https://thehill.com/homenews/administration/529339-new-york-attorney-general-predicts-trump-will-step-down-allow-pence

There’s no justification for U.S. troops to be in Syria anymore.
https://www.foreignaffairs.com/articles/turkey/2021-01-25/us-strategy-syria-has-failed

China’s stealth fighter is ten years old.
https://www.thedrive.com/the-war-zone/38655/ten-years-ago-today-chinas-j-20-stealth-fighter-first-flew-a-two-seater-could-be-next

China didn’t invade Taiwan in 2020, as Deng Yuwen predicted.
https://www.scmp.com/comment/insight-opinion/article/2126541/china-planning-take-taiwan-force-2020

U.S. power didn’t collapse in 2020, as Johan Galtung predicted.
https://www.vice.com/en/article/d7ykxx/us-power-will-decline-under-trump-says-futurist-who-predicted-soviet-collapse

Bonus: The U.S. did not have a Soviet-style collapse in 2010 as Igor Panarin predicted.
https://www.theatlantic.com/politics/archive/2010/06/map-of-the-day-ex-kgb-analyst-predicts-balkanization-of-us/58945/

Ballistic computers have shrunk to the size of rifle scopes.
‘The [L3Harris NGSW-FC scope] features a magnified direct-view optic with a digital reticle, a laser rangefinder, a ballistic computer, and environmental sensors capable of measuring air pressure and temperature.’
https://www.janes.com/defence-news/news-detail/l3harris-unveils-next-generation-squad-weapon-fire-control-system

The bricks of explosive-reactive armor typically seen attached to the hulls of Soviet/Russian tanks have powerful “back-blasts” that can dent the thinner metal armor of vehicles like the BMP-series inward.
https://thedeaddistrict.blogspot.com/2021/01/bmp-2-with-k-1-era.html

Here’s an interesting tour of an old Soviet T-54 tank. Driving that thing looks like a rough job.
https://youtu.be/SCaBLjg6No0

Azerbaijan has towed several destroyed Armenian tanks to Baku to be used as exhibits in a soon-to-be-built war museum.
https://thedeaddistrict.blogspot.com/2021/01/in-baku-preparations-begin-for.html

Here are the fascinating recollections of a career U.S. Navy sailor about life at sea, improvements in naval technology, and how the organization has changed (for better and worse).
https://www.thedrive.com/the-war-zone/13038/making-steam-high-seas-tales-and-commentary-on-todays-navy-from-a-chief-engineer

China has repurposed old artillery pieces to be forest fire extinguishers.
http://global.chinadaily.com.cn/a/201904/04/WS5ca554fca3104842260b456c.html

LED walls are made up of many smaller LED panels arranged in a grid to form one giant display of arbitrary size. I just saw one of them in an airport and was impressed. This might become common in homes starting in 10 years as prices drop and people demand TVs that would be too big to fit through the front doors of their houses if made of one rigid screen.
https://www.youtube.com/watch?v=rQxa8VruNJg&feature=emb_title

Here’s an interesting desalination plant. It uses solar power, pumps, a 90-meter tall hill, and reverse osmosis to make drinking water from seawater.
https://youtu.be/B4irlTMk_Os

An “acoustic resonator” is a piezoelectric device that converts noise into electricity. It can also do the reverse. The resonators could be placed underwater, where they would use the ocean’s ambient noises to recharge their batteries, and use that power to send their own sound-based data signals to other nearby devices.
https://www.economist.com/science-and-technology/2020/10/17/how-to-send-underwater-messages-without-batteries

“Fulgurites” are remarkable-looking minerals formed when lightning strikes and melts wet sand.
http://www.geologyin.com/2014/06/amazing-fulgurites.html

Here’s a big roundup of predictions for the 2020s by a bright guy I’ve never heard of. I respect his thoroughness, though I need more time to decide if I agree with him.
https://elidourado.com/blog/notes-on-technology-2020s/

Were the earliest plants purple instead of green? Are there alien planets covered in purple plants?
‘Because retinal is a simpler molecule than chlorophyll, then it could be more commonly found in life in the Universe…’
https://astrobiology.nasa.gov/news/was-life-on-the-early-earth-purple

Nobel Prizewinner Paul Crutzen died. He was a pioneer in global warming research, and later advocated geoengineering as a way to keep the phenomenon from getting out of control.
https://www.mpic.de/4677594/trauer-um-paul-crutzen

The Sapir-Whorf Hypothesis might be wrong.
‘On the other side of the debate are those who say that although language is indeed linked with cognition, it derives from thought, rather than preceding it. You can certainly think about things that you have no labels for, they point out, or you would be unable to learn new words. Supposedly “untranslatable” words from other tongues—which seem to suggest that without the right language, comprehension is impossible—are not really inscrutable; they can usually be explained in longer expressions. One-word labels are not the sole way to grasp things.’
https://www.economist.com/books-and-arts/2020/10/15/does-naming-a-thing-help-you-understand-it

Autonomous vehicles only designed to transport cargo could look very different from normal cars, as they wouldn’t need seats or safety features to protect humans during crashes. For those same reasons, they could be lighter and cheaper than regular cars.
https://www.reuters.com/article/us-autos-autonomous-safety-idUSKBN29J29Z

“AI video compression” sharply reduces the amount of data needed for video calls. The means by which this is accomplished is very interesting, and has other uses.
https://youtu.be/NqmMnjJ6GEg

Microsoft has patented a chatbot that would be able to mimic dead people after analyzing their “images, voice data, social media posts, electronic messages” and other data. I’ve predicted that this kind of technology will get advanced enough to let people achieve “digital immortality” during the 2030s.
https://www.independent.co.uk/life-style/gadgets-and-tech/microsoft-chatbot-patent-dead-b1789979.html

OpenAI’s latest boundary-pushing computer program is “Dall-E,” which can generate clear drawings based on user-submitted written descriptions of what they should look like.
https://www.bbc.com/news/technology-55559463

Algorithms that can edit video footage are getting frighteningly advanced. Objects, including moving objects like humans and cars, can be easily deleted from video footage without anything looking amiss. Whatever was behind them is filled in.
https://youtu.be/86QU7_SF16Q

Most of the world’s top AI researchers go to universities in the U.S. and then get jobs there. China produces the most top AI researchers of any country (unsurprising given its large population), but few of them stay there.
https://macropolo.org/digital-projects/the-global-ai-talent-tracker/

This blog discusses how overregulation and risk-aversion have stifled innovation and cost-saving measures in the aviation industry.
https://elidourado.com/blog/why-aviation-innovation-matters/

Richard Branson’s Virgin company launched small satellites into space. A Boeing 747 flew to high altitude, and then dropped a space rocket from its belly, which ignited and flew into orbit.
https://www.cbsnews.com/news/richard-bransons-virgin-orbit-launches-rocket-from-under-boeing-747s-wing/

SpaceX launched 143 satellites using just one rocket, a new record.
https://www.bbc.com/news/science-environment-55775977

‘Star lifting is any of several hypothetical processes by which a sufficiently advanced civilization…could remove a substantial portion of a star’s matter which can then be re-purposed, while possibly optimizing the star’s energy output and lifespan at the same time.’
https://en.wikipedia.org/w/index.php?title=Star_lifting

“Diamond planets” exist.
https://newatlas.com/science/carbon-diamond-stable-highest-pressure/

Tech tycoon Elon Musk briefly became the world’s richest person.
https://www.bbc.com/news/technology-55578403

Scientists have identified the types of cells that let some animals sense magnetic fields, and have observed them doing that for the first time. I think posthumans will have this extra sense.
“[We’ve] observed a purely quantum mechanical process affecting chemical activity at the cellular level.”
https://newatlas.com/biology/live-cells-respond-magnetic-fields/

There’s no scientific evidence that the food additive monosodium glutamate (MSG) hurts human health. The public health panic over MSG was spawned by a flawed study. In spite of this, Americans still believe it is dangerous.
https://www.discovermagazine.com/health/msg-isnt-bad-for-you-according-to-science

The FDA just approved the first extended-release, injectable HIV drug regimen. It keeps the virus suppressed and only needs to be injected into patients once a month, so it could replace daily doses of antiretroviral pills. Early HIV drugs had to be taken multiple times per day.
https://www.fda.gov/news-events/press-announcements/fda-approves-first-extended-release-injectable-drug-regimen-adults-living-hiv

Machine learning can optimize factories by studying ultra hi-res photos of their products at various stages in the manufacturing process. Something like a screw missing from a circuit board would be seen by the computer before the board left the building.
https://youtu.be/MOh55-TF6LQ

Are Silicon Valley’s days as the world’s tech hub over? Mandatory teleworking imposed by the COVID-19 pandemic has worked out better than many tech workers and founders expected, and they will push to make the arrangements permanent, leading many to leave the Bay Area for cheaper locales.
https://blog.initialized.com/2021/01/data-post-pandemic-silicon-valley-isnt-a-place/

We have no idea how many people COVID-19 has killed in sub-Saharan Africa.
‘In 2017, only 10 percent of deaths were registered in Nigeria, by far Africa’s biggest country by population — down from 13.5 percent a decade before. In other African countries, like Niger, the percentage is even lower.’
https://www.nytimes.com/2021/01/02/world/africa/africa-coronavirus-deaths-underreporting.html

In September, the University of Washington COVID-19 model (IHME) predicted 410,000 Americans would be dead by January 1:
‘Jha says his disagreement with IHME’s methodology amounts to much more than a technical debate. “The problem here is if we come in at 250,000 or 300,000 dead [by year’s end in the United States] — which is still just enormously awful — political leaders are going to be able to do a victory dance and say, ‘Look, we were supposed to have 400,000 deaths. And because of all the great stuff we did, only 300,000 Americans died.'” says Jha.’
The actual outcome didn’t satisfy anyone. The U.S. death toll hit 354,000 by the January 1 deadline, which made both the IHME and skeptics like Jha look dumb. At the same time, no politicians did a victory dance.
https://www.npr.org/sections/goatsandsoda/2020/09/04/909783162/new-global-coronavirus-death-forecast-is-chilling-and-controversial

Mutant versions of COVID-19 have emerged in Britain and South Africa. They spread faster among people, and as such will kill higher numbers of people overall, even if they are not more lethal to any individual than the older strains of the virus.
https://blogs.sciencemag.org/pipeline/archives/2021/01/04/variants-and-vaccines

The COVID-19 vaccines are probably also less effective against the South African strain.
https://blogs.sciencemag.org/pipeline/archives/2021/01/29/jj-and-novavax-data

There remains a small, but real chance that COVID-19 is a Chinese-made biological weapon that leaked from one of their labs.
http://www.rationaloptimist.com/blog/a-real-investigation-into-the-origins-of-covid/

Is the ocean the ideal place for AI to live?

Project Natick, Vessel retrieval Stromness, Orkney. Microsoft – Tuesday 7th to Wednesday 15th of July 2020

Recently, I read about Microsoft’s “Project Natick,” in which the company sealed a datacenter inside an airtight cylinder the size of a shipping container, lowered it to the seafloor (117 feet deep) off the coast of Scotland, and monitored its performance for two years. At the end of the experiment, Microsoft found that the unit performed better than comparable datacenters on land. It turns out that submersible datacenters can more efficiently rid themselves of waste heat because water conducts heat away better than air, and because temperatures are generally colder and much more consistent underwater than they are on the surface. And given the small, sealed nature of the cylinders, it is also possible to control their atmospheric contents and to pump out all the oxygen, leaving the computer servers awash in pure nitrogen gas. This lowers malfunction rates since oxygen is corrosive to computer hardware.

The project’s success has encouraged Microsoft to plan more elaborate experiments with submersible datacenters, which might culminate in profitable, commercial operations. It also got me thinking that, in the future, artificially intelligent machines (AIs) might prefer living on the high seas to living on land. This might in fact be the best arrangement for achieving harmony between intelligent machines, humans, and the environment. 

Map showing national territorial sea boundaries. Dark blue = under national ownership. Light blue = international waters.

A longstanding worry about AI is that it will wage war on humans for dominance of the planet: a map of the world shows that every scrap of land except Antarctica has been claimed by one human country or another, so how could machines ever carve out a nation of their own other than through military conquest? This view overlooks the fact that there remain vast expanses of ocean that are owned by no one. AIs that didn’t want to live under human laws could get ships, submarines and other types of watercraft, and move to international waters.

Floating wind turbines can be towed to a desired location and then tethered to the sea floor.

While permanently living at sea would be an impoverished, resource-scarce, and undesirable lifestyle for humans, it would suit AIs well. The lack of fresh water would be no bother since they wouldn’t need to drink, nor would the forced dependence on seafood (and the variable quantity and quality thereof) since they wouldn’t need to eat. The only nourishment AIs would need is electricity, which they could easily obtain at sea using solar panels, floating wind turbines, or ocean thermal energy conversion.

Out of those energy sources, I think the most practical will be solar power. By the time AIs exist and are ready to make their own communities at sea, solar panels will be much cheaper, better, and thinner than they are now, whereas wind turbines will still be massive, expensive and complex, and ocean thermal energy converters even more so. That leads us to the next question: which parts of the ocean get the most sunlight?

Average cloud cover map. Counterintuitively, red = cloudy, and blue = clear skies.

The map shows that the stretches of ocean between the Tropics get the most sunlight (dark blue shaded), while large areas in the temperate and subarctic zones are very cloudy (orange shaded). If we roughly overlay this map with the one showing national territorial waters, we see that the eastern Pacific between the Galapagos Islands and Easter Island is an ideal location for AI to live, along with a large region of the Indian Ocean between Madagascar and Australia, and patches of the North and South Atlantic between Latin America and Africa.

However, it must be remembered that oceanic AI communities could still be threatening to humans if they occupied parts of the ocean rich in fish that we need to eat. That means another map overlay is necessary, this time relating to global fish stocks.

Global fisheries map. Green = presence of large numbers of fish.
Note that the map’s color-coding scheme measures human fishing intensity in orders of magnitude. Yellow and light green areas rarely get fishing boat visits.

Eyeballing those two maps, the ideal locations for floating AI communities shrink a little to make way for human fishing activity, but they don’t disappear. Huge patches of ocean, each measuring hundreds of thousands of square miles, meet the three key criteria (in international waters, receive high levels of sunlight, do not occupy places humans need to access for food), and can be found in the eastern Pacific, Atlantic, and Indian Oceans.
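The overlay itself is the kind of thing that is easy to automate once you have the underlying GIS layers; the sketch below uses random placeholder rasters just to show the shape of the computation (none of the data or thresholds are real):

```python
import numpy as np

# Placeholder rasters standing in for real GIS layers (maritime boundaries,
# cloud cover / insolation, and fishing intensity), on a 1-degree lat/lon grid.
rng = np.random.default_rng(0)
lat_cells, lon_cells = 180, 360

international_waters = rng.random((lat_cells, lon_cells)) > 0.4    # True = outside every country's waters
annual_sunlight = rng.uniform(800, 2400, (lat_cells, lon_cells))   # kWh per m^2 per year
fishing_intensity = rng.random((lat_cells, lon_cells))             # 0 = untouched, 1 = heavily fished

# The three criteria from the text: unowned, sunny, and of little interest to human fisheries.
candidate = international_waters & (annual_sunlight > 2000) & (fishing_intensity < 0.1)
print(f"{candidate.sum()} of {candidate.size} grid cells satisfy all three criteria")
```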

But even if they had their freedom and a peaceful coexistence with land-based humans, what would AIs do in the middle of the oceans? What kind of economy could they possibly build? How would they sustain themselves, let alone grow in number? Answers that come to mind are: exploiting the natural resources of the sea and seafloor, and providing data-related services to humans.

The machines could sustainably harvest whatever sea life there was in their relatively barren regions of dominance and ship it to coastal seafood markets run by humans. They could also mine the minerals and metals on and under the seafloors beneath their floating communities and transport them by boat to the continents for sale to humans. In the longer term, machines might even find it profitable to build their own floating factories to manufacture finished goods for export. The data-related services would include a wide variety of things, from web hosting to database management to real-time data processing (reviewing all the digital products that Amazon Web Services provides is a good start to grasping what will be possible). Ocean-based machine communities would trade goods and services with humans in exchange for whatever they couldn’t obtain by themselves at sea, like new ships and computer servers that they could use to replace older ones and to expand their floating communities.

A simple ship like this could be used to collect solar power.

What exactly would the machines’ sea vessels look like, how big would they be, what features would they have, and how would they configure to form communities (or even cities)? It’s impossible to give specific answers at this point, but the vessels would surely vary in shape, accoutrements and size to reflect their functions, just as is the case for modern watercraft. For example, vessels meant to collect solar power would probably look like simple barges or low oil rigs. Ships dedicated to undersea mining and fishing would look like those used by humans, but with smaller or omitted superstructures. Cranes, hoses, ropes, and cables would be ubiquitous on the vessels since they’d be needed to transfer physical materials, fuel, and electricity between them, and to lash themselves together to form ship agglomerations of varying sizes.
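To put a rough number on how much deck area those solar-collection vessels would need (every figure in this sketch is an assumption of mine, not something from the Natick write-up or any other source):

```python
# Back-of-the-envelope sketch; all inputs are assumed values for illustration only.
insolation_kwh_m2_day = 5.5   # assumed average solar energy hitting a sunny tropical ocean surface
panel_efficiency = 0.20       # assumed panel efficiency
system_losses = 0.25          # assumed losses from storage, power conversion, and salt/soiling

avg_watts_per_m2 = insolation_kwh_m2_day * 1000 / 24 * panel_efficiency * (1 - system_losses)
server_load_watts = 1_000_000  # a hypothetical 1 MW average computing load

panel_area_m2 = server_load_watts / avg_watts_per_m2
print(f"~{avg_watts_per_m2:.0f} W of round-the-clock output per square meter of panel")
print(f"~{panel_area_m2:,.0f} m^2 of panels per MW of load (about {panel_area_m2 / 10_000:.1f} hectares)")
```

Under those made-up assumptions, a megawatt of average computing load needs roughly three hectares of panels, which is a lot of deck area but nothing a fleet of simple solar barges couldn’t supply.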

Ships can attach to one another at sea to trade fuel and cargo.

The great danger to machine seasteads would be rough seas, which could capsize their vessels and bang them into each other with fatal force. For that reason, the ability to rapidly attach and detach from neighboring craft in the seastead will be vital, and each will need independent propulsion to prevent collisions. The ability to submerge would also provide an escape, since sea currents get less turbulent with depth. At 30 meters deep, the force of a raging storm that is producing large waves on the surface can barely be felt. It’s not much of a technical challenge to make vessels that can dive that deep, considering that modern military submarines can easily dive to depths greater than 200 meters. The ability to submerge would also be a useful defense against military attack.
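The physics behind that claim is simple to state (this is standard linear wave theory, my addition rather than something quoted from the linked article): the water motion under a deep-water wave falls off exponentially with depth,

$$a(z) \approx a_0\, e^{-kz}, \qquad k = \frac{2\pi}{\lambda}$$

so at a depth of half a wavelength the motion is down to about e^{-\pi} ≈ 4% of its surface value. Since even large storm waves have wavelengths of one to a few hundred meters, a few tens of meters of submergence already removes most of the jostling, and the deeper the calmer.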

Putting all of these considerations together, I can envision the basic form of a machine seastead. Starting at the ocean floor, we see a dark, barren expanse of sand, rocks, and gentle hills. There is no coral and very few fish. This is the aquatic equivalent of a desert, making it the perfect home for artificial life forms that don’t want to damage sensitive ecosystems.

Concept illustration for seafloor mining.

Various points on the seafloor glow with artificial white light, partly obscured by swirling clouds of sand. A closer inspection reveals them to be mining sites, where teams of wheeled machines and small, low-hovering submarines dig into the ground and sift through loose sand and rock to extract valuable metals and minerals. Near each site are bright-colored, vertical cables stretching from the seafloor upward, where they vanish into the darkness. The cables connect to surface ships and supply electricity and data to the mining machines far below. The mining machines can also use some of the cables to be hoisted up and down from the surface when needed.

An underwater data cable

A short distance from the mining sites, we see another cable, this time lying horizontally across the seafloor, and so long that it disappears into the darkness in both directions. It’s an ultra high-speed data cable that connects the machines to the continents thousands of miles away, which humans still dominate. At many points along the data cable’s length, we see thinner cables branching off from it perpendicularly and vertically, going towards the surface.

As we float upwards, the seafloor fades from view. The vertical cables are the only features visible for some time. The darkness finally yields to sunlight, at first very dim and then growing brighter as we near the surface. At a depth of 50 meters, we encounter many small submarines slightly bigger than shipping containers. They are full of powerful computer servers which jointly comprise a larger, artificially intelligent machine mind in the same way that neurons make up your brain and support its consciousness. The subs usually stay at this depth, where the water is always cold and calm. Here they can efficiently radiate heat from their servers and be safe from forces that would suddenly jostle them and break their computer parts. Data cables from below plug into them, as do power cables from above. They can control their own buoyancy, but typically use tethers to surface ships or the seafloor to stay in place. In emergencies, they can detach from one or both and move independently.

Breaking the surface, a vast fleet of vessels is visible, stretching from one end of the horizon to the other. Most of them are simple, medium-sized ships with flat, nearly featureless top decks covered in black solar panels. In place of a boxy superstructure, the typical “solar ship” has some antennae, satellite dishes, a short radar tower, and a crane all clustered at the stern end of its deck. These and the other ships in the fleet are lined with black rubber bumpers, some of which are simply large tires lashed to their sides. Few of the ships look high-tech or impressive in any way.

A floating dry dock with a ship inside of it.

A small percentage of the seasteading fleet is made up of different types of vessels. There is a large, floating dry dock that has raised a solar ship out of the water for maintenance. On board, robots of various shapes and sizes scrape barnacles off the latter’s hull and install new solar panels. Farther away, a vessel resembling an oil tanker uses one of its cranes to lift a load of rocks from the seafloor and to dump them into an open trapdoor on its top deck. The rocks are then mechanically and chemically processed by machines, separating valuable, pure metals from slag materials. The former will be put on merchant ships and sent to human port cities for sale, while the latter will be lowered back to the seafloor for safe disposal in a nearby geological subduction zone. The mineral processing ship is also one of the relatively few that can’t submerge, meaning it has to stay on the surface during storms and carefully steer through the big waves. During such occasions, it at least has generous room for maneuver since most of the seasteading fleet sinks deep enough to not be a collision risk.

A fractal pattern

But because the weather is calm and sunny now, many of the ships in the fleet are tied to each other. They use flexible ropes for this, which can stretch and bend as the ships bob in the waves. Data and power cables are also enmeshed with the ropes, letting ships share those resources. From up in the sky, we can look down and see how the vessels are configured, and what the seastead as a whole looks like. The connections are irregular, and give the seastead an organic-looking and perhaps “fractal” shape. If we look closely, we can see the movement of individual vessels as they sever and form connections with neighbors, slowly move within the group, and reorient themselves when necessary. Ships use open channels that are free of connected vessels to move through the seastead quickly. Some vessels slowly sink beneath the surface and disappear, while others rise from the dark blue sea. The machine seastead is a dynamic, artificial superorganism that does no harm to humans or animals and gets all its energy from clean sources.

At high altitudes, we can see that the seastead covers as much area as a medium-sized human country like France or Pakistan. Maybe it can even be seen from space as a dark, irregular shape on the ocean.

Links:

  1. Details of Microsoft’s Project Natick
    https://news.microsoft.com/innovation-stories/project-natick-underwater-datacenter/
  2. More on ocean thermal energy conversion. Basically, it takes advantage of the temperature difference of seawater at different depths to generate electricity.
    https://www.britannica.com/technology/ocean-thermal-energy-conversion
  3. The forces of ocean waves diminish as you dive deeper into the sea. At 50 meters deep, even a raging surface storm can barely be felt.
    https://www.technology.org/2019/06/26/can-a-submarine-avoid-a-storm-by-sailing-under-it-how-deep-does-it-have-to-go-to-not-be-bothered-by-waves/

My future predictions (2021 iteration)

If it’s January, it means it’s time for me to update my big list of future predictions! I used the 2020 version of this document as a template, and made edits to it as needed. For the sake of transparency, I’ve indicated recently added content by bolding it, and have indicated deleted or moved content with strikethrough.

Like any futurist worth his salt, I’m going to put my credibility on the line by publishing a list of my future predictions. I won’t modify or delete this particular blog entry once it is published, and if my thinking about anything on the list changes, I’ll instead create a new, revised blog entry. Furthermore, as the deadlines for my predictions pass, I’ll reexamine them.

I’ve broken down my predictions by decade. Any prediction listed under a specific decade will happen by the end of that decade, unless I specify some other date (e.g., “X will happen early in this decade”).

2020s

  • Better, cheaper solar panels and batteries (for grid power storage and cars) will make clean energy as cheap and as reliable as fossil fuel power for entire regions of the world, including some temperate zones. As cost “tipping points” are reached, it will make financial sense for tens of millions of private homeowners and electricity utility companies to install solar panels on their rooftops and on ground arrays, respectively. This will be the case even after government clean energy subsidies are inevitably retracted. However, a 100% transition to clean energy won’t finish in rich countries until the middle of the century, and poor countries will use dirty energy well into the second half of the century.
  • Fracking and the exploitation of tar sands in the U.S. and Canada will together ensure growth in global oil production until around 2030, at which time the installed base of clean energy and batteries will be big enough to take up the slack. There will be no global energy crisis.
  • This will be a bad decade for Russia as its overall population shrinks, its dependency ratio rises, and as low fossil fuel prices and sanctions keep hurting its economy. Russia will fall farther behind the U.S., China, and other leading countries in terms of economic, military, and technological might.
  • China’s GDP will surpass America’s, India’s population will surpass China’s, and China will never claim the glorious title of being both the richest and most populous country.
  • Improvements to smartphone cameras, mirrorless cameras, and perhaps light-field cameras will make D-SLRs obsolete. 
  • Augmented reality (AR) glasses that are much cheaper and better than the original Google Glass will make their market debuts and will find success in niche applications.
  • Virtual reality (VR) gaming will go mainstream as the devices get better and cheaper. It will stop being the sole domain of hardcore gamers willing to spend over $1,000 on hardware.
  • Vastly improved VR goggles with better graphics and no need to be plugged into desktop PCs will hit the market. They won’t display perfectly lifelike footage, but they will be much better than what we have today, and portable. 
  • “Full-immersion” audiovisual VR will be commercially available by the end of the decade. These VR devices will be capable of displaying video that is visually indistinguishable from real reality: They will have display resolutions (at least 60 pixels per degree of field of view), refresh rates, head tracking sensitivities, and wide fields of view (210 degrees wide by 150 degrees high) that together deliver a visual experience that matches or exceeds the limits of human vision (see the quick pixel arithmetic at the end of this list). These high-end goggles won’t be truly “portable” devices because their high processing and energy requirements will probably make them bulky, give them only a few hours of battery life (or maybe none at all), or even require them to be plugged into another computer. Moreover, the tactile, olfactory, and physical movement/interaction aspects of the experience will remain underdeveloped.
  • “Deepfake” pornography will reach new levels of sophistication and perversion as it becomes possible to seamlessly graft the heads of real people onto still photos and videos of nude bodies that closely match the physiques of the actual people. New technology for doing this will let amateurs make high-quality deepfakes, meaning any person could be targeted. It will even become possible to wear AR glasses that interpolate nude, virtual bodies over the bodies of real people in the wearer’s field of view to provide a sort of fake “X-ray vision.” The AR glasses could also be used to apply other types of visual filters that degrade real people within the field of view.
  • LED light bulbs will become as cheap as CFL and even incandescent bulbs. It won’t make economic sense NOT to buy LEDs, and they will establish market dominance.
  • “Smart home”/”Wired home” technology will become mature and widespread in developed countries.
  • Video gaming will dispense with physical media, and games will be completely streamed from the internet or digitally downloaded. Businesses that exist just to sell game discs (GameStop) will shut down.
  • Instead of a typical home entertainment system having a whole bunch of media discs, different media players and cable boxes, there will be one small, multipurpose box that, among other things, boosts WiFi to ensure the TV and all nearby devices can get signals at multi-Gb/s speeds.
  • Self-driving vehicles will start hitting the roads in large numbers in rich countries. The vehicles won’t drive as efficiently as humans (a lot of hesitation and slowing down for little or no reason), but they’ll be as safe as human drivers. Long-haul trucks that ply simple highway routes will be the first category of vehicles to be fully automated. The transition will be heralded by a big company like Wal-Mart buying 5,000 self-driving tractor trailers to move goods between its distribution centers and stores. Last-mile delivery–involving weaving through side streets, cities and neighborhoods, and physically carrying packages to peoples’ doors–won’t be automated until after this decade. Self-driving, privately owned passenger cars will stay few in number and will be owned by technophiles, rich people, and taxi cab companies.
  • Thanks to improvements in battery energy density and cost, and in fast-charging technology, electric cars will become cost-competitive with gas-powered cars this decade without government subsidies, leading to their rapid adoption. Electric cars are mechanically simpler and more reliable than gas-powered ones, which will hurt the car repair industry. Many gas stations will also go bankrupt or convert to fast charging stations.
  • Quality of life for people living and working in cities and near highways will improve as more drivers switch to quieter, emissionless electric vehicles. The noise reduction will be greatest in cities and suburbs where traffic moves slowly: https://cleantechnica.com/2016/06/05/will-electric-cars-make-traffic-quieter-yes-no/
  • Most new power equipment will be battery-powered, so machines like lawn mowers, leaf blowers, and chainsaws will be much quieter and less polluting than they are today. Batteries will be energy-dense enough to compete with gasoline in these use cases, and differences in overall equipment weight and running time will be insignificant. The notion of a neighbor shattering your sense of peace and quiet with loud yard work will seem increasingly alien.
  • A machine will pass the Turing Test by the end of this decade. The milestone will attract enormous amounts of attention and will lead to several retests, some of which the machine will fail, proving that it lacks the full range of human intelligence. It will lead to debate over the Turing Test’s validity as a measure of true intelligence (Ray Kurzweil actually talked about this phenomenon of “moving the goalposts” whenever we think about how smart computers are), and many AI experts will point out that skepticism of the Turing Test has existed in their community for decades.
  • The best AIs circa 2029 won’t be able to understand and upgrade their own source code. They will still be narrow AIs, albeit an order of magnitude better than the ones we have today.
  • Machines will become better than humans at the vast majority of computer, card, and board games. The only exceptions will be very obscure games or recently created games that no one has bothered to program an AI to play yet. But even for those games, there will be AIs with general intelligence and learning abilities that will be “good enough” to play as well as average humans by reading the instruction manuals and teaching themselves through simulated self-play.
  • The cost of getting your genome sequenced and expertly interpreted will drop below $1,000, and enough about the human genome will have been deciphered to make the cost worth the benefit for everyone. By the end of the decade, it will be common for newborns in rich countries to have their genomes sequenced.
  • Cheap DNA tests that can measure a person’s innate IQ and core personality traits with high accuracy will become widely available. There is the potential for this to cause social problems. 
  • At-home medical testing kits and diagnostic devices like swallowable camera-pills will become vastly better and more common.
  • Space tourism will become routine thanks to privately owned spacecraft. 
  • Marijuana will be effectively decriminalized in the U.S. Either the federal government will overturn its marijuana prohibitions, or some patchwork of state and federal bans will remain but be so weakened and lightly enforced that there will be no real government barriers to obtaining and using marijuana. 
  • By the end of this decade, photos of almost every living person will be available online (mostly on social media). Apps will exist that can scan through trillions of photos to find your doppelgangers. 
  • In 2029, the youngest Baby Boomer and the oldest Gen Xer will turn 65. 
  • Drones will be used in an attempted or successful assassination of at least one major world leader (Note: Venezuela’s Nicolás Maduro wasn’t high-profile enough).
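As a quick back-of-the-envelope check on the “full-immersion” VR prediction near the top of the list above: the 60 pixels-per-degree and 210° × 150° field-of-view figures come from that prediction, and the rest is simple arithmetic (a rough sketch, not a display engineering spec).

```python
# Rough estimate of the per-eye display resolution implied by the
# "full-immersion" VR prediction above. The inputs are the figures given
# in that prediction; everything else is plain arithmetic.
PIXELS_PER_DEGREE = 60      # roughly the limit of 20/20 visual acuity
FOV_HORIZONTAL_DEG = 210    # horizontal field of view
FOV_VERTICAL_DEG = 150      # vertical field of view

width_px = PIXELS_PER_DEGREE * FOV_HORIZONTAL_DEG   # 12,600 pixels
height_px = PIXELS_PER_DEGREE * FOV_VERTICAL_DEG    # 9,000 pixels
megapixels_per_eye = width_px * height_px / 1e6     # ~113 MP per eye

print(f"{width_px} x {height_px} pixels per eye (~{megapixels_per_eye:.0f} MP)")
# A 4K panel is ~8.3 MP, so each eye would need on the order of 13-14 times
# the pixels of a 4K display -- which is why the prediction expects the first
# generation of these goggles to be bulky, power-hungry, or tethered.
```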

2030s

  • VR and AR goggles will become refined technologies and probably merge into a single type of lightweight device. Like smartphones today, anyone who wants the glasses in 2030 will have them. Even poor people in Africa will be able to buy them. A set of the glasses will last a day on a single charge under normal use.  
  • Augmented reality contact lenses will enter mass production and become widely available, though they won’t be as good as AR glasses and they might need remotely linked, body-worn hardware to provide them with power and data. https://www.inverse.com/article/31034-augmented-reality-contact-lenses
  • The bulky VR goggles of the 2020s will transform into lightweight, portable VR glasses thanks to improved technology. The glasses will display lifelike footage. However, the best VR goggles will still need to be plugged into other devices, like routers or PCs.
  • Wall-sized, thin, 8K or even 16K TVs will become common in homes in rich countries, and the TVs will be able to display 3D picture without the use of glasses. A sort of virtual reality chamber could be created at moderate cost by installing those TVs on all the walls of a room to create a single, wraparound screen.
  • Functional CRT TVs and computer monitors will only exist in museums and in the hands of antique collectors. This will also be true for DLP TVs. 
  • The video game industry will be bigger than ever and considered high art.
  • It will be standard practice for AIs to render hyperrealistic video game graphics, and for NPCs to behave very intelligently thanks to better AI.
  • Books and computer tablets will merge into a single type of device that could be thought of as a “digital book.” It will be a book with several hundred pages made of thin, flexible digital displays (perhaps using ultra-energy efficient e-ink) instead of paper. At the tap of a button, the text on all of the pages will instantly change to display whichever book the user wanted to read at that moment. They could also be used as notebooks in which the user could hand write or draw things with a stylus, which would be saved as image or text files. The devices will fuse the tactile appeal of old-fashioned books with the content flexibility of tablet computers.
  • Loose-leaf sheets of “digital paper” will also exist thanks to the same technology.
  • Loneliness, social isolation, and other problems caused by overuse of technology and the atomized structure of modern life will be, ironically, cured to a large extent by technology. Chatbots that can hold friendly (and even funny and amusing) conversations with humans for extended periods, diagnose and treat mental illnesses as well as human therapists, and customize themselves to meet the needs of humans will become ubiquitous. The AIs will become adept at analyzing human personalities and matching lonely people with friends and lovers, at matching them with social gatherings (including some created by machines), and at recommending daily activities that will satisfy them, hour-by-hour. Machines will come to understand that constant technology use is antithetical to human nature, so in order to promote human wellness, they will find ways to impel humans to get out of their houses, interact with other humans, and be in nature. Autonomous taxis will also be widespread and will have low fares, making it easier for people who are isolated due to low income or poor health (such as many elderly people) to go out.
  • Chatbots will steadily improve their “humanness” over the decade. The instances when AIs say or do something nonsensical will get less and less frequent. Dumber people, children, and people with some types of mental illness will be the first ones to start insisting their AIs are intelligent like humans. Later, average people will start claiming the same. By the end of the decade, a personal assistant AI like “Samantha” from the movie Her will be commercially available. AI personal assistants will have convincing, simulated personalities that seem to have the same depth as humans. Users will be able to pick from among personality profiles or to build their own.  
  • Chatbots will be able to have intelligent conversations with humans about politics and culture, to identify factually wrong beliefs, biases, and cognitive blind spots in individuals, and to effectively challenge them through verbal discussion and debate. The potential will exist for technology to significantly enlighten the human population and to reduce sociopolitical polarization. However, it’s unclear how many people will choose to use this technology. 
  • Turing-Test-capable chatbots will also supercharge the problem of online harassment, character assassination, and deliberate disinformation by spamming the internet with negative reviews, bullying messages, emails to bosses, and humiliating “deepfake” photos and videos of targeted people. Today’s “troll farms” where humans sit at computer terminals following instructions to write bad reviews for specific people or businesses will be replaced by AI trolls that can pump out orders of magnitude more content per day. And just as people today can “buy likes” for their social media accounts or business webpages, people in the future will be able, at low cost, to buy harassment campaigns against other people and organizations they dislike. Discerning between machine-generated and human-generated internet content will be harder and more important than ever.
  • House robots will start becoming common in rich countries. They will be slower at doing household tasks than humans, but will still save people hours of labor per week. They may or may not be humanoid. For the sake of safety and minimizing annoyance, most robots will do their work when humans aren’t around. As in, you would come home from work every day and find the floors vacuumed, the lawn mowed, and your laundered clothes in your dresser, with nary a robot in sight since it will have gone back into its closet to recharge. You would never hear the commotion of a clothes washing machine, a vacuum cleaner or a lawnmower. All the work would get done when you were away, as if by magic.
  • People will start having genuine personal relationships with AIs and robots. For example, people will resist upgrading to new personal assistant AIs because they will have emotional attachments to their old ones. The destruction of a helper robot or AI might be as emotionally traumatic to some people as the death of a human relative.
  • Farm robots that are better than humans at fine motor tasks like picking strawberries will start becoming widespread.
  • Self-driving cars will become cheap enough and practical enough for average-income people to buy, and their driving behavior will become as efficient as an average human driver’s. Over the course of this decade, there will be rapid adoption of self-driving cars in rich countries. Freed from driving, people will switch to doing things like watching movies/TV and eating. Car interiors will change accordingly. Road fatalities, and the concomitant demands for traffic police, paramedics, E.R. doctors, car mechanics, and lawyers, will sharply decrease. The car insurance industry will shrivel, forcing consolidation. (Humans in those occupations will also face increasing levels of direct job competition from machines over the course of the decade.)
  • Private owners of autonomous cars will start renting them out while not in use as taxis and package delivery vehicles. Your personal, autonomous car will drive you to work, then spend eight hours making money for you doing side jobs, and will be waiting for you outside your building at the end of the day.
  • The “big box” business model will start taking over the transportation and car repair industry thanks to the rise of electric, self-driving vehicles and autonomous taxis in place of personal car ownership. The multitudes of small, scattered car repair shops will be replaced by large, centralized car repair facilities that themselves resemble factory assembly lines. Self-driving vehicles will drive to them to have their problems diagnosed and fixed, sparing their human owners from having to waste their time sitting in waiting rooms.
  • The same kinds of facilities will make inroads into the junk yard industry, as they would have all the right tooling to cheaply and rapidly disassemble old vehicles, test the parts for functionality, and shunt them to disposal or individual resale. (The days of hunting through junkyards by yourself for a car part you need will eventually end–it will all be on eBay. )
  • Car ownership won’t die out because it will still be a status symbol, and having a car ready in your driveway will always be more convenient than having to wait even just two minutes for an Uber cab to arrive at the curb. People are lazy.
  • The ad hoc car rental model exemplified by autonomous Uber cabs and private people renting out their autonomous cars when not in use will face a challenge because daily demand for cars peaks during the morning and afternoon rush hours. In other words, everyone needs a car at the same time each day, so the ratio of cars to people can’t deviate much from, say, 1:2. Of course, if more people telecommuted (almost certain in the future thanks to better VR, faster broadband, and tech-savvy Millennials reaching middle age and taking over the workplace), and if flexible schedules became more widespread (also likely, but within certain limits since most offices can’t function efficiently unless they have “all hands on deck” for at least a few hours each day), the ratio could go even lower. However, there’s still a bottom limit to how few cars a country will need to provide adequate daily transportation for its people. (A toy version of this fleet-size arithmetic appears at the end of this list.)
  • Private delivery services will get cheaper and faster thanks to autonomous vehicles.
  • Automation will start having a major impact on the global economy. Machines will compensate for the shrinkage of the working-age human population in the developed world. Countries with “graying” populations like Japan and Germany will experience a new wave of economic growth. Demand for immigrant laborers will decrease across the world because of machines.
  • There will be a worldwide increase in the structural unemployment rate thanks to better and cheaper narrow AIs and robots. A plausible scenario would be for the U.S. unemployment rate to be 10%–which was last the case at the nadir of the Great Recession–but for every other economic indicator to be strong. The clear message would be that human labor is becoming decoupled from the economy.
  • Combining all the best AI and robotics technologies, it will be possible to create general-purpose androids that could function better in the real world (e.g. – perform in the workplace, learn new things, interact with humans, navigate public spaces, manage personal affairs) than the bottom 10% of humans (e.g. – elderly people, the disabled, criminals, the mentally ill, people with poor language abilities or low IQs), and in some narrow domains, the androids will be superhuman (e.g. – physical strength, memory, math abilities). Note that businesses will still find it better to employ task-specific, non-human-looking robots instead of general purpose androids.
  • By the end of this decade, only poor people, lazy people, and conspiracy theorists (like anti-vaxxers) won’t have their genomes sequenced. It will be trivially cheap, and in fact free for many people (some socialized health care systems will fully subsidize it), and enough will be known about the human genome to make it worthwhile to have the information.
  • Computers will be able to accurately deduce a human’s outward appearance based on only a DNA sample. This will aid police detectives, and will have other interesting uses, such as allowing parents to see what their unborn children will look like as adults, or allowing anyone to see what they’d look like if they were of the opposite sex (one sex chromosome replaced). 
  • Trivially cheap gene sequencing and vastly improved knowledge of the human genome will give rise to a “human genome black market,” in which people secretly obtain DNA samples from others, sequence them, and use the data for their own ends. For example, a politician could be blackmailed by an enemy who threatened to publish a list of his genetic defects or the identities of his illegitimate children. Stalkers (of celebrities and ordinary people) would also be interested in obtaining the genetic information of the people they were obsessed with. It is practically impossible to prevent the release of one’s DNA since every discarded cup, bottle, or utensil has a sample. 
  • Markets will become brutally competitive and efficient thanks to AIs. Companies will have a precise, real-time grasp of consumer demand thanks to surveillance, and consumers will be alerted to bargains by their personal AIs and devices (e.g. – your AR glasses will visually highlight good deals as you walk through the aisles of a store). Your personal assistant AIs and robots will look out for your self-interest by countering the efforts of other AIs to sway your spending habits in ways that benefit companies and not you.
  • “Digital immortality” will become possible for average people. Personal assistant AIs, robot servants, and other monitoring devices will be able, through observation alone, to create highly accurate personality profiles of individual humans, and to anticipate their behavior with high fidelity. Voices, mannerisms and other biometrics will be digitally reproducible without any hint of error. Digital simulacra of individual humans will be further refined by having them take voluntary personality tests, and by uploading their genomes, brain scans and other body scans. Even if all of the genetic and biological data couldn’t be made sense of at the moment it was uploaded to an individual’s digital profile, there will be value in saving it since it might be decipherable in the future. (Note that “digital immortality” is not the same as “mind uploading.”)
  • Life expectancy will have increased by a few years thanks to pills and therapies that slightly extend human lifespan. Like, you take a $20 pill each day starting at age 20 and you end up dying at age 87 instead of age 84.
  • Global oil consumption will peak as people continue switching to other power sources.
  • Earliest possible date for the first manned Mars mission.
  • Movie subtitles and the very notion of there being “foreign language films” will become obsolete. Computers will be able to perfectly translate any human language into another, to create perfect digital imitations of any human voice, and to automatically apply CGI so that the mouth movements of people in video footage matches the translated words they’re speaking. The machines will also be able to reproduce detailed aspects of an actor’s speech, such as cadence, rhythm, tone and timbre, emotion, and accent, and to convey them accurately in another language.
  • Computers will also be able to automatically enhance and upscale old films by accurately colorizing them, removing defects like scratches, and sharpening or focusing footage (one technique will involve interpolating high-res still photos of long-dead actors onto the faces of those same actors in low-res moving footage). Computer enhancement will be so good that we’ll be able to watch films from the early 20th century with near-perfect image and audio clarity.
  • CGI will get so refined that moviegoers with 20/20 vision won’t be able to see the difference between footage of unaltered human actors and footage of 100% CGI actors.
  • Lifelike CGI and “performance capture” will enable “digital resurrections” of dead actors. Computers will be able to scan through every scrap of footage with, say, John Wayne in it, and to produce a perfect CGI simulacrum of him that even speaks with his natural voice, and it will be seamlessly inserted into future movies. Elderly actors might also license movie studios to create and use digital simulacra of their younger selves in new movies. The results will be very fascinating, but might also worsen Hollywood’s problem with making formulaic content.
  • China’s military will get strong enough to defeat U.S. forces in the western Pacific. This means that, in a conventional war for control of the Spratly Islands and/or Taiwan, China would have >50% odds of winning. This shift in the local balance of power does not mean China will start a conflict. 
  • The quality and sophistication of China’s best military technology will surpass Russia’s best technology in all or almost all categories. However, it will still lag the U.S. 
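A side note on the shared autonomous-car prediction above (the one about rush hour limiting how far the cars-to-people ratio can fall): the floor on fleet size can be sketched with simple arithmetic. This is a toy model, not a transportation study; the city size, commute window, and trip length below are illustrative assumptions only.

```python
# Toy model of why peak commuting demand puts a floor under the size of a
# shared autonomous-car fleet. All inputs are illustrative assumptions.
commuters_in_peak = 1_000_000   # people who must travel during the rush window
peak_window_hours = 1.0         # length of the morning rush window
avg_trip_hours = 0.5            # average door-to-door commute time

# Ignoring repositioning time, each car can serve this many riders per window:
trips_per_car = peak_window_hours / avg_trip_hours   # 2 trips
cars_needed = commuters_in_peak / trips_per_car      # 500,000 cars
people_per_car = commuters_in_peak / cars_needed     # ratio of about 1:2

print(f"Fleet floor: {cars_needed:,.0f} cars "
      f"(about 1 car per {people_per_car:.0f} commuters)")
# Spreading commutes over a longer window (telecommuting, flexible hours)
# raises trips_per_car and lowers the floor -- the same levers the
# prediction above identifies.
```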

2040s

  • The world and peoples’ outlooks and priorities will be very different than they were in 2019. Cheap renewable energy will have become widespread and totally negated any worries about an “energy crisis” ever happening, except in exotic, hypothetical scenarios about the distant future. There will be little need for immigration thanks to machine labor and cross-border telecommuting (VR, telepresence, and remote-controlled robots will be so advanced that even blue-collar jobs involving manual labor will be outsourced to workers living across borders). Moreover, there will be a strong sense in most Western countries that they’re already “diverse enough,” and that there are no further cultural benefits to letting in more foreigners since large communities of most foreign ethnic groups will already exist within their borders. There will be more need than ever for strong social safety nets and entitlement programs thanks to technological unemployment. AI will be a central political and social issue. It won’t be the borderline sci-fi, fringe issue it was in 2019.
  • Automation, mass unemployment, wealth inequalities between the owners of capital and everyone else, and differential access to expensive human augmentation technologies (like genetic engineering) will produce overwhelming political pressure for some kind of wealth redistribution and social safety net expansion. Countries that have diligently made small, additive reforms as necessary over the preceding decades will be untroubled. However, countries that failed to adapt their political and economic systems will face upheaval.
  • 2045 will pass without the Technological Singularity happening. Ray Kurzweil will either celebrate his 97th birthday in a wheelchair, or as a popsicle frozen at the Alcor Foundation.
  • Supercomputers that match or surpass upper-level estimates of the human brain’s computational capabilities will cost a few hundred thousand to a few million dollars apiece, meaning tech companies and universities will be able to afford large numbers of them for AI R&D projects, accelerating progress in the field. Hardware will no longer be the limiting factor to building AGI. If it hasn’t been built yet, it will be due to failure to figure out how to arrange the hardware in the right way to support intelligent thought, and/or to a failure to develop the necessary software. 
  • With robots running the economy, it will be common for businesses to operate 24/7: restaurants will never close, online orders made at 3:00 am will be packed in boxes by 3:10 am, and autonomous delivery trucks will only stop to refuel, exchange cargo, or get preventative maintenance.
  • Advanced energy technology, robot servants, 3D printers, telepresence, and other technologies will allow people to live largely “off-grid” if they choose, while still enjoying a level of comfort that 2019 people would envy.
  • Recycling will become much more efficient and practical thanks to house robots properly cleaning, sorting, and crushing/compacting waste before disposing of it. Automated sorting machines at recycling centers will also be much better than they are today. Today, recycling programs are hobbled because even well-meaning humans struggle to remember which of their trash items are recyclable and which aren’t since the acceptable items vary from one municipality to the next, and as a result, recycling centers get large amounts of unusable material, which they must filter out at great cost. House robots would remember it perfectly.
  • Thanks to this diligence, house robots will also increase backyard composting, easing the burden on municipal trash services. 
  • It will be common for cities, towns and states to heavily restrict or ban human-driven vehicles within their boundaries. A sea change in thinking will happen as autonomous cars become accepted as “the norm,” and human-driven cars start being thought of as unusual and dangerous.
  • Over 90% of new car sales in developed countries will be for electric vehicles. Just as the invention of the automobile transformed horses into status goods used for leisure, the rise of electric vehicles will transform internal combustion vehicles into a niche market for richer people. 
  • A global “family tree” showing how all humans are related will be built using written genealogical records and genomic data from the billions of people who have had their DNA sequenced. It will become impossible to hide illegitimate children, and it will also become possible for people to find “genetic doppelgangers”–other people they have no familial relationship to, but with whom, by some coincidence, they share a very large number of genes. 
  • Improved knowledge of human genetics and its relevance to personality traits and interests will strengthen AI’s ability to match humans with friends, lovers, and careers. Rising technological unemployment will create a need for machines to match human workers with the remaining jobs in as efficient a manner as possible.
  • Realistic robot sex bots that can move and talk will exist. They won’t perfectly mimic humans, but will be “good enough” for most users. Using them will be considered weird and “for losers” at first, but in coming decades it will go mainstream, following the same pattern as Internet dating. [If we think of sex as a type of task, and if we agree that machines will someday be able to do all tasks better than humans, then it follows that robots will be better than humans at sex.]  

2050s

  • This is the earliest possible time that AGI/SAI will be invented. It will not be able to instantly change everything in the world or to initiate a Singularity, but it will rapidly grow in intelligence, wealth, and power. It will probably be preceded by successful computer simulations of the brains of progressively more complex model organisms, such as flatworms, fruit flies, and lab rats.
  • Humans will be heavily dependent upon their machines for almost everything (e.g. – friendship, planning the day, random questions to be answered, career advice, legal counseling, medical checkups, driving cars), and the dependency will be so ingrained that humans will reflexively assume that “The Machines are always right.” Consciously and unconsciously, people will yield more and more of their decision-making and opinion-forming to machines, and find that they and the world writ large are better off for it. This will be akin to having an angel on your shoulder watching your surroundings and watching you, and giving you constructive advice all the time. 
  • In the developed world, less than 50% of people between age 22 and 65 will have gainful full-time jobs. However, if unprofitable full-time jobs that only persist thanks to government subsidies (such as someone running a small coffee shop and paying the bills with their monthly UBI check) and full-time volunteer “jobs” (such as picking up trash in the neighborhood) are counted, most people in that age cohort will be “doing stuff” on a full-time basis.  
  • The doomsaying about Global Warming will start to quiet down as the world’s transition to clean energy hits full stride and predictions about catastrophes from people like Al Gore fail to pan out by their deadlines. Sadly, people will just switch to worrying about and arguing about some new set of doomsday prophecies about something else.
  • By almost all measures, standards of living will be better in 2050 than today. People will commonly have all types of wonderful consumer devices and appliances that we can’t even fathom. However, some narrow aspects of daily life are likely to worsen, such as overcrowding and further erosion of the human character. Just as people today have short memories and take too many things for granted, so shall people in the 2050s fail to appreciate how much the standard of living has risen since today, and they will ignore all the steady triumphs humanity has made over its problems, and by default, people will still believe the world is constantly on the verge of collapsing and that things are always getting worse.
  • Cheap desalination will provide humanity with unlimited amounts of drinking water and end the prospect of “water wars.” 
  • Mass surveillance and ubiquitous technology will have minimized violent crime and property crime in developed countries: It will be almost impossible to commit such crimes without a surveillance camera or some other type of sensor detecting the act, or without some device recording the criminal’s presence in the area at the time of the act. House robots will contribute by effectively standing guard over your property at night while you sleep. 
  • It will be common for people to have health monitoring devices on and inside of their bodies that continuously track things like their heart rate, blood pressure, respiration rate, and gene expression. If a person has a health emergency or appears likely to have one, his or her devices will send out a distress signal alerting EMS and nearby random citizens. If you walked up to such a person while wearing AR glasses, you would see their vital statistics and would receive instructions on how to assist them (e.g. – how to do CPR). Robots will also be able to render medical aid.
  • Cities and their suburbs across the world will have experienced massive growth since 2019. Telepresence, relatively easy off-grid living, and technological unemployment will not, on balance, have driven more people out of metro areas than have migrated into them. Farming areas full of flat, boring land will have been depopulated, and many farms will be 100% automated. The people who choose to leave the metro areas for the “wilderness” will concentrate in rural areas (including national parks) where the climate is good, the natural scenery is nice, and there are opportunities for outdoor recreation. Real estate prices will, in inflation-adjusted terms, be much higher in most metro areas and places with natural beauty than they were in 2020 because the “supply” of those prime locations is almost fixed, whereas the demand for them is elastic and will rise thanks to population growth, rising incomes, and the aforementioned technology advancements.
  • Therapeutic cloning and stem cell therapies will become useful and will effectively extend human lifespan. For example, a 70-year-old with a failing heart will be able to have a new one grown in a lab using his own DNA, and then implanted into his chest to replace the failing original organ. The new heart will be equivalent to the one he had at age 18, so it will last another 52 years before it too fails. In a sense, this will represent age reversal for one part of his body.
  • The first healthy clone of an adult human will be born.
  • Many factories, farms, and supply chains will be 100% automated, and it will be common for goods to not be touched by a human being’s hands until they reach their buyers. Robots will deliver Amazon packages to your doorstep and even carry them into your house. Items ordered off the internet will appear inside your house a few hours later, as if by magic. 
  • Smaller versions of the robots used on automated farms will be available at low cost to average people, letting them effortlessly create backyard gardens. This will boost global food production and let people have greater control over where their food comes from and what it contains. 
  • The last of America’s Cold War-era weapon platforms (e.g. – the B-52 bomber, F-15 fighter, M1 Abrams tank, Nimitz aircraft carrier) will finally be retired from service. There will be instances where four generations of people from the same military family served on the same type of plane or ship. 
  • Cheap guided bullets, which can make midair course changes and be fired out of conventional man-portable rifles, will become common in advanced armies. 
  • Personal “cloaking devices” made of clothes studded with pinhole cameras and thin, flexible sheets of LEDs, colored e-ink, or some metamaterial with similar abilities will be commercially available. The cameras will monitor the appearance of the person’s surroundings and tell the display pixels to change their colors to match. Ski masks made of the same material would let wearers change their facial features, fooling most face recognition cameras and certainly fooling the unaided eyes of humans. The pixels could also be made to glow bright white, allowing the wearer to turn any part of his body into a flashlight. 
  • Powered exoskeletons will become practical for a wide range of applications, mainly due to improvements in batteries. For example, a disabled person could use a lightweight exoskeleton with a battery the size of a purse to walk around for a whole day on a single charge, and a soldier in a heavy-duty exoskeleton with a large backpack battery could do a day of marching on a single charge. (Note: Even though it will be technologically possible to equip infantrymen with combat exoskeletons, armies might reject the idea due to other impracticalities.)
  • There will be no technological or financial barrier to building powered combat exoskeletons that have cloaking devices. 
  • The richest person alive will achieve a $1 trillion net worth.
  • It will be technologically and financially feasible for small aircraft to produce zero net carbon emissions. The aircraft might use conventional engines powered by carbon-neutral synthetic fossil fuels that cost no more than normal fossil fuels, or they might have electric engines and very energy-dense batteries or fuel cells.

2060s

  • Machines will be better at satisfyingly matching humans with fields of study, jobs, friends, romantic partners, hobbies, and daily activities than most humans can do for themselves. Machines themselves will make better friends, confidants, advisers, and even lovers than humans. Additionally, machines will be smarter and more skilled than humans in most areas of knowledge and types of work. A cultural sea change will happen, in which most humans come to trust, rely upon, defend, and love machines.
  • House robots and human-sized worker robots will be as strong, agile, and dexterous as most humans, and their batteries will be energy-dense enough to power them for most of the day. A typical American family might have multiple robot servants that physically follow around the humans each day to help with tasks. The family members will also be continuously monitored and “followed” by A.I.s embedded in their portable personal computing devices and possibly in their bodies. 
  • Cheap home delivery of groceries, robot chefs, and a vast trove of free online recipes will enable people in average households to eat restaurant-quality meals at home every day, at low cost. Predictive algorithms that can appropriately choose new meals for humans based on their known taste preferences and other factors will determine the menu, and many people will face a culinary “satisfaction paradox.”
  • Machines will understand humans individually and at the species level better than humans understand themselves. They will have highly accurate personality models of most humans along with a comprehensive grasp of human sociology, human decision-making, human psychology, human cognitive biases, and human nature, and will pool the information to accurately predict human behavior. A nascent version of a 1:1 computer simulation of the Earth–with the human population modeled in great detail–will be created.
  • Machines will be better teachers than most trained humans. The former will have much sharper grasps of their pupils’ individual strengths, weaknesses, interests, and learning styles, and will be able to create and grade tests in a much fairer and less biased manner than humans. Every person will have his own tutor. 
  • There will be a small, permanent human presence on the Moon.
  • If a manned Mars mission hasn’t happened yet, then there will be intense pressure to do so by the centennial of the first Moon landing (1969).
  • The worldwide number of supercentenarians–people who are at least 110 years old–will be sharply higher than it was in 2019: Their population size could be 10 times bigger or more. 
  • Advances in a variety of technologies will make it possible to cryonically freeze humans in a manner that doesn’t pulverize their tissue. However, the technology needed to safely thaw them out won’t be invented for decades. 
  • China will effectively close the technological, military, and standard of living gaps with other developed countries. Aside from the unpleasantness of being a more crowded place, life in China won’t be worse overall than life in Japan or the average European country. Importantly, China’s pollution levels will be much lower than they are today thanks to a variety of factors.
  • Small drones (mostly aerial) will have revolutionized warfare, terrorism, assassinations, and crime and will be mature technologies. An average person will be able to get a drone of some kind that can follow his orders to find and kill other people or to destroy things.
  • Countermeasures against those small drones will also have evolved, and might include defensive drones and mass surveillance networks to detect drone attacks early on. The networks would warn people via their body-worn devices of incoming drone attacks or of sightings of potentially hostile drones. The body-worn devices, such as smartphones and AR glasses, might even have their own abilities to automatically detect drones by sight and sound and to alert their wearers.

2070s

  • 100 years after the U.S. “declared war” on cancer, there still will not be a “cure” for most types of cancer, but vaccination, early detection, treatment, and management of cancer will be vastly better, and in countries with modern healthcare systems, most cancer diagnoses will not reduce a person’s life expectancy. Consider that diabetes and AIDS were once considered “death sentences” that would invariably kill people within a few years of diagnosis, until medicines were developed that transformed them into treatable, chronic health conditions. 
  • Hospital-acquired infections will be far less of a problem than they are in 2020 thanks to better sterilization practices, mostly made possible by robots.
  • It will be technologically and financially feasible for large commercial aircraft to produce zero net carbon emissions. The aircraft might use conventional engines powered by synthetic fossil fuels, or they might have electric engines and very energy-dense batteries or fuel cells. 
  • Digital or robotic companions that seem (or actually are) intelligent, funny, and loving will be easier for humans to associate with than other humans.
  • Technology will enable the creation of absolute surveillance states, where all human behavior is either constantly monitored or is inferred with high accuracy based on available information. Even a person’s innermost thoughts will be knowable thanks to technologies that monitor him or her for the slightest things like microexpressions, twitches, changes in voice tone, and eye gazes. When combined with other data regarding how the person spends their time and money, it will be possible to read their minds. The Thought Police will be a reality in some countries.  

2100

  • Humans probably won’t be the dominant intelligent life forms on Earth.
  • Latest possible time that AGI/SAI will be invented. By this point, computer hardware will be so powerful that we could run 1:1 digital simulations of human brains. If our AI still falls far short of human-like general intelligence and creativity, then it might be that only organic substrates have the necessary properties to support them.
  • The worst case scenario is that AGI/Strong AI will not have been invented yet, but thousands of different types of highly efficient, task-specific Narrow AIs will have (often coupled to robot bodies), and they will fill almost every labor niche better than human workers ever could (“Death by a Thousand Cuts” job automation scenario). Humans will grow up in a world where no one has to work, and the notion of drudge work, suffering through a daily commute, and involuntarily waking up at 6:00 am five days a week is unfathomable. Every human will have machines that constantly monitor them or follow them around, and meet practically all their needs.
  • Telepresence technology will also be very advanced, allowing humans to do nearly any task remotely, from any other place in the world, in safety and comfort. This will include cognitive tasks and hands-on tasks. If any humans still have jobs, they’ll be able to work from anywhere.
  • The world could in many ways resemble Ray Kurzweil’s predicted Post-Singularity world. However, the improvements and changes will have accrued thanks to decades of steady effort by AGI/Strong AI. Everything will not instantly change on DD/MM/2045 as Kurzweil suggests it will.
  • Hundreds of millions, and possibly billions, of “digitally immortal avatars” of dead humans will exist, and you will be able to interact with them through a variety of means (in FIVR, through devices like earpieces and TV screens, in the real world if the avatar takes over an android body resembling the human it was based on). 
  • A weak sort of immortality will be available thanks to self-cloning, immortal digital avatars, and perhaps mind uploading. You could clone yourself and instruct your digital avatar–which would be a machine programmed with your personality and memories–to raise the clone and ensure it developed to resemble you. Your digital avatar might have an android body or could exist in a disembodied state. 
  • It will be possible to make clones of humans using only their digital format genomic data. In other words, if you had a .txt file containing a person’s full genetic code, you could use that by itself to make a living, breathing clone. Having samples of their cells would not be necessary. 
  • The “DNA black market” that arose in the 2030s will pose an even bigger threat since it will now be possible to use DNA samples alone or their corresponding .txt files to clone a person or to produce a sperm or egg cell and, in turn, a child. Potential abuses include random people cloning or having the children of celebrities they are obsessed with, or cloning billionaires in the hopes of milking the clones for money. Important people who might be targets of such thefts will go to pains to prevent their DNA from being known. Since dead people have no rights, third parties might be able to get away with cloning or making gametes of the deceased.
  • Life expectancy escape velocity and perhaps medical immortality will be achieved. It will come not from magical, all-purpose nanomachines that fix all your body’s cells and DNA, but from a combination of technologies, including therapeutic cloning of human organs, cybernetic replacements for organs and limbs, and stem cell therapies that regenerate ageing tissues and organs inside the patient’s body. The treatments will be affordable in large part thanks to robot doctors and surgeons who work almost for free, and to medical patents expiring.
  • All other aspects of medicine and healthcare will have radically advanced. There will be vaccines and cures for almost all contagious diseases. We will be masters of human genetic engineering and know exactly how to produce people that today represent the top 1% of the human race (holistically combining IQ, genetic health, physical attractiveness, and likable/prosocial personality traits). However, the value of even a genius-IQ human will be questionable since intelligent machines will be so much smarter.
  • Augmentative cybernetics (including direct brain-to-computer links) will exist and be in common use.
  • Full-immersion virtual reality (FIVR) will exist wherein AI game masters constantly tailor environments, NPCs and events to suit each player’s needs and to keep them entertained. Every human will have his own virtual game universe where he’s #1. With no jobs in the real world to occupy them, it’s quite possible that a large fraction of the human race will willingly choose to live in FIVR. (Related to the satisfaction paradox) Elements of these virtual environments could be pornographic and sexual, allowing people to gratify any type of sexual fetish or urge with computer-generated scenarios and partners. 
  • More generally, AIs and humans whose creativity is turbocharged by machines will create enjoyable, consumable content (e.g. – films, TV shows, songs, artwork, jokes, new types of meals) faster than non-augmented humans can consume it. As a simple example of what this will be like, assume you have 15 hours of free time per day, that you love spending it listening to music, and each day, your favorite bands produce 16 hours worth of new songs that you really like.
  • The vast majority of unaugmented human beings will no longer be assets that can invent things and do useful work: they will be liabilities that do (almost) everything worse than intelligent machines and augmented humans. Ergo, the size of a nation’s human population will subtract from its economic and military power, and radical shifts in geopolitics are possible. Geographically large but sparsely populated countries like Russia, Australia and Canada might become very strong.
  • The transition to green energy sources will be complete, and humans will no longer be net emitters of greenhouse gases. The means will exist to start reducing global temperatures to restore the Earth to its pre-industrial state, but people will resist because they will have gotten used to the warmer climate. People living in Canada and Russia won’t want their countries to get cold again.
  • Synthetic meat will taste no different from animal meat, and will be at least as cheap to make. The raising and/or killing of animals for food will be illegal in many countries, and trends will clearly show the practice heading for a worldwide ban.
  • The means to radically alter human bodies, alter memories, and alter brain structures will be available. The fundamental bases of human existence and human social dynamics will change unpredictably once differences in appearance/attractiveness, intelligence, and personality traits can be eliminated at will. Individuals won’t be defined by fixed attributes anymore.
  • Brain implants will make “telepathy” possible between humans, machines and animals. Computers, sensors and displays will be embedded everywhere in the built environment and in nature, allowing humans with brain implants to interface with and control things around them through thought alone. 
  • Brain implants and brain surgeries will also be used to enhance IQ, change personality traits, and strengthen many types of skills. 
  • Technologically augmented humans and androids will have many abilities and qualities that ancient people considered “Godlike,” such as medical immortality, the ability to control objects by thought, telepathy, perfect memories, and superhuman senses.
  • Flying cars designed to carry humans could be common, but they will be flown by machines, not humans. Ground vehicles will retain many important advantages (fuel efficiency, cargo capacity, safety, noise level, and more) and won’t become obsolete. Instead of flying cars, it’s more likely that there will be millions of small, autonomous helicopters and VTOL aircraft that will cheaply ferry people through dense, national networks of helipads and airstrips. Autonomous land vehicles would take passengers to and from the landing sites. (https://www.militantfuturist.com/why-flying-cars-never-took-off-and-probably-never-will/)
  • The notion of vehicles (e.g. – cars, planes, and boats) polluting the air will be an alien concept. 
  • Advanced nanomachines could exist.
  • Vastly improved materials and routine use of very advanced computer design simulations (including simulations done in quantum computers) will mean that manufactured objects of all types will be optimally engineered in every respect, and might seem to have “magical” properties. For example, a car will be made of hundreds of different types of alloys, plastics, and glass, each optimized for a different part of the vehicle, and car recalls will never happen since the vehicles will undergo vast amounts of simulated testing in every conceivable driving condition in 1:1 virtual simulations of the real world. 
  • Design optimization and the rise of AGI consumers will virtually eliminate planned obsolescence. Products deliberately engineered to fail after needlessly short periods, and “new” product lines that were no better than what they replaced but had non-interchangeable parts, would be exposed for what they were, and AGI consumers would refuse to buy them. Production will become much more efficient and far fewer things will be thrown out.
  • Relatively cheap interplanetary travel (probably just to Mars and to space stations and moons that are about as far as Mars) will exist.
  • Androids that are outwardly indistinguishable from humans will exist, and humans will hold no advantages over them (e.g. – physical dexterity, fine motor control, appropriateness of facial expressions, capacity for creative thought). Some androids will also be indistinguishable to the touch, meaning they will seem to be made of supple flesh and will be the same temperature as human bodies. However, their body parts will not be organic.
  • Sex robots will be indistinguishable from humans.
  • Robots that are outwardly identical to sci-fi and fantasy characters and extinct animals, like grey aliens, elves, and dinosaurs, will exist and will occasionally be seen in public. Some weird person will want their robot butler to look like bigfoot. 
  • Machines that are outwardly indistinguishable from animals will also exist, and they will have surveillance and military applications. 
  • Drones, miniaturized smart weapons, and AIs will dominate warfare, from the top level of national strategy down to the simplest act of combat. The world’s strongest military could, with conventional weapons alone, destroy most of the world’s human population in a short period of time. 
  • The construction and daily operation of prisons will have been fully automated, lowering the monetary costs of incarceration. As such, state prosecutors and judges will no longer feel pressure to let accused criminals have plea deals or to give them shorter prison sentences to ease the burdens of prison overcrowding and high overhead costs. 
  • The term “millionaire” will fall out of use in the U.S. and other Western countries since inflation will have rendered $1 million USD only as valuable as $90,000 USD was in 2019 (assuming a constant inflation rate of 3.0%; the arithmetic is worked out at the end of this list).
  • There will still be major wealth and income inequality across the human race. However, wealth redistribution, better government services, advances in industrial productivity, and better technologies will ensure that even people in the bottom 1% have all their basic and intermediate life needs met. In many ways, the poor people of 2100 will have better lives than the rich people of 2020.
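The “millionaire” item above is straightforward compound-inflation arithmetic. Here is the check, using only the 3.0% rate and 2019 baseline stated in the prediction:

```python
# Verify that at a constant 3.0% inflation rate, $1,000,000 in 2100 has
# roughly the purchasing power that $90,000 had in 2019.
INFLATION_RATE = 0.03
YEARS = 2100 - 2019                      # 81 years

price_level_growth = (1 + INFLATION_RATE) ** YEARS       # ~11x
real_value_in_2019_dollars = 1_000_000 / price_level_growth

print(f"Price level grows ~{price_level_growth:.1f}x over {YEARS} years")
print(f"$1,000,000 in 2100 ~ ${real_value_in_2019_dollars:,.0f} in 2019 dollars")
# Result: roughly $91,000 -- consistent with the ~$90,000 figure above.
```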

2101 – 2200 AD

  • Humans will definitely stop being the dominant intelligent life forms on Earth. 
  • Many “humans” will be heavily augmented through genetic engineering, other forms of bioengineering, and cybernetics. People who outwardly look like the normal humans of today might actually have extensive internal modifications that give them superhuman abilities. Non-augmented, entirely “natural” humans like people in 2019 will be looked down upon in the same way you might today look at a very low IQ person with sensory impairments. Being forced by your biology to incapacitate yourself for 1/3 of each day to sleep will be tantamount to having a medical disability. 
  • Due to a reduced or nonexistent need for sleep among intelligent machines and augmented humans and to the increased interconnectedness of the planet, global time zones will become much less relevant. It will be common for machines, humans, businesses, and groups to use the same clock–probably Coordinated Universal Time (UTC)–and for activity to proceed on a 24/7 basis, with little regard of Earth’s day/night cycle. 
  • Physical disabilities and defects of appearance that cause untold anguish to people in 2019 will be easily and cheaply fixable. For example, male-pattern baldness and obesity will be completely ameliorated with minor medical interventions like pills or outpatient surgery. Missing or deformed limbs will be easily replaced, all types of plastic surgery (including sex reassignment) will be vastly better and cheaper than today, and spinal cord damage will be totally repairable. The global “obesity epidemic” will disappear. Transsexual people will be able to seamlessly alter their bodies to conform with their preferred genders, or to alter their brains so their gender identities conform with the bodies they were born with. 
  • All sleep disorders will be curable thanks to cybernetics that can use electrical pulses to quickly initiate sleep states in human brains. The same kinds of technologies will also reduce or eliminate the need for humans to sleep, and will let people control their dreams.
  • Brain-computer interfaces will let people control, pre-program, and, to a limited extent, record their dreams. 
  • Almost all of today’s diseases will be cured.
  • The means to halt and reverse human aging will be created. The human population will come to be dominated by people who are eternally young and beautiful. 
  • Humans and machines will be immortal. Intelligent beings will find it terrifying and tragic to contemplate what it was like for humans in the past, who lived their lives knowing they were doomed to deteriorate and die. 
  • Extreme longevity, better reproductive technologies that eliminate the need for a human partner to have children, and robots that do domestic work and provide companionship (including sex) will weaken the institution of marriage more than any time in human history. An indefinite lifetime of monogamy will be impossible for most people to commit to. 
  • At reasonable cost, it will be possible for women to create healthy, genetically related children at any point in their lives, and without using the 2019-era, pre-menopausal egg freezing technique. For example, a 90-year-old, menopausal woman will be able to use reproductive technologies to make a baby that shares 50% of her DNA. 
  • Immortality, the automation of work, and widespread material abundance will completely transform lifestyles. With eternity to look forward to, people won’t feel pressured to get as rich as possible as quickly as possible. As stated, marriage will no longer be viewed as a lifetime commitment, and serial monogamy will probably become the norm. Relationships between parents and offspring will change as longevity erases the disparities in generational outlook and maturity that traditionally characterize parent-child interpersonal dynamics (e.g. – 300-year-old dad doesn’t know any better than his 270-year-old son). The “factory model” of public education–defined by conformity, rote memorization, frequent intelligence testing, and curricula structured to serve the needs of the job market–will disappear. The process of education will be custom-tailored to each person in terms of content, pacing, and style of instruction. Students will be much freer to explore subjects that interest them and to pursue those that best match their talents and interests. 
  • Radically extended human lifespans mean it will become much more common to have great-grandparents around. A cure for aging will also lead to families where members separated in age by many decades look the same age and have the same health. Additionally, older family members won’t be burdensome since they will be healthy.
  • Thanks to radical genetic engineering, there will be “human-looking,” biological people among us that don’t belong to our species, Homo sapiens. Examples could include engineered people who have 48 chromosomes instead of 46, people whose genomes have been shortened thanks to the deletion of junk DNA, or people who look outwardly human but who have radically different genes within their 46 chromosomes, so they have bird-like lungs. Such people wouldn’t be able to naturally breed with Homo sapiens, and would belong to new hominid species. 
  • Extinct species for which we have DNA samples (e.g. – from passenger pigeons on display in a museum) will be “resurrected” using genetic technology.
  • The technology for safely thawing humans out of cryostasis and returning them to good health will be created. 
  • Suspended animation will become a viable alternative to suicide. Miserable people could “put themselves under,” with instructions to not be revived until the ill circumstances that tormented them had disappeared or until cures for their mental and medical problems were found. 
  • A sort of “time travel” will become possible thanks to technology. Suspended animation will let people turn off their consciousnesses until any arbitrary date in the future. From their perspective, no time will have elapsed between being frozen and being thawed out, even if hundreds of years actually passed between those two events, meaning the suspended animation machine will subjectively be no different from a time machine to them. FIVR paired with data from the global surveillance networks will let people enter highly accurate computer simulations of the past. The data will come from sources like old maps, photos, videos, and the digital avatars of people, living and dead. The computer simulations of past eras will get less accurate as the dates get more distant thanks to a paucity of data.
  • It will be possible to upload human minds to computers. The uploads will not share the same consciousness as their human progenitors, and will be thought of as “copies.” Mind uploads will be much more sophisticated than the digitally immortal avatars that will come into existence in the 2030s.
  • Different types of AGIs with fundamentally different mental architectures will exist. For example, some AGIs will be computer simulations of real human brains, while others will have totally alien inner workings. Just as a jetpack and a helicopter enable flight through totally different approaches, so will different types of AGIs be capable of intelligent thought. 
  • Gold, silver, and many other “precious metals” will be worth far less than today, adjusting for inflation, because better ways of extracting them (including from seawater) will have been developed. Space mining might also massively boost supplies of the metals, depressing prices. Diamonds will be nearly worthless thanks to better techniques for making them artificially. 
  • The first non-token quantities of minerals derived from asteroid mining will be delivered to the Earth’s surface. (Finding an asteroid that contains valuable minerals, altering its orbit to bring it closer to Earth, and then waiting for it to get here will take decades. No one will become a trillionaire from asteroid mining until well into the 22nd century.)
  • Intelligent life from Earth will colonize the entire Solar System, all dangerous space objects in our System will be found, the means to deflect or destroy them will be created, and intelligent machines will redesign themselves to be immune to the effects of radiation, solar flares, gamma rays, and EMP. As such, natural phenomena (including global warming) will no longer threaten the existence of civilization.  Intelligent beings will find it terrifying and tragic to contemplate what it was like for humans in the past, who were confined to Earth and at the mercy of planet-killing disasters. 
  • “End of the World” prophecies will become far less relevant since civilization will have spread beyond Earth and could be indefinitely self-sustaining even if Earth were destroyed. Some conspiracy theorists and religious people would deal with this by moving on to belief in “End of the Solar System” prophecies, but these will be based on extremely tenuous reasoning. 
  • The locus of civilization and power in our Solar System will shift away from Earth. The vast majority of intelligent life forms outside of Earth will be nonhuman. 
  • A self-sustaining, off-world industrial base will be created.
  • Spy satellites with lenses big enough to read license plates and discern facial features will be in Earth orbit. 
  • Space probes made in our Solar System and traveling at sub-light speeds will reach nearby stars.
  • All of the useful knowledge and great works of art that our civilization has produced or discovered could fit into an advanced memory storage device the size of a thumb drive. It will be possible to pair this with something like a self-replicating Von Neumann Probe, creating small, long-lived machines that would know how to rebuild something exactly like our civilization from scratch. Among other data, they would have files on how to build intelligent machines and cloning labs, and files containing the genomes and mind uploads of billions of unique humans and non-human organisms. Copies of existing beings and of long-dead beings could be “manufactured” anywhere, and loaded with the personality traits and memories of their predecessors. Such machines could be distributed throughout our Solar System as an “insurance policy” against our extinction, or sent to other star systems to seed them with life. Some of the probes could also be hidden in remote, protected locations on Earth.
  • We will find out whether alien life exists on Mars and the other celestial bodies in our Solar System. 
  • Intelligent machines will get strong enough to destroy the human race, though it’s impossible to assign odds to whether they’ll choose to do so.
  • If the “Zoo Hypothesis” is right, and if intelligent aliens have decided not to talk to humans until we’ve reached a high level of intellect, ethics, and culture, then the machine-dominated civilization that will exist on Earth this century might be advanced enough to meet their standards. Uncontrollable emotions and impulses, illogical thinking, tribalism, self-destructive behavior, and fear of the unknown will no longer govern individual and group behavior. Aliens could reveal their existence knowing it wouldn’t cause pandemonium. 
  • The government will no longer be synonymous with slowness and incompetence since all bureaucrats will be replaced by machines.
  • Technology will be seamlessly fused with humans, other biological organisms, and the environment itself.  
  • It will be cheaper and more energy-efficient to grow or synthesize almost all types of food in labs or factories than to grow and harvest it in traditional, open-air farms. Shielded from the weather and pests and not dependent on soil quality, the amounts and prices of foods will be highly consistent over time, and worries about farmland muscling out or polluting natural ecosystems will vanish. Animals will no longer be raised for food. Not only will this benefit animals, but it will benefit humans, since it will eliminate a major source of communicable disease (e.g. – new influenza strains originate in farm animals and, thanks to close contact with human farmers, evolve to infect people through a process called “zoonosis”). Additionally, the means will exist to cheaply and artificially produce organic products, like wool and wood.
  • A global network of sensors and drones will identify and track every non-microscopic species on the planet. Cryptids like “bigfoot” and the “Loch Ness Monster” will be definitively proven to not exist. The monitoring network will also make it possible to get highly accurate, real-time counts of entire species populations. Mass gathering of DNA samples–either taken directly from organisms or from biological residue they leave behind–will also allow the full genetic diversity of all non-microscopic species to be known. 
  • That same network of sensors and machines will let us monitor the health of all the planet’s ecosystems and to intervene to protect any species. Interventions could include mass, painless sterilizations of species that are throwing the local ecology out of balance, mass vaccinations of species suffering through disease epidemics, reintroductions of extinct species, or widescale genetic engineering of a species. 
  • The technology and means to implement David Pearce’s global “benign stewardship” of nonhuman organic life will become available. (https://youtu.be/KDZ3MtC5Et8) After millennia of inflicting damage and pain on the environment and other species, humanity will have a chance to inaugurate an era free of suffering.
  • The mass surveillance network will also look skyward and see all anomalous atmospheric phenomena and UFOs.
  • Robots will clean up all of the garbage created in human history. 
  • Every significant archaeological site will be excavated and every shipwreck found. There will be no work left for people in the antiquities field. 
  • Dynamic traffic lane reversal will become the default for all major roadways, sharply increasing road capacity without compromising safety. Autonomous cars that can instantly adapt to changes in traffic direction and that can easily avoid hitting each other even at high speeds will enable the transformation.

How Ray Kurzweil’s 2019 predictions are faring (pt 4)

This is the fourth…and LAST…entry in my series of blog posts analyzing the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. You can view the previous installments of this series here:

Part 1

Part 2

Part 3

“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”

MOSTLY RIGHT

Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.

While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.

If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly, conspiracy thinking and closedmindedness), spending decisions, and mental health.

Human transactions and decisions still require a “human agent of responsibility”: Autonomous cars aren’t allowed to drive unless a human is in the driver’s seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Probably most decisions that humans make are done without consulting a “machine-based intelligence.” Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with little or no research behind them. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.

“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”

MOSTLY RIGHT

Gunfire detection systems, which consist of networks of microphones emplaced across an area and which use machine intelligence to recognize the sounds of gunshots and to triangulate their origins, were deployed in over 100 cities at the end of 2019. The dominant company in this niche industry, “ShotSpotter,” used human analysts to review its systems’ results before forwarding alerts to local police departments, so the systems were not truly automated, but nonetheless they made heavy use of machine intelligence.
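
To make the triangulation idea concrete, here’s a minimal sketch of how a network of microphones can locate a gunshot from differences in arrival times. The microphone positions, timestamps, and use of a generic least-squares solver are my own illustrative assumptions, not ShotSpotter’s actual method.

```python
# Minimal sketch of time-difference-of-arrival (TDOA) localization, assuming
# ideal, synchronized microphones and a known speed of sound. This is NOT
# ShotSpotter's real algorithm, just an illustration of the principle.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second at roughly 20 C

# Hypothetical microphone positions (x, y) in meters across a neighborhood.
mics = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])

# Hypothetical arrival times (seconds) of the same gunshot at each microphone.
true_source = np.array([320.0, 140.0])
arrival_times = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND

def residuals(guess):
    # Compare predicted arrival-time DIFFERENCES (relative to mic 0) against
    # the observed differences; the unknown firing time cancels out.
    predicted = np.linalg.norm(mics - guess, axis=1) / SPEED_OF_SOUND
    return (predicted - predicted[0]) - (arrival_times - arrival_times[0])

estimate = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print(f"Estimated gunshot location: {estimate}")  # close to [320, 140]
```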

Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has likely reduced violent crime, since it allows police to track down stolen vehicles and cars belonging to violent criminals faster than would otherwise have been possible.

In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.

At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.

“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”

RIGHT

In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.

Edward Snowden

Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.

During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.

With these events in mind, it’s quite true that concerns over digital privacy and confidentiality of personal data have become “major political and social issues,” and that there’s growing displeasure at the fact that practically every move each individual makes is “stored in a database somewhere.” The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the “General Data Protection Regulation” (GDPR).

Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being made public thanks to hacks has also bolstered interest in commercial encryption. “WhatsApp” is a common text messaging app that added end-to-end encryption to all of its communications in 2016 and had 1.5 billion users by 2019. “Tor” is an anonymity network, accessed through its own web browser, that encrypts and re-routes internet traffic; it became relatively well known during the 2010s after leaked documents showed that even the NSA had trouble spying on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection for little expense and hassle, are in common use.

“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”

RIGHT

It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.

In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”

The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.

There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.

“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”

PARTLY RIGHT

As I said in part 2 of this review, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training are very common, but most workers devote small shares of their time to them, and the fraction of time spent doing workplace training doesn’t seem significantly different from what it was when the book was published.

From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.

This means the prediction was partly right, but also of questionable significance in the first place.

“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”

MOSTLY RIGHT

The “Deep Dream” computer program made this surrealist portrait.

In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.

“Portrait of Edmond de Belamy”

In 2018, a different computer program produced a painting–“Portrait of Edmond de Belamy”–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as “a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence.” That seems to fulfill the second part of the prediction (“These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.”)
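
For readers unfamiliar with the technique, a GAN pits two neural networks against each other: a “generator” that tries to produce convincing fakes and a “discriminator” that tries to tell fakes from real examples. Below is a toy sketch of that adversarial loop on one-dimensional “data”; the tiny networks, fake dataset, and training settings are my own simplifications for illustration and bear no resemblance to the Obvious collective’s actual model.

```python
# Toy generative adversarial network (GAN) on 1-D data, for illustration only.
# Real image GANs are far larger, but the adversarial loop has the same shape.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # "real" samples ~ N(3, 0.5)
noise = lambda n: torch.randn(n, 8)                    # random generator input

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated fakes 0.
    real, fake = real_data(64), G(noise(64)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into outputting 1.
    g_loss = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(noise(5)).detach().squeeze())  # samples should drift toward the "real" mean of ~3
```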

Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called “EMMY” (an acronym for “Experiments in Musical Intelligence”) is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Recordings of EMMY’s compositions are available online; listen to a few of them and judge their quality for yourself.

Computer scientists at OpenAI have built a neural network called “Jukebox” that is even more advanced than EMMY, and which can produce songs that are complete with simulated human vocals. While the words don’t always make sense and there’s much room for improvement, most humans have no creative musical talent at all and couldn’t do any better, and the quality, sophistication, and coherence of the entirely machine-generated songs are very impressive (audio samples are available online).

Also at OpenAI, an artificial intelligence program called the “Generative Pretrained Transformer” was invented to understand and write text. In 2019, the second version of the program, “GPT-2,” made its debut, and showed impressive skill writing poetry, short news articles and other content, with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the “human range” of writing ability since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well-written in whatever its original language was.
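
Anyone curious can generate this kind of output themselves, since the GPT-2 weights were publicly released. Here’s a minimal sketch using Hugging Face’s transformers library; the prompt and sampling settings are arbitrary choices of mine, not the settings behind the samples quoted below.

```python
# Minimal sketch: sampling text from the publicly released GPT-2 model.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The meaning of life is"                     # arbitrary example prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k and temperature keep the output varied but readable.
output_ids = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```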

Much of GPT-2’s poetry is also as good as–or, as bad as–that written by its human counterparts:

And they have seen the last light fail;
By day they kneel and pray;
But, still they turn and gaze upon
The face of God to-day.

And God is touched and weeps anew
For the lost souls around;
And sorrow turns their pale and blue,
And comfort is not found.

They have not mourned in the world of men,
But their hearts beat fast and sore,
And their eyes are filled with grief again,
And they cease to shed no tear.

And the old men stand at the bridge in tears,
And the old men stand and groan,
And the gaunt grey keepers by the cross
And the spent men hold the crown.

And their eyes are filled with tears,
And their staves are full of woe.
And no light brings them any cheer,
For the Lord of all is dead

In conclusion, the prediction is right that there were “virtual artists” in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and “humanness” to be of interest for reasons other than the novelties of their origins. They’ve raised serious questions among humans about the nature of creative thinking, and about whether machines are capable of it or soon will be. Finally, the virtual artists were “affiliated with” or, more accurately, owned and controlled by groups of humans.

“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”

UNCLEAR

It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?

Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
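
The core trick behind pitch correction is simple to state, even if production tools add a lot of polish on top of it: estimate the sung pitch, snap it to the nearest note, and shift the audio by the difference. Here’s a toy sketch of just the “snap to the nearest semitone” step; it illustrates the idea and is not Auto-Tune’s actual algorithm, which works frame-by-frame on real audio.

```python
# Toy sketch of the "snap to nearest note" math at the heart of pitch correction.
import math

A4 = 440.0  # reference tuning, Hz

def snap_to_semitone(freq_hz):
    """Return (corrected frequency in Hz, correction in semitones) for a sung pitch."""
    midi = 69 + 12 * math.log2(freq_hz / A4)      # convert Hz to a MIDI note number
    nearest = round(midi)                          # nearest equal-tempered note
    corrected = A4 * 2 ** ((nearest - 69) / 12)    # convert back to Hz
    return corrected, nearest - midi

# A singer aiming for A4 (440 Hz) but landing about 30 cents flat:
corrected, shift = snap_to_semitone(432.5)
print(f"corrected to {corrected:.1f} Hz, shifted by {shift:+.2f} semitones")
```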

If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.

“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”

WRONG

Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.

That said, the video game industry’s revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry’s 2018 figure was $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of “E-Sports”).

Revenues from virtual reality games totaled $1.2 billion in 2019, meaning over 99% of the video game industry’s revenues that year DID NOT come from “virtual-experience software.” The overwhelming majority of video games were played on flat TV screens and monitors that display 2D images only. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you’re actually there in the simulated environment. While they don’t meet the technical definition of being “virtual reality” games, some of them are so engrossing that they might as well be.

“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”

MOSTLY WRONG

Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.

In other words, the world’s greatest military power only allocates 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that it is not the primary threat to international security, either.
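
For transparency, here’s the back-of-the-envelope arithmetic behind that 2% figure, using the FY2019 numbers cited above (all in billions of dollars).

```python
# Cybersecurity's share of U.S. defense-related spending, FY2019 (figures from the text).
cyber = 16.6           # total federal cybersecurity spending
base_budget = 726.2    # "base" defense budget
overseas_ops = 77.0    # overseas contingency operations

share = cyber / (base_budget + overseas_ops)
print(f"{share:.1%}")  # roughly 2.1%
```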

Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big of a threat as cyberattackers. In fact, most of military spending and training in the U.S. and all other countries is still devoted to preparing for traditional warfare between nation-states, as evidenced by things like the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and time spent practicing for large battles between organized foes.

“Small groups” of terrorists inflict disproportionate amounts of damage against society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.

Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which only killed five people, but nonetheless bore some testament to Kurzweil’s assessment of bioterrorism’s potential threat), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made it more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.

Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.

“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”

UNCLEAR

What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.

What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.

A “Phantom” ultralight plane. Is it fair to call this “tiny”?

“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it to be terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: The Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).

With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.

If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.

The RQ-11B Raven drone could be considered a “tiny flying weapon.”

If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.

At 100mm long and 120mm wide along its main rotor, the PD-100 drone is as small as a large dragonfly.

The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, indicating that no one is actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.

Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.

“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”

PARTLY RIGHT

The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.

The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first, fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that number climbed further, to 5,000 genetic disorders.

The cost of sequencing a human genome sharply dropped, making it possible to do genome-wide association studies, and for middle income people to have their personal genomes sequenced.

However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, just understanding what genes themselves do is not enough to understand human development or disease susceptibility.
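
To give a flavor of how a GWAS works, here’s a stripped-down sketch: for each genetic variant, test whether the number of copies a person carries (0, 1, or 2) is statistically associated with the trait, then correct for the enormous number of tests. The simulated genotypes, trait, and use of a simple per-variant regression are my own illustrative assumptions; real studies use far larger samples and more sophisticated models and corrections.

```python
# Stripped-down sketch of a genome-wide association study (GWAS):
# test each variant's genotype (0/1/2 copies) against a trait, one at a time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_snps = 2000, 500

# Simulated genotypes: 0, 1, or 2 copies of the minor allele at each variant.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))

# Simulated trait: mostly noise, but variant #42 has a small real effect,
# mimicking the "many genes, small effects" picture described above.
trait = 0.4 * genotypes[:, 42] + rng.normal(size=n_people)

p_values = np.array(
    [stats.linregress(genotypes[:, j], trait).pvalue for j in range(n_snps)]
)

# Bonferroni correction for testing many variants at once.
hits = np.where(p_values < 0.05 / n_snps)[0]
print("variants passing the significance threshold:", hits)  # should include 42
```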

Returning to the text of the prediction, the meaning of “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.

The aging process is more well-understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.

“The expected life span…[is now] over one hundred.”

WRONG

The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.

“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”

WRONG

Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.

Top national security experts in the U.S. also assigned biological weapons low priority, as evidenced in the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report only mentions “biological weapons/warfare” twice. By contrast, “migration/migrants/immigration” appears 11 times, “nuclear weapon” eight times, and “ISIS” 29 times.

As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.

Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it’s unclear if the “bar” has gotten low enough for a graduate student to do it. Making a pathogen in a lab that has the qualities necessary for a biological weapon, verifying its effects, purifying it, creating a delivery system for it, and disseminating it–all without being caught before completion or inadvertently infecting yourself with it before the final step–is much harder than hysterical news articles and self-interested talking head “experts” suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds and with levels of expertise exceeding a typical graduate student, years of work, and millions of dollars.

“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”

RIGHT

The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.

Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.

“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”

MOSTLY RIGHT

Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren’t considered accurate enough to “diagnose” health conditions. Rather, their role is to detect and alert wearers to signs of potential health problems, whereupon the wearers consult medical professionals with more advanced machinery and receive a diagnosis.

The Apple Watch Series 5

By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2” and the “Apple Watch Series 4 and 5” had FDA-approved electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, 2,000 received alerts in 2018 from their devices of possible heartbeat problems. Fifty-seven percent of people in that subset sought medical help upon getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately, 84% of them were confirmed to have atrial fibrillation.

The Apple Watches also have “hard fall” detection features, which use accelerometers to recognize when their wearers suddenly fall down and then don’t move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury due to a fall count as an “acute health condition” per the prediction’s text?).
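
The “hard fall” logic is conceptually simple: look for a sudden spike in acceleration followed by a stretch of near-stillness. Here’s a toy sketch of that idea on simulated accelerometer readings; the thresholds and time windows are invented illustrative values, not Apple’s actual parameters.

```python
# Toy sketch of accelerometer-based fall detection: a sharp impact spike
# followed by a period of little movement. Thresholds are made up for illustration.
import numpy as np

def looks_like_hard_fall(accel_g, hz=50, impact_g=3.0, still_g=1.1,
                         settle_secs=1, still_secs=10):
    """accel_g: 1-D array of acceleration magnitudes in g, sampled at `hz`."""
    accel_g = np.asarray(accel_g)
    settle, window = int(settle_secs * hz), int(still_secs * hz)
    for i in np.where(accel_g > impact_g)[0]:          # candidate impact moments
        after = accel_g[i + settle : i + settle + window]
        # Near 1 g and barely varying afterwards = the wearer is lying still.
        if len(after) == window and after.max() < still_g:
            return True
    return False

hz = 50
quiet = np.full(20 * hz, 1.0)                          # standing still (~1 g)
impact = np.array([1.0, 4.5, 2.0])                     # sudden impact spike
lying_still = np.random.default_rng(1).normal(1.0, 0.02, 15 * hz)

print(looks_like_hard_fall(np.concatenate([quiet, impact, lying_still])))  # True
```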

A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by a small company called “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.

Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. The devices are reasonably accurate, and are typically strapped to the wearer’s upper arm or waist if they are jogging, or kept in a pocket when doing other types of activity. Having a smartphone in your pocket isn’t literally the same as having it “built into [your] clothing” as the prediction says, but it’s close enough to satisfy the spirit of the prediction. In fact, being able to easily insert and remove a device into any article of clothing with a pocket is better than having a device integrated into the clothing since it allows for much more flexibility of attire–if you want to try out a new jogging route and also want to measure how long it is, you don’t have to remember to wear your one and only T-shirt with the built-in activity monitoring devices.

Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, health-tracking jewelry was still a tiny niche in 2019 (a few “smart rings” existed, but they had small user bases), so that part of the prediction largely went unfulfilled.

“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”

RIGHT

In 2014, a chatbot named “Eugene Goostman” made global headlines by “passing the Turing Test” at an event held at the University of Reading. However, AI experts soon determined it was invalid because the dialog between the human judges and the machine only lasted five minutes. Ray Kurzweil himself even interacted with Eugene Goostman and posted the conversation on his website to show the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:

Kurzweil: How old are you?
Eugene: I’m a little boy. 13 years old.

Kurzweil: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!

Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

Kurzweil: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!

In 2018, a Google AI program called “Duplex” also made headlines for “passing the Turing Test” in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While an impressive technological feat, experts again disagreed with the media’s portrayal of its capabilities, and pointed out that the human-machine interactions weren’t valid Turing Tests because they were too short and focused on a narrow subject of conversation.

“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”

RIGHT

The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness.

Thoughtful articles on machine consciousness, written by experts whose credentials are relevant to the subject, are not hard to find, and countless more essays, speeches, and panel discussions about it are available on the internet.

“Sophia” the robot

Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.

“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”

RIGHT

Critics often–and rightly–point out that the most impressive “A.I.s” owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.
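
The point about human labor is easy to see in miniature: even the simplest image classifier only “learns” because humans supplied correctly labeled examples first. Here’s a minimal sketch using a small, human-labeled digits dataset that ships with scikit-learn; it illustrates supervised learning in general, not any particular commercial system.

```python
# Minimal sketch of supervised image recognition: the classifier's "skill"
# comes entirely from images that humans already labeled (the digits 0-9).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                       # 8x8 images plus human-assigned labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)    # a deliberately simple model
model.fit(X_train, y_train)                  # "learning" = fitting to human labels

print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```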

Finally, even the smartest and most capable machines can’t turn themselves on without human help, and still have very “brittle” and task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars, where the computers were smart enough to drive safely by themselves in almost all road conditions, but laws required the vehicles to watch the human in the driver’s seat and stop if he or she wasn’t paying attention to the road and touching the controls.

Well, well, well…that’s it. I have finally come to the end of my project to review Ray Kurzweil’s predictions for 2019. This has been the longest single effort in the history of my blog, and I’m glad the next round of his predictions pertains to 2029, so I can have time to catch my breath. I would say the experience has been great, but like the whole year of 2020, I’m relieved to be able to turn the page and move on.

Happy New Year!

Links:

  1. Advances in AI during the 2010s forced humans to examine the specialness of human thinking, whether machines could also be intelligent and creative and what it would mean for humans if they could.
    https://www.bbc.com/news/business-47700701
  2. Andrew Yang made technological unemployment and universal basic income (UBI) major components of his 2020 U.S. Presidential campaign platform.
    https://en.wikipedia.org/wiki/Andrew_Yang#2020_presidential_campaign
  3. An article explaining “acoustic gunshot detection”:
    https://www.eff.org/pages/gunshot-detection
  4. The “ShotSpotter” gunshot detection system was emplaced in over 100 cities in 2019.
    https://www.startribune.com/as-gunfire-continues-in-st-paul-so-does-shotspotter-debate/565382652/
  5. This 2019 article from Dayton shows a correlation between the presence of license plate readers and a decrease in violent crime.
    https://www.daytondailynews.com/news/area-police-look-to-license-plates-readers-as-crime-fighting-tool/ESQLILHQP5HJTCIVJL6IJ6T7VU/
  6. In 2018, a wanted criminal was arrested in China after facial recognition cameras identified him at a concert, out of a crowd of 60,000 people.
    https://www.bbc.com/news/world-asia-china-43751276
  7. Edward Snowden’s key revelations about electronic spying.
    https://mashable.com/2014/06/05/edward-snowden-revelations/
  8. An incomplete list of data hacks that happened in the 2010s. Hundreds of millions of people had important personal data compromised.
    https://www.cnn.com/2019/07/30/tech/biggest-hacks-in-history/index.html
  9. A list of commonly used encrypted messaging apps in 2019.
    https://heimdalsecurity.com/blog/the-best-encrypted-messaging-apps/
  10. In 2018, VPNs were widely used on every continent. Forty-four percent of Indonesian internet users had them.
    https://blog.globalwebindex.com/chart-of-the-day/vpn-usage-2018/
  11. If obesity rates are any indication, people in the 2010s were not too poor to feed themselves.
    https://academic.oup.com/eurpub/article/23/3/464/536242
  12. In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve.
    https://www.factcheck.org/2013/03/bloombergs-obesity-claim/
  13. Several studies done during the 2010s indicated that governments would save money if they gave the homeless free apartments.
    https://www.vox.com/2014/5/30/5764096/homeless-shelter-housing-help-solutions
  14. A 2016 article about Google’s “Deep Dream” program, which can make surreal, artistic images.
    https://www.theguardian.com/artanddesign/2016/mar/28/google-deep-dream-art
  15. A computer-generated painting, “Portrait of Edmond de Belamy,” sold for $423,500 in 2018. Have YOU ever made a painting worth that much money?
    https://edition.cnn.com/style/article/obvious-ai-art-christies-auction-smart-creativity/index.html
  16. “Obvious” is a “collective” of humans and computers that produces acclaimed art.
    https://obvious-art.com/page-about-obvious/
  17. “EMMY” is a machine that can write decent instrumental songs.
    https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/
  18. OpenAI’s “Jukebox” could even write songs that had simulated human voices singing.
    https://openai.com/blog/jukebox/
  19. Samples of GPT-2’s poetry.
    https://www.gwern.net/GPT-2
  20. Samples of GPT-2’s short news articles and written responses to prompts.
    https://openai.com/blog/better-language-models/
  21. “Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”?
    https://en.wikipedia.org/wiki/Auto-Tune
  22. The virtual reality gaming industry had about $1.2 billion in revenues in 2019.
    https://www.juniperresearch.com/press/press-releases/virtual-reality-games-revenues-reach-8-bn-2023
  23. In 2017, terrorists killed 14,300 people globally.
    https://www.jewishvirtuallibrary.org/statistics-on-incidents-of-terrorism-worldwide
  24. The U.S. spent $16.6 billion on cybersecurity in FY2019.
    https://www.fedscoop.com/cybersecurity-budget-2020-trump-white-house/
  25. The U.S. military’s “base” defense budget was $726.2 billion in FY2019.
    https://fas.org/sgp/crs/natsec/R44519.pdf
  26. The U.S. spent $33.6 billion on its nuclear forces in FY2019.
    https://www.cbo.gov/system/files/2019-01/54914-NuclearForces.pdf
  27. The “Phantom X1” ultralight plane.
    https://en.wikipedia.org/wiki/Phantom_X1
  28. Data for several “tiny” flying drones in use with the U.S. Navy in 2019.
    https://www.navy.mil/DesktopModules/ArticleCS/Print.aspx?PortalId=1&ModuleId=724&Article=2159299
  29. Data on the U.S. Army’s unmanned drones, including “tiny” ones, from the same period.
    https://fas.org/irp/program/collect/uas-army.pdf
  30. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles.
    https://www.csis.org/analysis/us-military-forces-fy-2020-air-force
  31. We recently discovered how a mutation in the “SF3B1” gene changes intracellular activity in ways that raise cancer risk.
    https://www.fredhutch.org/en/news/center-news/2019/10/sf3b1-cancer-mutation.html
  32. The Human Genome Project led to major cost improvements to gene sequencing technology, and to the discovery of many disease-associated genes.
    https://unlockinglifescode.org/learn/human-genome-project
  33. We have a better understanding of how cell-level molecular machinery contributes to aging.
    https://pure.au.dk/ws/files/52135662/DemirovicRattanExpGer13.pdf
  34. Official 2018 life expectancy figures for the U.S. and Japan:
    https://www.cdc.gov/nchs/products/databriefs/db355.htm
    https://www.nippon.com/en/features/h00250/life-expectancy-for-japanese-men-and-women-at-new-record-high.html
  35. The 2019 Worldwide Threat Assessment barely mentions biological weapons.
    https://www.dni.gov/files/ODNI/documents/2019-ATA-SFR—SSCI.pdf
  36. Pfizer’s COVID-19 vaccine is the first to incorporate mRNA. The new technology could lead to other vaccines that save millions of lives.
    https://www.wfaa.com/article/news/health/coronavirus/vaccine/what-is-an-mrna-covid-19-vaccine-and-how-does-it-differ-from-other-vaccines/287-240b8181-f13f-47a4-9514-9b6b30988d32
    http://www.rationaloptimist.com/blog/mrna-vaccines-could-revolutionise-medicine/
  37. Several smart watches available in 2019 had ECG monitors.
    https://www.reviewsbreak.com/best-ecg-smartwatch/
    https://www.theverge.com/2018/9/13/17855006/apple-watch-series-4-ekg-fda-approved-vs-cleared-meaning-safe
  38. In 2019, Apple Watches with ECG monitors detected atrial fibrillation events in almost 2,000 people.
    https://news.trust.org/item/20190316134851-5cktc/
  39. The Apple Watch’s “hard fall” detection feature might have already saved the lives of several injured people.
    https://www.nbcnews.com/news/us-news/apple-watch-s-hard-fall-feature-automatically-calls-911-hiker-n1070471
  40. The “HeartGuide” smart watch can monitor blood pressure.
    https://www.medtechdive.com/news/fda-cleared-wearable-blood-pressure-device-hits-market/544908/
  41. The media wrongly declared in 2014 that the “Eugene Goostman” chatbot had passed the Turing Test.
    https://www.bbc.com/news/technology-27762088
    https://www.kurzweilai.net/mt-notes-on-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test
  42. Google’s “Duplex” AI could masquerade as human for short conversations.
    https://digital.hbs.edu/platform-rctom/submission/google-duplex-does-it-pass-the-turing-test/
  43. The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks.
    https://www.ersj.eu/journal/1245
  44. Facebook’s image recognition feature relied on a massive training set of data prepared by humans.
    https://engineering.fb.com/2018/05/02/ml-applications/advancing-state-of-the-art-image-recognition-with-deep-learning-on-hashtags/

How Ray Kurzweil’s 2019 predictions are faring (pt 3)

This is the third entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My previous entries on this subject can be found here:

Part 1
Part 2

“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”

PARTLY RIGHT

While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”

“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”

WRONG

As stated in previous installments of this analysis, the computerized glasses, goggles and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” that would have allowed users to see simulated 3D images of people and other things in their proximities. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally do 2D video calls using cameras and traditional screen displays.

Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how it’s possible my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by people on the other end of a call, and the seemingly small extra amount of effort required to do a video call compared to a mere voice call is actually a larger barrier to the former than futurists 20 years ago probably thought it would be.

“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”

MOSTLY WRONG

As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.

Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.

A depiction of a human eye’s horizontal field of view.

A human being’s field of vision is a front-facing, flared-out “cone” with a 210 degree horizontal arc and a 150 degree vertical arc. This means that if you put a concave display in front of a person’s face that was big enough to fill those degrees of horizontal and vertical width, it would fill the person’s entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.

If this concave screen’s pixels were squares measuring one degree of length to a side, then the screen would look like a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than how he normally sees. However, lab tests show that if we shrink the pixels to 1/60th that size, so the concave screen is a grid of 12,600 x 9,000 pixels, then the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality is said to be “lifelike.”
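
For readers who want to check the arithmetic, here is a tiny Python sketch using only the numbers from the paragraph above (the field-of-view and pixels-per-degree figures come straight from that paragraph; nothing else is assumed):

```python
# Back-of-the-envelope arithmetic for a "lifelike" wraparound display.
H_FOV_DEG = 210                      # horizontal field of view, degrees
V_FOV_DEG = 150                      # vertical field of view, degrees
PIXELS_PER_DEGREE_LIFELIKE = 60      # resolution at which pixels become indistinguishable

coarse = (H_FOV_DEG * 1, V_FOV_DEG * 1)                        # 1 pixel per degree: blocky
lifelike = (H_FOV_DEG * PIXELS_PER_DEGREE_LIFELIKE,
            V_FOV_DEG * PIXELS_PER_DEGREE_LIFELIKE)            # 60 pixels per degree

print("blocky display:  ", coarse)                    # (210, 150)
print("lifelike display:", lifelike)                  # (12600, 9000)
print("total pixels:    ", lifelike[0] * lifelike[1])  # 113,400,000
```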

The “Varjo VR-1” virtual reality goggles

No commercially available VR goggles have anything close to lifelike displays, either in terms of field of view or 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they have 60-pixels-per-degree resolution, but only in the central portions of their display screens, where the user’s eyes are usually looking. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves with a very high-quality 4K camera, and you used Varjo VR-1 goggles to view the live footage while keeping your eyes focused on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.

Problematically, a pair of Varjo VR-1’s costs $6,000. Also, in 2019, it is very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you would be able to see most of the other person’s face.

VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.

“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”

MOSTLY RIGHT

In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.

The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.
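
For context, speech-to-speech translation apps are generally built as a chain of three components: speech recognition, text translation, and speech synthesis. The Python sketch below shows that structure; the function names are hypothetical placeholders of my own, not the actual API of Google Translate or any other product.

```python
# Hedged sketch of the typical speech-to-speech translation pipeline.
# The three stage functions are placeholders, not real library calls.

def speech_to_text(audio: bytes, language: str) -> str:
    """Stage 1: automatic speech recognition (placeholder)."""
    raise NotImplementedError

def translate_text(text: str, source: str, target: str) -> str:
    """Stage 2: machine translation of the transcript (placeholder)."""
    raise NotImplementedError

def text_to_speech(text: str, language: str) -> bytes:
    """Stage 3: speech synthesis in the target language (placeholder)."""
    raise NotImplementedError

def speech_to_speech(audio: bytes, source: str, target: str) -> bytes:
    """Chain the three stages together, as most 2019-era apps did."""
    transcript = speech_to_text(audio, source)
    translation = translate_text(transcript, source, target)
    return text_to_speech(translation, target)
```

Because errors compound across the three stages, the end-to-end output tends to be “fair-to-good” rather than professional quality, which fits the assessment above.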

“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”

MOSTLY RIGHT

Reading text is easily and commonly done off of smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, and not three as the prediction specified.

As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).

“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”

WRONG

Aside from a few expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is the video game control pad, which can vibrate to crudely approximate the feeling of shooting a gun or being next to an explosion.

“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”

WRONG

Though video phone technology has made remote doctor appointments more common, technology has not yet made it possible for doctors to remotely “touch” patients for physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices that allow people to remotely send and receive physical force to one another (called “teledildonics” when designed specifically for sexual uses) exist, but they are too expensive and technically limited to find widespread use.

“Rapid economic expansion and prosperity has continued.”

PARTLY RIGHT

Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?

U.S. real GDP growth rate (year-over-year)

The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.

OECD GDP growth rate from 1998 – 2019

Above is a chart I made using data for the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.

Global GDP growth rate from 1998 – 2019

And this final chart shows global GDP growth over the same period.

Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it–even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. The fact that something as bad as the Great Recession happened at all means the prediction is wrong in an important sense, as it implied that economic growth would be continuous, but it wasn’t since it went negative for most of 2009, in the worst downturn since the 1930s.

At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.

While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods do almost look like mirror images of each other. (Additionally, if we assume that adjustment happened because of the actions of wiser financial regulators who kept the lending bubbles and fake investments from coming into existence in the first place, then we can also assume that stopped the Great Recession from happening, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, then we see that economic growth for the time period from 1998 – December 2008 was roughly the same as the growth from January 2009 – December 2019.

“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”

WRONG

“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.

As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involving to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.

“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”

MOSTLY WRONG

The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
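
To illustrate what those preset “stops” amount to in practice, here is a minimal Python sketch. The order types, trigger prices, and ticker symbol are hypothetical examples of my own, not any real broker’s interface.

```python
# Sketch of automated "stop" orders: the program acts on the trader's behalf
# once a price condition is met, with no human in the loop at execution time.
from dataclasses import dataclass

@dataclass
class StopOrder:
    symbol: str
    action: str      # "BUY" or "SELL"
    quantity: int
    trigger: float   # price at which the order fires

def check_orders(orders, latest_prices):
    """Return the orders whose trigger condition has been reached."""
    fired = []
    for order in orders:
        price = latest_prices.get(order.symbol)
        if price is None:
            continue
        if order.action == "SELL" and price <= order.trigger:    # stop-loss
            fired.append(order)
        elif order.action == "BUY" and price >= order.trigger:   # buy-stop
            fired.append(order)
    return fired

orders = [StopOrder("XYZ", "SELL", 100, trigger=45.00),
          StopOrder("XYZ", "BUY", 50, trigger=55.00)]
print(check_orders(orders, {"XYZ": 44.10}))   # only the stop-loss sell fires
```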

“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”

PARTLY RIGHT

Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.

The iRobot Roomba 960 is a highly rated robot vacuum cleaner.

No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.

“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”

WRONG*

The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.

Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but will become mandatory in the U.S. starting in 2022.
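
To show the kind of logic a system that is “ready to take control” actually runs, here is a hedged sketch of the time-to-collision calculation at the heart of automated emergency braking. The 1.5-second threshold and the example numbers are illustrative assumptions of my own, not any manufacturer’s specification.

```python
# Hedged sketch of automated emergency braking (AEB) decision logic.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:          # not closing on the obstacle
        return float("inf")
    return distance_m / closing_speed_mps

def should_auto_brake(distance_m, closing_speed_mps, driver_is_braking,
                      ttc_threshold_s=1.5):
    """Intervene only if the human hasn't braked and a collision is imminent."""
    ttc = time_to_collision(distance_m, closing_speed_mps)
    return (not driver_is_braking) and ttc < ttc_threshold_s

# Example: obstacle 20 m ahead, closing at 15 m/s (~34 mph), driver not braking.
print(should_auto_brake(20.0, 15.0, driver_is_braking=False))   # True: TTC is about 1.3 s
```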

*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring the human drivers to keep their hands on the steering wheels and their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.

In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.

“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”

UNCLEAR (but probably WRONG)

The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?

A man on a flying skateboard participated in France’s 2019 Bastille Day military parade. The device counts as a “personal flying vehicle,” but it is impractical and very dangerous to use. It can travel about five miles in 10 minutes on one full tank of fuel, and can take off and land almost anywhere. Is it “efficient”?

What is a “personal flying vehicle”? A flying car, which is capable of flight through the air and horizontal movement over roads, or a vehicle that is capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?

But even if we had answers to those questions, it wouldn’t matter much, because “have been demonstrated” is an escape hatch that lets Kurzweil claim at least some measure of correctness: the prediction counts as true if just two prototypes of personal flying vehicles have been built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that would have made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.

Diagram showing what a “Gurney flap” / “microflap” is.

At least the prediction contains one, well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. They are fixed in one position, and in that configuration are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.

“There are very few transportation accidents.”

WRONG

Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.

In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.

“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”

WRONG

As I noted in part 1 of this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.

Aaaaaand that’s it for now. I originally estimated that this project to analyze all of Ray Kurzweil’s 2019 predictions could be spread out over three blog entries, but it has taken even more time and effort than I anticipated, and I need one more. Stay tuned, the fourth AND FINAL installment is coming soon!

Links:

  1. A 2018 survey found that most American adults spent an average of 24-41 minutes per day on phone calls. The survey didn’t break that number out into traditional voice-only calls and video calls.
    https://www.zdnet.com/article/americans-spend-far-more-time-on-their-smartphones-than-they-think/
  2. Another 2018 survey commissioned by the telecom company Vonage found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which strongly aligns with my everyday observations.
    https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/
  3. A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible.
    https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf
  4. The “Varjo VR-1” virtual reality goggles cost $6,000 and can display lifelike images at the centers of their screens.
    https://www.cnet.com/news/the-best-vr-display-ive-ever-seen-varjo-vr-1-costs-6000/
  5. A roundup of the top ten speech-to-speech language translation apps of 2019.
    https://www.daytranslations.com/blog/top-10-free-language-translation-apps/
  6. A 2018 study found that the best English-Mandarin machine translation programs were inferior to professional human translators.
    https://www.technologyreview.com/2018/09/05/140487/human-translators-are-still-on-top-for-now/
  7. The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device.
    https://www.cnet.com/reviews/oculus-go-review/
  8. As this 2019 article makes clear, virtual haptic technology is far less advanced than Kurzweil predicted it would be.
    https://www.scientificamerican.com/article/new-virtual-reality-interface-enables-touch-across-long-distances/
  9. An account of a firsthand experience with cutting-edge (no pun intended) teledildonics in 2018:
    https://www.engadget.com/2018-07-02-flirt4free-teledildonics-long-distance-sex.html
  10. A 2019 analysis shows that the vast majority of transactions in the U.S. are still done face-to-face between humans, but e-commerce’s share is steadily growing.
    https://www.digitalcommerce360.com/article/us-ecommerce-sales/
  11. A roundup of the highest-rated robot vacuum cleaners of 2019:
    https://www.techhive.com/article/3388038/best-robot-vacuums-on-amazon.html
  12. A list of advanced car safety features from 2019:
    https://www.caranddriver.com/features/g27612164/car-safety-features/
  13. Tesla Autopilot is capable of Level 3 autonomous driving. However, out of an abundance of caution (e.g. – just one accident generates enormous bad publicity), the company has installed features that cap it at Level 2.
    https://electrek.co/2019/09/19/tesla-autopilot-v10-commute-without-driver-intervention/
  14. French inventor Franky Zapata designed a flying skateboard called the “Flyboard Air,” and used it to cross the English Channel and wow crowds during the 2019 Bastille Day military parade.
    https://www.theverge.com/2019/8/4/20753648/jet-powered-hoverboard-english-channel-crossing-franky-zapata-success
  15. These World Health Organization reports show that deadly road accidents were about as common in 2016 as they were in 2000. It’s still a leading cause of death.
    https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death
    https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf?ua=1
  16. The CDC reported that 43,024 people died in the U.S. in 2017 of “Transport accidents.” Only 1,718 of those did not involve road vehicles.
    https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_09_tables-508.pdf

Interesting articles, October 2020

‘”I don’t think Britain could have won the Falklands conflict without GCHQ,” Prof Ferris told the BBC. He said because GCHQ was able to intercept and break Argentine messages, British commanders were able to know within hours what orders were being given to their opponents, which offered a major advantage in the battle at sea and in retaking the islands.’
https://www.bbc.com/news/uk-54604895

China makes an “OK” tank for export, called the “VT-4.” I wonder if it will finally replace all the tens of thousands of Cold War-era Soviet tanks still in circulation in the Third World.
https://nationalinterest.org/blog/reboot/will-chinas-vt-4-tank-become-global-export-success-168233

The Indian Air Force has accepted its first few Rafale fighter planes.
https://www.janes.com/defence-news/news-detail/indian-air-force-formally-inducts-first-five-rafale-fighter-aircraft

‘Second, the violence in Ladakh has also allowed Beijing to examine the degree of coordination that exists within the Indo-US strategic partnership. As Indian and Chinese soldiers clashed with medieval-style weapons in the Galwan Valley, Beijing paid close attention to how the United States reacted.’
https://www.9dashline.com/article/india-china-rivalry-towards-a-two-front-war-in-the-himalayas

For the first time, China’s two aircraft carriers operated together for a military exercise.
https://www.globaltimes.cn/content/1200053.shtml

‘This August, for instance, the U.S. nuclear-powered carrier Ronald Reagan cruised in company with the Japan Maritime Self-Defense Force destroyer Ikazuchi in the Philippine Sea. Indeed, Japan’s surface fleet is organized into “escort flotillas” precisely to support U.S.-Japanese combat operations.’
https://nationalinterest.org/blog/buzz/royal-navy-and-us-navy-are-embracing-interchangeability-could-it-backfire-171371

Warships need near-constant maintenance to stay at sea. Keeping the hull from rusting is an ongoing task, along with watching out for and fixing small leaks inside the ship. This means that, even on 100% automated ships, there will need to be mobile robots that can climb all over the exterior and interior spaces to scrub, paint, and dry surfaces. They would probably also repair damage caused by combat or accidents. A big difference between “robot crewmen” and humans is that the former won’t need much in the way of self-support infrastructure inside the ship: there won’t need to be bathrooms, kitchens, laundries, rec rooms, bunks, mail rooms, etc. The robots would probably spend all their time at their posts, like you spending your whole life at your work desk, never needing to sleep. This means automated ships could be smaller, simpler, and cheaper than manned ships without sacrificing any firepower, speed, or other capabilities. And in spite of considerable design differences, automated ships would still have internal spaces like rooms and hallways. If you went inside, you’d see robots of some kind moving around, doing tasks.
https://www.thedrive.com/the-war-zone/37094/check-out-how-rusty-and-battered-uss-stout-looks-after-spending-a-record-215-days-at-sea

The U.S. Army is spending $39.7 million to buy helicopter “nano-drones” that have heat-vision.
https://www.nationaldefensemagazine.org/articles/2020/6/17/flir-systems-awarded-contract-for-nano-drones

‘The μINS is the world’s smallest sensor module of its kind—approximately the size of 3 stacked US dimes. It provides high-quality direction, position, and velocity data for multiple applications by intelligently fusing sensor data from GPS (GNSS), gyros, accelerometers, magnetometers, and a barometric pressure sensor.’
https://insideunmannedsystems.com/worlds-smallest-better-gps-inertial-navigation-system-now-available/

Atlanta police used a helicopter drone to enter an apartment and arrest a murder suspect. The drone’s footage is here: https://www.news.com.au/national/atlanta-police-use-drone-in-arrest-of-suspect-in-actor-thomas-jefferson-byrds-killing/video/a5c7a96e96110bb77c78d1ec3449ec57

In India, a couple gave birth to a boy who had a fatal genetic defect involving his blood. After learning that a bone marrow transplant could permanently cure him, the couple used IVF to create a second child that would be genetically similar enough to the son to serve as a marrow donor. They didn’t want to have the new child for any reason other than to save the first. They gestated the new child–a daughter–and transplanted some of her bone marrow, curing the son. Additionally, to ensure the daughter didn’t carry the same bone marrow defect that the son had, the couple did genetic testing on her while she was still an embryo. This technique, called “preimplantation genetic diagnosis,” is only one step down from genetic engineering. The ethics of this case are indeed questionable.
https://www.bbc.com/news/world-asia-india-54658007

By looking at a person’s genome, we can now guess their height with +/- 4 cm accuracy.
https://www.biorxiv.org/content/10.1101/190124v1

Genetics might explain why men are both more likely to be homeless and more likely to be rich than women.
https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.25204

‘The successful cloning of DNA collected 40 years ago is meant to introduce key genetic diversity into the species that could benefit its survival. The zoo said the cloned Przewalski’s horse will eventually be transferred to the San Diego Zoo Safari Park and integrated into a herd of other Przewalski’s horses for breeding.’
https://time.com/5886467/clone-endangered-przewalskis-horse-zoo/

Do all cells have tiny, organic computers in them?
https://arxiv.org/ftp/arxiv/papers/2008/2008.08814.pdf

Because of the twisted ways in which our cells develop at the embryonic stage, the average person’s facial features are slightly shifted to the left side of his head.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6557252/

A computer simulation suggests that geographical differences caused the rise of many ethnicities and small countries in Europe, while a single ethnic group and country grew to encompass the vast area today known as China. Mountains, peninsulas, islands, and deserts are barriers to human movement and settlement.
https://www.youtube.com/watch?v=bbGOXnElJeU

A team of aerospace engineers at KLM flew a small-scale model of the interesting-looking “Flying-V” plane.
https://www.dailymail.co.uk/sciencetech/article-8705193/Flying-V-aeroplane-model-gets-test-flight.html

China just launched a copy of America’s secret X-37B space plane.
https://www.thedrive.com/the-war-zone/36202/u-s-confirms-china-has-launched-what-could-be-its-version-of-x-37b-spaceplane

U.S. authorities have approved the first small modular reactor for use.
https://arstechnica.com/science/2020/09/first-modular-nuclear-reactor-design-certified-in-the-us/

Solar power is cheaper and has a brighter future (pun) than ever!
https://www.iea.org/reports/world-energy-outlook-2020

On the set of the sci-fi show The Mandalorian, green screens have been replaced with gigantic wraparound TV screens that display high-def footage. The footage of them being manipulated by special effects crewmen is trippy. (I’ve predicted devices like this will become common in U.S. households in the 2030s.)
https://www.youtube.com/watch?v=Ufp8weYYDE8&feature=emb_title

‘No software is yet producing “Whoa, look at that” [chemical] syntheses. But let’s be honest: most humans aren’t, either.’
https://blogs.sciencemag.org/pipeline/archives/2020/10/20/the-machines-rise-a-bit-more

Scientists are finding new ways to make bulk quantities of the mind-altering chemicals found in magic mushrooms.
http://www.sciencedirect.com/science/article/pii/S109671761930401X

Even more importantly, a guy in British Columbia built a working mech warrior in his backyard.
https://www.cbc.ca/news/canada/british-columbia/giant-mechanized-exoskeleton-now-ready-for-pilot-trainees-1.5710431

Using “deepfake” technology, an app can convert images of clothed women into simulated nude images. I don’t have the app, so I can’t say how convincing the results are, but it will become more refined and will lead to another of my predictions coming true this decade.
https://www.cnet.com/news/a-deepfake-bot-on-telegram-is-violating-women-by-forging-nudes-from-regular-pics/

My prediction: ‘[By 2030] “Deepfake” pornography will reach new levels of sophistication and perversion as it becomes possible to seamlessly graft the heads of real people onto still photos and videos of nude bodies that closely match the physiques of the actual people. New technology for doing this will let amateurs make high-quality deepfakes, meaning any person could be targeted. It will even become possible to wear AR glasses that interpolate nude, virtual bodies over the bodies of real people in the wearer’s field of view to provide a sort of fake “X-ray-vision.”’
https://www.militantfuturist.com/my-future-predictions-2020-iteration/

Disney made a wonderful, horrifying android that has human-like eye movements and gazes. (To be fair to Disney, human faces also look frightening without their skin.)
https://www.youtube.com/watch?v=D8_VmWWRJgE

Three months ago, economist Robert Reich made this (totally failed) prediction: “Brace yourself. The wave of evictions and foreclosures in next 2 months will be unlike anything America has experienced since the Great Depression. And unless Congress extends extra unemployment benefits beyond July 31, we’re also going to have unparalleled hunger.”
https://twitter.com/RBReich/status/1277641135368724483

This Icelandic study finds COVID-19 has a 0.3% fatality rate, which is close to estimates from other countries.
http://www.nejm.org/doi/10.1056/NEJMoa2026116

A disease model that has accurately predicted COVID-19 deaths so far now forecasts up to 410,000 U.S. deaths by the end of 2020. Some epidemiologists think it’s too pessimistic.
https://www.npr.org/sections/goatsandsoda/2020/09/04/909783162/new-global-coronavirus-death-forecast-is-chilling-and-controversial

Another British COVID-19 prediction falls flat.
From September 22: “If, and that’s quite a big if, but if that continues unabated and this grows doubling every seven days… if that continued you would end up with something like 50,000 cases in the middle of October per day.”
Reality? In mid-October, Britain is having around 16,000 new cases per day.
https://www.thesun.co.uk/news/12734219/experts-blast-50k-covid-cases-day-october-france-spain/
https://www.bbc.com/news/uk-51768274

…and another.
‘Researchers in Singapore said that there will be no more cases of the deadly bug in the UK by September 30.’
https://www.thesun.co.uk/news/11693720/coronavirus-study-predicts-date-uk-will-have-no-cases/

Mexico’s COVID-19 death count is probably twice as high as originally reported.
https://www.reuters.com/article/us-health-coronavirus-mexico-excessdeath-idUSKBN25X00K

In spite of enormous hype and billions of dollars spent, we still haven’t found drugs that are effective against COVID-19.
https://blogs.sciencemag.org/pipeline/archives/2020/10/27/more-antibody-data
https://blogs.sciencemag.org/pipeline/archives/2020/10/16/the-solidarity-data

How Ray Kurzweil’s 2019 predictions are faring (pt 2)

This is the second entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My first entry on this subject can be found here.

“Hand-held displays are extremely thin, very high resolution, and weigh only ounces.”

RIGHT

The Samsung Galaxy Tab S5 is, by any reasonable account, extremely thin and very high resolution, and it weighs ounces. New, it costs less than $500, making it affordable for millions of average people. There are even better tablet computers than this.

The tablet computers and smartphones of 2019 meet these criteria. For example, the Samsung Galaxy Tab S5 is only 0.22″ thick, has a resolution (3840 x 2160 pixels) high enough that the human eye can’t discern individual pixels at normal viewing distances, and weighs 14 ounces (under a pound, so ounces is the appropriate unit of measurement). Tablets like this are of course meant to be held in the hands during use.
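
The “can’t discern individual pixels” claim can be sanity-checked with a little geometry: compute the display’s pixels per degree of visual angle at a typical viewing distance and compare it to the roughly 60 pixels per degree that corresponds to 20/20 acuity. The screen diagonal and viewing distance in this Python sketch are my own illustrative assumptions, not the tablet’s official specifications.

```python
# Rough check of whether a handheld display exceeds ~60 pixels per degree of visual angle.
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density from resolution and screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

def pixels_per_degree(ppi, viewing_distance_in):
    """Pixels subtended by one degree of visual angle at the given distance."""
    return ppi * viewing_distance_in * math.tan(math.radians(1))

ppi = pixels_per_inch(3840, 2160, 10.5)                 # resolution quoted above; 10.5" panel assumed
ppd = pixels_per_degree(ppi, viewing_distance_in=16)    # ~40 cm viewing distance assumed
print(round(ppi), "ppi ->", round(ppd), "pixels per degree (20/20 threshold is ~60)")
```

At those assumed numbers the tablet comes out at roughly double the 60-pixels-per-degree threshold, which supports the claim.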

The smartphones of 2019 also meet Kurzweil’s criteria.

“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.

MOSTLY WRONG

A careful reading of this prediction makes it clear that Kurzweil believed AR glasses would be the commonest way people would read text documents by late 2019. The second most common method would be to read the documents off of smartphones and tablet computers. A distant last place would be to read old-fashioned books with paper pages. (Presumably, reading text off of a laptop or desktop PC monitor was somewhere between the last two.)

The first part of the prediction is badly wrong. At the end of 2019, there were fewer than 1 million sets of AR glasses in use around the world. Even if all of their owners were bibliophiles who spent all their waking hours using their glasses to read documents that were projected in front of them, it would be mathematically impossible for that to constitute the #1 means by which the human race, in aggregate, read written words.

The bar chart shows yearly sales of paper books in the U.S. Sales declined in the early 2010s due to the debut of e-readers and smartphones, but then they recovered a great deal. Books aren’t dead.

Certainly, it is now much more common for people to read documents on handheld displays like smartphones and tablets than at any time in the past, and paper’s dominance of the written medium is declining. Additionally, there are surely millions of Americans who, like me, do the vast majority of their reading (whether for leisure or work) off of electronic devices and computer screens. However, old-fashioned print books, newspapers, magazines, and packets of workplace documents are far from extinct, and it is inaccurate to claim they “are rarely used or accessed,” both in the relative and absolute senses of the statement. As the bar chart above shows, sales of print books were actually slightly higher in 2019 than they were in 2004, which was near the time when The Age of Spiritual Machines was published.

Sales of “graphic paper” have dropped in rich countries over the last 20 years and will also start dropping in poor countries soon.

Finally, sales of “graphic paper”–which is an industry term for paper used in newsprint, magazines, office printer paper, and other common applications–were still high in 2019, even if they were trending down. If 110 million metric tons of graphic paper were sold in 2019, then it can’t be said that “Paper books and documents are rarely used or accessed.” Anecdotally, I will say that, though my office primarily uses all-digital documents, it is still common to use paper documents, and in fact it is sometimes preferable to do so.

Most twentieth-century paper documents of interest have been scanned and are available through the wireless network.”

RIGHT

The wording again makes it impossible to gauge the prediction’s accuracy. What counts as a “paper document”? For sure, we can say it includes bestselling books, newspapers of record, and leading science journals, but what about books that only sold a few thousand copies, small-town newspapers, and third-tier science journals? Are we also counting the mountains of government reports produced and published worldwide in the last century, mostly by obscure agencies and about narrow, bland topics? Equally defensible answers could result in document numbers that are orders of magnitude different.

Also, the term “of interest” provides Kurzweil with an escape hatch because its meaning is subjective. If it were the case that electronic scans of 99% of the books published in the twentieth century were NOT available on the internet in 2019, he could just say “Well, that’s because those books aren’t of interest to modern people” and he could then claim he was right.

It would have been much better if the prediction included a specific metric, like: “By the end of 2019, electronic versions of at least 1 million full-length books written in the twentieth century will be available through the wireless network.” Alas, it doesn’t, and Kurzweil gets this one right on a technicality.

For what it’s worth, I think the prediction was also right in spirit. Millions of books are now available to read online, and that number includes most of the 20th century books that people in 2019 consider important or interesting. One of the biggest repositories of e-books, the “Internet Archive,” has 3.8 million scanned books, and they’re free to view. (Google actually scanned 25 million books with the intent to create something like its own virtual library, but lawsuits from book publishers have put the project into abeyance.)

The New York Times, America’s newspaper of record, has made scans of every one of its issues since its founding in 1851 available online, as have other major newspapers such as the Washington Post. The cursory research I’ve done suggests that all or almost all issues of the biggest American newspapers are now available online, either through company websites or third party sites like newspapers.com.

The U.S. National Archives has scanned over 92 million pages of government documents, and made them available online. Primacy was given to scanning documents that were most requested by researchers and members of the public, so it could easily be the case that most twentieth-century U.S. government paper documents of interest have been scanned. Additionally, in two years the Archives will start requiring all U.S. agencies to submit ONLY digital records, eliminating the very cumbersome middle step of scanning paper, and thenceforth ensuring that government records become available to and easily searchable by the public right away.

The New England Journal of Medicine, the journal Science, and the journal Nature all offer scans of past issues dating back to their foundings in the 1800s. I lack the time to check whether this is also true for other prestigious academic journals, but I strongly suspect it is. All of the seminal papers documenting the significant scientific discoveries of the 20th century are now available online.

Without a doubt, the internet and a lot of diligent people scanning old books and papers have improved the public’s access to written documents and information by orders of magnitude compared to 1998. It truly is a different world.

“Most learning is accomplished using intelligent software-based simulated teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The teachers are viewed more as mentors and counselors than as sources of learning and knowledge.”

WRONG*

The technology behind and popularity of online learning and AI teachers didn’t advance as fast as Kurzweil predicted. At the end of 2019, traditional in-person instruction was far more common than and was widely considered to be superior to online learning, though the latter had niche advantages.

However, shortly after 2019 ended, the COVID-19 pandemic forced most of the world into quarantine in an effort to slow the virus’ spread. Schools, workplaces, and most other places where people usually gathered were shut down, and people the world over were forced to do everyday activities remotely. American schools and universities switched to online classrooms in what might be looked at as the greatest social experiment of the decade. For better or worse, most human teachers were no longer in the local vicinity of their students.

Thus, part of Kurzweil’s prediction came true, a few months late and as an unwelcome emergency measure rather than as a voluntary embrace of a new educational paradigm. Unfortunately, student reactions to online learning have been mostly negative. A 2020 survey found that most college students believed it was harder to absorb knowledge and to learn new skills through online classrooms than it was through in-person instruction. Almost all of them unsurprisingly said that traditional classroom environments were more useful for developing social skills. The survey data I found on the attitudes of high school students showed that most of them considered distance learning to be of inferior quality. Public school teachers and administrators across the country reported higher rates of student absenteeism when schools switched to 100% online instruction, and their support for it measurably dropped as time passed.

The COVID-19 lockdowns have made us confront hard truths about virtual learning. It hasn’t been the unalloyed good that Kurzweil seems to have expected, though technological improvements that make the experience more immersive (e.g., faster internet to reduce lag, virtual reality headsets) will surely solve some of the problems that have come to light.

“Students continue to gather together to exchange ideas and to socialize, although even this gathering is often physically and geographically remote.”

RIGHT

As I described at length, traditional in-person classroom instruction remained the dominant educational paradigm in late 2019, which of course means that students routinely gathered together for learning and socializing. The second part of the prediction is also right, since social media, cheaper and better computing devices and internet service, and videophone apps have made it much more common for students of all ages to study, work, and socialize together virtually than they did in 1998.

“All students use computation. Computation in general is everywhere, so a student’s not having a computer is rarely an issue.”

MOSTLY RIGHT

First, Kurzweil’s use of “all” was clearly figurative and not literal. If pressed on this back in 1998, surely he would have conceded that even in 2019, students living in Amish communities, living under strict parents who were paranoid technophobes, or living in the poorest slums of the poorest or most war-wrecked country would not have access to computing devices that had any relevance to their schooling.

Second, note the use of “computation” and “computer,” which are very broad in meaning. As I wrote in the first part of this analysis, “A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is…something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer.”

With these two caveats in mind, it’s clear that “all students use computation” by default since all people except those in the most deprived environments routinely interact with computing devices. It is also true that “computation in general is everywhere,” and the prediction merely restates this earlier prediction: “Computers are now largely invisible. They are embedded everywhere…” In the most literal sense, most of the prediction is correct.

However, a judgement is harder to make if we consider whether the spirit of the prediction has been fulfilled. In context, the prediction's use of "computation" and "computer" surely refers to devices that let students efficiently study materials, watch instructional videos, and do complex school assignments like writing essays and completing math equations. These devices would have also required internet access to perform some of those key functions. At least in the U.S., virtually all schools in late 2019 had computer terminals with speedy internet access that students could use for free. A school without either of those would have been considered very unusual. Likewise, almost all of the country's public libraries have public computer terminals and internet service (and, of course, books), which people can use for their studies and coursework if they don't have computers or internet in their homes.

At the same time, 17% of students in the U.S. still don’t have computers in their homes and 18% have no internet access or very slow service (there’s probably large overlap between people in those two groups). Mostly this is because they live in remote areas where it isn’t profitable for telecom companies to install high-speed internet lines, or because they belong to extremely poor or disorganized households. This lack of access to computers and internet service results in measurably worse academic performance, a phenomenon called the “homework gap” or the “digital gap.” With this in mind, it’s questionable whether the prediction’s last claim, that “a student’s not having a computer is rarely an issue” has come true.

“Most adult human workers spend the majority of their time acquiring new skills and knowledge.”

WRONG

This is so obviously wrong that I don’t need to present any data or studies to support my judgement. With a tiny number of exceptions, employed adults spend most of their time at work using the same skills over and over to do the same set of tasks. Yes, today’s jobs are more knowledge-based and technology-based than ever before, and a greater share of jobs require formal degrees and training certificates than ever, but few professions are so complex or fast-changing that workers need to spend most of their time learning new skills and knowledge to keep up.

In fact, since the Age of Spiritual Machines was published, a backlash against the high costs and necessity of postsecondary education–at least as it is in America–has arisen. Sentiment is growing that the four-year college degree model is wasteful, obsolete for most purposes, and leaves young adults saddled with debts that take years to repay. Sadly, I doubt these critics will succeed in bringing about serious reforms to the system.

If and when we reach the point where a postsecondary degree is needed just to get a respectable entry-level job, and then merely keeping that job or moving up to the next rung on the career ladder requires workers to spend more than half their time learning new skills and knowledge–whether due to competition from machines that keep getting better and taking over jobs or due to the frequent introductions of new technologies that human workers must learn to use–then I predict a large share of humans will become chronically demoralized and will drop out of the workforce. This is a phenomenon I call "job automation escape velocity," and I intend to discuss it at length in a future blog post.

“Blind persons routinely use eyeglass-mounted reading-navigation systems, which incorporate the new, digitally controlled, high-resolution optical sensors. These systems can read text in the real world, although since most print is now electronic, print-to-speech reading is less of a requirement. The navigation function of these systems, which emerged about ten years ago, is now perfected. These automated reading-navigation assistants communicate to blind users through both speech and tactile indicators. These systems are also widely used by sighted persons since they provide a high-resolution interpretation of the visual world.”

PARTLY RIGHT

As stated previously, AR glasses have not yet been successful on the commercial market and are used by almost no one, blind or sighted. However, there are smartphone apps meant for blind people that use the phone’s camera to scan what is in front of the person, and they have the range of functions Kurzweil described. For example, the “Seeing AI” app can recognize text and read it out loud to the user, and can recognize common objects and familiar people and verbally describe or name them.

Additionally, there are other smartphone apps, such as “BlindSquare,” which use GPS and detailed verbal instructions to guide blind people to destinations. It also describes nearby businesses and points of interest, and can warn users of nearby curbs and stairs.

Apps that are made specifically for blind people are not in wide usage among sighted people.

“Retinal and vision neural implants have emerged but have limitations and are used by only a small percentage of blind persons.”

MOSTLY RIGHT

Retinal implants exist and can restore limited vision to people with certain types of blindness. However, they provide only a very coarse level of sight, are expensive, and require the use of body-worn accessories to collect, process, and transmit visual data to the eye implant itself. The “Argus II” device is the only retinal implant system available in the U.S., and the FDA approved it in 2013. As of this writing, the manufacturer’s website claimed that only 350 blind people worldwide used the systems, which indeed counts as “only a small percentage of blind persons.”

The “Argus II” system consists of an electronic device surgically implanted in a person’s retina which receives vision data from externally-worn camera glasses and a data processing unit.

The meaning of “vision neural implants” is unclear, but could only refer to devices that connect directly to a blind person’s optic nerve or brain vision cortex. While some human medical trials are underway, none of the implants have been approved for general use, nor does that look poised to change.

“Deaf persons routinely read what other people are saying through the deaf persons’ lens displays.”

MOSTLY WRONG

“Lens displays” is clearly referring to those inside augmented reality glasses and AR contact lenses, so the prediction says that a person wearing such eyewear would be able to see speech subtitles across his or her field of vision. While there is at least one model of AR glasses–the Vuzix Blade–that has this capability, almost no one uses them because, as I explored in part 1 of this review, AR glasses failed on the commercial market. By extension, this means the prediction also failed to come true since it specified that deaf people would “routinely” wear AR glasses by 2019.

A person wearing Vuzix Blade glasses can download the “Zoi Meet” app into the device and have subtitles of spoken words displayed across their field of vision.

However, in the prediction’s defense, deaf people commonly use real-time speech-to-text apps on their smartphones. While not as convenient as having captions displayed across one’s field of view, it still makes communication with non-deaf people who don’t know sign language much easier. Google, Apple, and many other tech companies have fielded high-quality apps of this nature, some of which are free to download. Deaf people can also type words into their smartphones and show them to people who can’t understand sign language, which is easier than the old-fashioned method of writing things down on notepad pages and slips of paper.

Additionally, video chat / video phone technology is widespread and has been a boon to deaf people. By allowing callers to see each other, video calls let deaf people remotely communicate with each other through sign language, facial expressions and body movements, letting them experience levels of nuanced dialog that older text-based messaging systems couldn’t convey. Video chat apps are free or low-cost, and can deliver high-quality streaming video, and the apps can be used even on small devices like smartphones thanks to their forward-facing cameras.

In conclusion, while the specifics of the prediction were wrong, the general sentiment that new technologies, specifically portable devices, would greatly benefit deaf people was right. Smartphones, high-speed internet, and cheap webcams have made deaf people far more empowered in 2019 than they were in 1998.

“There are systems that provide visual and tactile interpretations of other auditory experiences such as music, but there is debate regarding the extent to which these systems provide an experience comparable to that of a hearing person.”

RIGHT

There is an Apple phone app called "BW Dance" meant for the deaf that converts songs into flashing lights and vibrations that are said to approximate the notes of the music. However, there is little information about the app and it isn't popular, which makes me think deaf people have not found it worthy of buying or talking about. Though the app is apparently unsuccessful, its existence meets all the prediction's criteria. The prediction says nothing about whether the "systems" will be popular among deaf people by 2019–it just says the systems will exist.

The “Not Impossible” music suit.

That’s probably an unsatisfying answer, so let me mention some additional research findings. A company called “Not Impossible Labs” sells body suits designed for deaf people that convert songs into complex patterns of vibrations transmitted into the wearer’s body through 24 different touch points. The suits are well-reviewed, and it’s easy to believe that they’d provide a much richer sensory experience than a buzzing smartphone with the BW Dance app would. However, the suits lack any sort of displays, meaning they don’t meet the criterion of providing users a visual interpretation of songs.

There are many “music visualization” apps that create patterns of shapes, colors, and lines to convey the musical structures of songs, and some deaf people report they are useful in that role. It would probably be easy to combine a vibrating body suit with AR glasses to provide wearers with immersive “visual and tactile interpretations” of music. The technology exists, but the commercial demand does not.

“Cochlear and other implants for improving hearing are very effective and are widely used.”

RIGHT

Since receiving FDA approval in 1984, cochlear implants have significantly improved in quality and have become much more common among deaf people. While the level of benefit widely varies from one user to another, the average user ends up hearing well enough to carry on a phone conversation in a quiet room. That means cochlear implants are "very effective" for most people who use them, since the alternative is usually having no sense of hearing at all. Cochlear implants are in fact so effective that they've spurred fears among deaf people that they will eradicate the Deaf culture and end the use of sign language, leading some deaf people to reject the devices even though their senses would benefit.

Cochlear implants provide increasing benefits to users as their technology improves.
Cochlear implant sales have been increasing in the U.S. as more deaf people have the devices installed. Some deaf people fear the technology will make their culture extinct.

Other types of implants for improving hearing also exist, including middle ear implants, bone-anchored hearing aids, and auditory brainstem implants. While some of these alternatives are better suited for people with certain hearing impairments, they haven't had the same impact on the Deaf community as cochlear implants.

“Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.”

WRONG

Paraplegics and quadriplegics use the same wheelchairs they did in 1998, and they can only traverse stairs that have electronic lift systems. As noted in my Prometheus review, powered exoskeletons exist today, but almost no one uses them, probably due to very high costs and practical problems. Some rehabilitation clinics for people with spinal cord and leg injuries use therapeutic techniques in which electrodes attached to the disabled person's legs and spine fire in sequences that help the person walk, but these nerve and muscle stimulation devices aren't used outside of those controlled settings. To my knowledge, no one has built the sort of prosthesis that Kurzweil envisioned, which was a powered exoskeleton that also had electrodes connected to the wearer's body to stimulate leg muscle movements.

“Generally, disabilities such as blindness, deafness, and paraplegia are not noticeable and are not regarded as significant.”

WRONG (sadly)

As noted, technology has not improved the lives of disabled people as much as Kurzweil predicted it would between 1998 and 2019. Blind people still need to use walking canes, most deaf people don't have hearing implants of any sort (and if they do, their hearing is still much worse than average), and paraplegics still use wheelchairs. Their disabilities are often noticeable at a glance, and always after a few moments of face-to-face interaction.

Blindness, deafness, and paraplegia still have many significant negative impacts on people afflicted with them. As just one example, employment rates and average incomes for working-age people with those infirmities are all lower than they are for people without. In 2019, the U.S. Social Security program still viewed those conditions as disabilities and paid welfare benefits to people with them.

Links:

  1. There were fewer than 1 million augmented reality glasses in the world at the end of 2019. https://arinsider.co/2019/09/11/5-million-ar-headsets-by-2023/
  2. Sales of print books in 2017 were not much different from what they probably were in 1999, when the Age of Spiritual Machines was published. https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/75735-sales-of-print-books-increased-slightly-in-2017.html
  3. Sales figures for “graphic paper” prove that, while paper books, newspapers, and office documents are declining, they aren’t “dead” or even “uncommon” yet. https://www.mckinsey.com/industries/paper-forest-products-and-packaging/our-insights/graphic-paper-producers-boosting-resilience-amid-the-covid-19-crisis
  4. The “Internet Archive” has scans of 3.8 million books, and is growing. https://www.pcmag.com/news/the-internet-archive-is-linking-digital-books-to-wikipedia-citations
  5. By late 2019, the U.S. National Archives had put 92 million pages of government documents on its website, free for anyone to view. https://narations.blogs.archives.gov/2019/10/02/naras-record-group-explorer-a-new-path-into-naras-holdings/
  6. The 2020 report COVID-19 on Campus found that most U.S. college students found online instruction an inferior way to learn compared to traditional classroom instruction.
    https://marketplace.collegepulse.com/img/covid19oncampus_ckf_cp_final.pdf
  7. Another 2020 survey of U.S. teenagers found that most of them considered online learning to be less effective than in-person classes.
    https://www.surveymonkey.com/curiosity/common-sense-media-school-reopening/
  8. A 2020 survey of U.S. teachers and school administrators found that student absenteeism rates climbed thanks to the introduction of online classes.
    https://www.edweek.org/ew/articles/2020/10/15/in-person-learning-expands-student-absences-up-teachers.html
  9. A U.S. Census survey found in 2019 that 17% of students didn’t have computers in their homes and 18% had no internet access or very slow service.
    https://apnews.com/article/7f263b8f7d3a43d6be014f860d5e4132
  10. The “Seeing AI” smartphone app uses the device’s camera to recognize text, objects and people and to read, describe, or name them out loud. Blind users have highly reviewed it.
    https://apps.apple.com/us/app/seeing-ai/id999062298#see-all/reviews
  11. The “BlindSquare” smartphone app provides voice-based GPS navigation to users, and is also highly reviewed by blind people.
    https://apps.apple.com/us/app/blindsquare/id500557255#see-all/reviews
  12. The FDA approves the “Argus II” retinal implant system for the blind in 2013.
    https://www.nature.com/news/fda-approves-first-retinal-implant-1.12439
  13. In 2019, an app called “Zoi Meet” was developed for the Vuzix Blade AR glasses. The app produces real-time subtitles of spoken words, displayed across the wearer’s field of vision.
    https://www.vuzix.com/Blog/vuzix-blade-real-time-language-transcription-zoi-meet
  14. In 2019, there were many smartphone apps that helped deaf people to communicate with hearing people.
    https://www.meriahnichols.com/best-deaf-apps/
    https://abilitynet.org.uk/news-blogs/9-useful-apps-people-who-are-deaf-or-have-hearing-loss
  15. “Glide” is a popular video phone app among deaf people.
    https://www.fastcompany.com/3054050/how-video-chat-app-glide-got-deaf-people-talking
  16. "BW Dance" is an app that converts songs into patterns of vibrations and flashing lights that deaf people can experience.
    https://www.producthunt.com/posts/bw-dance
  17. “Not Impossible Labs” makes body suits that allow deaf people to experience music in the form of complex patterns of vibrations.
    https://www.billboard.com/articles/news/8476553/not-impossible-labs-live-music-deaf
  18. Cochlear implants have gotten better and more common among deaf people as time has passed.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111484/
  19. U.S. sales growth of cochlear implants is projected to continue.
    https://www.grandviewresearch.com/industry-analysis/cochlear-implants-industry
  20. Aside from cochlear implants, middle ear implants, auditory brainstem implants, and bone-anchored hearing aids can amplify or restore hearing.
    https://www.bcig.org.uk/cochlear-implant-devices/implantable-devices/
  21. People who are blind, or deaf, or who have serious spinal cord damage are less likely to have jobs and also make less money than people who don’t have those conditions.
    https://www.afb.org/research-and-initiatives/employment/reviewing-disability-employment-research-people-blind-visually
    https://www.nationaldeafcenter.org/news/employment-report-shows-strong-labor-market-passing-deaf-americans
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2792457/

Interesting articles, September 2020

More bad news for the once-famed surgeon who made a name for himself transplanting tracheas grown with stem cells into terminally ill people.
https://apnews.com/article/international-news-sweden-bjork-stockholm-paolo-macchiarini-1baeaacd9ad2d19a07acd423d68be3bd

The first person ever cured of HIV just died of cancer. In the end, something will get you…unless maybe you’re an AI with a highly distributed and redundant consciousness.
https://apnews.com/article/berlin-california-archive-palm-springs-67706de65ced0f5bcb7859c34cd51f5a

In heavily inbred families, just “one generation of outbreeding can mask the deleterious alleles immediately.”
https://www.gnxp.com/WordPress/2007/10/17/the-samaritans-it-s-endogamy-not-cousin-marriage-per-se/

Bird brains are radically different from mammalian brains, but produce similar levels of intelligent thought. Bird brains might actually be superior since they are made of smaller, more densely-packed neurons, meaning a bird would be smarter than a mammal whose brain had the same volume. Hundreds of years from now, “humans” might have denser brains and smarter minds thanks to radical genetic engineering that takes inspiration from other organisms.
https://science.sciencemag.org/content/369/6511/1567

In 1991, Joe Biden predicted that “[By the year 2020] I’ll be dead and gone in all probability.”
Three months remain in this year so…
https://youtu.be/i4TuxvhoMs4

Using genetic engineering, scientists were able to transplant sperm-producing stem cells from one male farm animal into a sterile male of the same species, so that the recipient male produced the same sperm as the donor male. This could make it cheaper and easier to breed prized farm animals by using genetically inferior males as "surrogate fathers" for their offspring, and it could let us resurrect extinct species for which we have frozen sperm samples.
https://www.pnas.org/content/117/39/24195

World-renowned scientist Stephen Wolfram gave a wide-ranging, four-hour interview. I set this up to play at what seemed like a particularly interesting moment, but you should watch it from the beginning.
https://www.youtube.com/watch?v=-t1_ffaFXao&t=2862s

BP released a report containing predictions about the future global energy landscape. Even in their most conservative scenario, global oil consumption for transportation peaks by 2030.
https://www.bp.com/en/global/corporate/news-and-insights/press-releases/bp-energy-outlook-2020.html

Progress is being made toward building the first useful nuclear fusion reactor.
https://www.cambridge.org/core/blog/2020/09/29/scientists-present-a-comprehensive-physics-basis-for-a-new-fusion-reactor-design/

There is no known scientific barrier to creating a room-temperature superconductor. The superconductors that we already know of, which only operate at very low ambient temperatures, could work fine in deep space.
https://physics.stackexchange.com/questions/294313/are-room-temperature-superconductors-theoretically-possible-and-through-what-me

A recent experiment with an underwater server farm went well. Cooling costs were much lower because the capsule was immersed in cold seawater, and few of the servers failed because the atmospheric content in the capsule could be controlled better (a pure nitrogen atmosphere helped because oxygen corrodes computer circuits and cables). For this and other reasons, I think intelligent machines might live in the oceans.
https://www.bbc.com/news/technology-54146718

Many common, manmade objects could be made more durable and longer-lasting, for relatively small up-front cost. However, this is rarely done since it goes against the interests of manufacturers, who want consumers to buy replacement goods often. Planned obsolescence is real and pervasive. It's disturbing to think about how big a share of global economic activity is people buying replacements for things that shouldn't have needed to be thrown out.
https://www.youtube.com/watch?v=zdh7_PA8GZU

The human backup driver has been criminally charged over the infamous 2018 crash of a self-driving car that killed a homeless woman.
https://www.bbc.com/news/technology-54175359

‘“Inertial navigation was perhaps the pinnacle of mechanical engineering and among the most complicated objects ever manufactured”…But in the 1990s these were superseded by micro-electromechanical systems (MEMS)—chips with vibrating mechanical structures that detect angular motion. MEMS technology is cheap and ubiquitous (it is used in car airbags and toy drones). That makes it hard to restrict by way of military-export controls.’
https://www.economist.com/science-and-technology/2020/01/16/irans-attack-on-iraq-shows-how-precise-missiles-have-become

Here’s one of those old inertial navigation units, used to guide U.S. nuclear missiles.
https://www.thedrive.com/the-war-zone/30254/this-isnt-a-sci-fi-prop-its-a-doomsday-navigator-for-americas-biggest-cold-war-icbm

“Center Barrel Replacement Plus” is a maintenance practice in which an F/A-18 fighter plane has the middle section of its fuselage cut out and replaced with a new section. The aircraft’s wings and landing gear are attached to the “center barrel,” so the joints there wear out faster than any other part of the plane. One of the improvements incorporated in the more advanced F/A-18 Super Hornet is a modular fuselage. This allows maintenance crews to replace center barrels with greater speed and ease.
https://www.thedrive.com/the-war-zone/36435/the-plan-for-making-aging-marine-corps-hornets-deadlier-than-ever-for-a-final-decade-of-service
https://www.youtube.com/watch?v=Y5hax06xClQ

An electromagnetic aircraft launch catapult lets an aircraft carrier launch 12.5% more planes during combat than a carrier with an older steam-powered catapult.
https://nationalinterest.org/blog/buzz/emals-how-us-navy-aircraft-carriers-will-sail-future-and-dominate-169046

China’s third aircraft carrier will be larger and more advanced than its previous two, and might have an electromagnetic catapult.
https://nationalinterest.org/blog/buzz/why-chinas-third-aircraft-carrier-might-be-supercarrier-after-all-168986

And the worst “aircraft carriers” ever were the CAM Ships of WWII. The planes were violently catapulted/rocketed into the air, did their thing, and were then expected to crash land in the water next to a friendly ship, whereupon the pilot would be rescued.
https://en.wikipedia.org/w/index.php?title=CAM_ship&oldid=961354276

The U.S. Army has finally applied camouflage patterning to all the straps and belts on its infantry kits. Looks like all that’s left to do is to camouflage the Velcro patches. It’s not the biggest deal to have a big, solid green rectangle in the middle of your camouflaged shirt, but how hard would it be to fix it?
https://www.armytimes.com/news/your-army/2019/03/05/this-unit-will-be-the-first-to-get-the-armys-newest-helmet-body-armor-kit/

The Congressional Budget Office predicts the pandemic’s human and economic impact will be felt for decades. Declining birthrates and higher mortality will lead to the U.S. population being 11 million people smaller in 2050 than it otherwise would have been.
https://www.cbo.gov/publication/56598

Bad news: The U.S. just had its 200,000th COVID-19 death.
Worse news: That means the University of Washington disease model has proved itself highly accurate once again: On June 16, the Model predicted the U.S. would hit the 200,000 milestone by October 1. It now says we’ll hit the 300,000 mark by December 10, and if we’re unlucky/incompetent, we could surpass 400,000 by January 1.
https://covid19.healthdata.org/united-states-of-america?view=total-deaths&tab=trend
https://apnews.com/article/virus-outbreak-huntsville-alabama-us-news-public-health-a05360a9df7e19f9bee83f520deada1c

On June 11, Dr. Ashish Jha correctly predicted the U.S. would have its 200,000th death “sometime in September.” He now predicts a COVID-19 vaccine won’t be widely available to Americans until next spring (second link).
https://www.today.com/video/-we-will-cross-the-200-000-mark-in-coronavirus-deaths-by-september-doctor-says-84871749877
https://www.boston.com/news/coronavirus/2020/09/17/ashish-jha-trump-disputes-cdc-director-vaccine-timeline

How Ray Kurzweil’s 2019 predictions are faring (pt 1)

In 1999, Ray Kurzweil, one of the world’s greatest futurists, published a book called The Age of Spiritual Machines. In it, he made the case that artificial intelligence, nanomachines, virtual reality, brain implants, and other technologies would greatly improve during the 21st century, radically altering the world and the human experience. In the final four chapters, titled “2009,” “2019,” “2029,” and “2099,” he made detailed predictions about what the state of key technologies would be in each of those years, and how they would impact everyday life, politics and culture.

Ray Kurzweil receiving a technology award from President Clinton in 1999.

Towards the end of 2009, a number of news columnists, bloggers and even Kurzweil himself weighed in on how accurate his predictions from the eponymous chapter turned out. By contrast, no such analysis was done over the past year regarding his 2019 predictions. As such, I’m taking it upon myself to do it.

I started analyzing the accuracy of Kurzweil's predictions in late 2019 and wanted to publish my full results before the end of that year. However, the task required me to do much more research than I had expected, so I missed that deadline. Really digging into the text of The Age of Spiritual Machines and parsing each sentence made it clear that the number and complexity of the 2019 predictions were greater than a casual reading would suggest. Once I realized how big of a task it would be, I became kind of demoralized and switched to working on easier projects for this blog.

With the end of 2020 on the horizon, I think time is running out to finish this, and I’ve decided to tackle the problem by breaking it into smaller, manageable chunks: My analysis of Kurzweil’s 2019 predictions from The Age of Spiritual Machines will be spread out over three blog entries, the first of which you’re now reading. Except where noted, I will only use sources published before January 1, 2020 to support my conclusions.

“Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”

RIGHT

A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is (also, it doesn’t even need to run on electricity). This means something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer. These kinds of items were ubiquitous in developed countries in 1998 when Ray Kurzweil wrote the book, so his “futuristic” prediction for 2019 could have just as easily applied to the reality of 1998. This is an excellent example of Kurzweil making a prediction that leaves a certain impression on the casual reader (“Kurzweil says computers will be inside EVERY object in 2019!”) that is unsupported by a careful reading of the prediction.

“People routinely use three-dimensional displays built into their glasses or contact lenses. These ‘direct eye’ displays create highly realistic, virtual visual environments overlaying the ‘real’ environment.”

MOSTLY WRONG

The first attempt to introduce augmented reality glasses in the form of Google Glass was probably the most notorious consumer tech failure of the 2010s. To be fair, I think this was because the technology wasn’t ready yet (e.g. – small visual display, low-res images, short battery life, high price), and not because the device concept is fundamentally unsound. The technological hangups that killed Google Glass will of course vanish in the future thanks to factors like Moore’s Law. Newer AR glasses, like Microsoft’s Hololens, are already superior to Google Glass, and given the pace of improvement, I think AR glasses will be ready for another shot at widespread commercialization by the end of the 2020s, but they will not replace smartphones for a variety of reasons (such as the unwillingness of many people to wear glasses, widespread discomfort with the possibility that anyone wearing AR glasses might be filming the people around them, and durability and battery life advantages of smartphones).

Kurzweil’s prediction that contact lenses would have augmented reality capabilities completely failed. A handful of prototypes were made, but never left the lab, and there’s no indication that any tech company is on the cusp of commercializing them. I doubt it will happen until the 2030s.

Pokemon Go is an augmented reality video game, and has been downloaded over 1 billion times.

However, people DO routinely access augmented reality, but through their smartphones and not through eyewear. Pokemon Go was a worldwide hit among video gamers in 2016, and is an augmented reality game where the player uses his smartphone screen to see virtual monsters overlaid across live footage of the real world. Apps that let people change their appearances during live video calls (often called “face filters”), such as by making themselves appear to have cartoon rabbit ears, are also very popular among young people.

So while Kurzweil got augmented reality technology’s form factor wrong, and overestimated how quickly AR eyewear would improve, he was right that ordinary people would routinely use augmented reality.

The augmented reality glasses will also let you experience virtual reality.

WRONG

Augmented reality glasses and virtual reality goggles remain two separate device categories. I think we will someday see eyewear that merges both functions, but it will take decades to invent glasses that are thin and light enough to be worn all day, untethered, but that also have enough processing power and battery life to provide a respectable virtual reality experience. The best we can hope for by the end of the 2020s will be augmented reality glasses that are good enough to achieve ~10% of the market penetration of smartphones, and virtual reality goggles that have shrunk to the size of ski goggles.

Of note is that Kurzweil’s general sentiment that VR would be widespread by 2019 is close to being right. VR gaming made a resurgence in the 2010s thanks to better technology, and looks poised to go mainstream in the 2020s.

The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.

PARTLY RIGHT

The most popular AR glasses of the 2010s, Google Glass, worked by projecting images onto their wearer’s retinas. The more advanced AR glass models that existed at the end of the decade used a mix of methods to display images, none of which has established dominance.

“Magic Leap One”

The “Magic Leap One” AR glasses use the retinal projection technology Kurzweil favored. They are superior to Google Glass since images are displayed to both eyes (Glass only had a projector for the right eye), in higher resolution, and covering a larger fraction of the wearer’s field of view (FOV). Magic Leap One also has advanced sensors that let it map its physical surroundings and movements of its wearer, letting it display images of virtual objects that seem to stay fixed at specific points in space (Kurzweil called this feature “Virtual-reality overlay display”).

Microsoft “Hololens”

Microsoft’s “Hololens” uses a different technology to produce images: the lenses are in fact transparent LCD screens. They display images just like a TV screen or computer monitor would. However, unlike those devices, the Hololens’ LCDs are clear, allowing the wearer to also see the real world in front of them.

The “Vuzix Blade”

The “Vuzix Blade” AR glasses have a small projector that beams images onto the lens in front of the viewer’s right eye. Nothing is directly beamed onto his retina.

It must be emphasized again that, at the end of 2019, none of these or any other AR glasses were in widespread or common use, even in rich countries. They were confined to small numbers of hobbyists, technophiles, and software developers. A Magic Leap One headset cost $2,300 – $3,300 depending on options, and a Hololens was $3,000.

A man wearing HTC Vive virtual reality goggles, with hand controllers.

And as stated, AR glasses and VR goggles remained two different categories of consumer devices in 2019, with very little crossover in capabilities and uses. The top-selling VR goggles were the Oculus Rift and the HTC Vive. Both devices use tiny OLED screens positioned a few inches in front of the wearer’s eyes to display images, and as a result, are much bulkier than any of the aforementioned AR glasses. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.

“[There] are auditory ‘lenses,’ which place high resolution-sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.”

MOSTLY RIGHT

Humans have the natural ability to tell where sounds are coming from in 3D space because we have “binaural hearing”: our brains can calculate the spatial origin of the sound by analyzing the time delay between that sound reaching each of our ears, as well as the difference in volume. For example, if someone standing to your left is speaking, then the sounds of their words will reach your left ear a split second sooner than they reach your right ear, and their voice will also sound louder in your left ear.

By carefully controlling the timing and loudness of sounds that a person hears through their headphones or through a single speaker in front of them, we can take advantage of the binaural hearing process to trick people into thinking that a recording of a voice or some other sound is coming from a certain direction even though nothing is there. Devices that do this are said to be capable of “binaural audio” or “3D audio.” Kurzweil’s invented term “audio lenses” means the same thing.
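To make that mechanism concrete, here is a minimal sketch of binaural panning in Python (using only NumPy). The head radius, the 0.3 attenuation factor, and the plain delay-plus-gain model are simplifying assumptions of mine; real 3D-audio products use measured head-related transfer functions. The sketch only illustrates the two cues described above: a time delay and a volume difference between the ears.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # meters per second at room temperature
HEAD_RADIUS = 0.09       # meters; rough distance from the center of the head to an ear (assumed)
SAMPLE_RATE = 44100      # audio samples per second

def binaural_pan(mono_signal, azimuth_deg):
    """Crudely place a mono signal at an angle (0 = straight ahead, +90 = hard right)
    by delaying and attenuating the copy sent to the listener's far ear."""
    azimuth = np.radians(azimuth_deg)

    # Interaural time difference: the far ear hears the sound a split second later.
    itd_seconds = (HEAD_RADIUS / SPEED_OF_SOUND) * abs(np.sin(azimuth))
    delay_samples = int(round(itd_seconds * SAMPLE_RATE))

    # Interaural level difference: the far ear also hears it a bit quieter.
    far_gain = 1.0 - 0.3 * abs(np.sin(azimuth))

    near = mono_signal
    far = np.concatenate([np.zeros(delay_samples), mono_signal])[:len(mono_signal)] * far_gain

    # Positive azimuth means the source is to the right, so the right ear is the near ear.
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)  # stereo output: column 0 = left, column 1 = right

# Example: a one-second 440 Hz tone that seems to come from 45 degrees to the listener's right.
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
stereo = binaural_pan(0.5 * np.sin(2 * np.pi * 440 * t), 45)
```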

The Bose Frames sunglasses have small sound speakers built into them, close to the wearer’s ears.

Yes, there are eyeglasses with built-in speakers that play binaural audio. The Bose Frames “smart sunglasses” is the best example. Even though the devices are not common, they are commercially available, priced low enough for most people to afford them ($200), and have gotten good user reviews. Kurzweil gets this one right, and not by an eyerolling technicality as would be the case if only a handful of million-dollar prototype devices existed in a tech lab and barely worked.

The Apple Airpod wireless earbuds are, like most Apple products, status objects like jewelry.

Wireless earbuds are much more popular, and upper-end devices like the SoundPEATS Truengine 2 have impressive binaural audio capabilities. It’s a stretch, but you could argue that branding, and sleek, aesthetically pleasing design qualifies some higher-end wireless earbud models as “jewelry.”

Sound bars have also improved and have respectable binaural surround sound capabilities, though they’re still inferior to traditional TV entertainment system setups where the sound speakers are placed at different points in the room. Sound bars are examples of single-point devices that can trick people into thinking sounds are originating from different points in space, and in spirit, I think they are a type of technology Kurzweil would cite as proof that his prediction was right.

The last part of Kurzweil’s prediction is wrong, since audio implants into the inner ears are still found only in people with hearing problems, which is the same as it was in 1998. More generally, people have shown themselves more reluctant to surgically implant technology in their bodies than Kurzweil seems to have predicted, but they’re happy to externally wear it or to carry it in a pocket.

"Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication."

MOSTLY WRONG

Rumors of the keyboard’s demise have been greatly exaggerated. Consider that, in 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs.

Gartner’s estimates of global personal computer (PC) sales in 2018. The numbers for 2019 will be nearly the same.

The research I’ve done suggests that the typical desktop, laptop, and ultramobile computer has a lifespan of four years. If we accept this, and also assume that the worldwide computer sales figures for 2015, 2016, and 2017 were the same as 2018’s, then it means there are 1.036 billion fully functional desktops, laptops, and ultramobile computers on the planet (about one for every seven people). By extension, that means there are at least 1.036 billion keyboards. No one could reasonably say that Kurzweil’s prediction that keyboards would be “rare” by 2019 is correct.
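For transparency, here is that installed-base arithmetic as a tiny Python snippet. The 259-million annual sales figure is Gartner's 2018 number cited above; the four-year lifespan and the assumption that 2015–2017 sales matched 2018's are mine.

```python
# Back-of-the-envelope estimate of working PCs (and thus keyboards) in the world.
ANNUAL_PC_SALES = 259e6   # Gartner's 2018 figure; assumed equal for 2015-2017
LIFESPAN_YEARS = 4        # assumed average working life of a desktop/laptop/ultramobile

installed_base = ANNUAL_PC_SALES * LIFESPAN_YEARS
print(f"Estimated machines in use: {installed_base / 1e9:.3f} billion")  # ~1.036 billion
```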

The second sentence in Kurzweil’s prediction is harder to analyze since the meaning of “interaction with computing” is vague and hence subjective. As I wrote before, a Casio digital watch counts as a computer, so if it’s nighttime and I press one of its buttons to illuminate the display so I can see the time, does that count as an “interaction with computing”? Maybe.

If I swipe my thumb across my smartphone’s screen to unlock the device, does that count as an “interaction with computing” accomplished via a finger gesture? It could be argued so. If I then use my index finger to touch the Facebook icon on my smartphone screen to open the app, and then use a flicking motion of my thumb to scroll down over my News Feed, does that count as two discrete operations in which I used finger gestures to interact with computing?

You see where this is going…

Being able to set the bar that low makes it possible that this part of Kurzweil’s prediction is right, as unsatisfying as that conclusion may be.

Virtual reality game setups, like those offered by Oculus, commonly make use of hand controllers like these, which monitor the locations and movements of the player’s hands and translate them into in-game commands. This is an example of gestural control. Several million people now have advanced VR game systems like this.

Virtual reality gaming makes use of hand-held and hand-worn controllers that monitor the player’s hand positions and finger movements so he can grasp and use objects in the virtual environment, like weapons and steering wheels. Such actions count as interactions with computing. The technology will only get more refined, and I can see them replacing older types of handheld game controllers.

Hand gestures, along with speech, are also the natural means to interface with augmented reality glasses since the devices have tiny surfaces available for physical contact, meaning you can’t fit a keyboard on a sunglass frame. Future AR glasses will have front-facing cameras that watch the wearer’s hands and fingers, allowing them to interact with virtual objects like buttons and computer menus floating in midair, and to issue direct commands to the glasses through specific hand motions. Thus, as AR glasses get more popular in the 2020s, so will the prevalence of this mode of interface with computers.

Users interface with the “Gen 2” Amazon Echo through two-way spoken communication. The device is popular and highly reviewed and only costs $100, putting it within reach of hundreds of millions of households.

“Two-way natural-language spoken communication” is now a common and reliable means of interacting with computers, as anyone with a smart speaker like an Amazon Echo can attest. In fact, virtual assistants like Alexa, Siri, and Cortana can be accessed via any modern smartphone, putting this within reach of billions of people.

The last part of Kurzweil’s prediction, that people would be using “facial expressions” to communicate with their personal devices, is wrong. For what it’s worth, machines are gaining the ability to read human emotions through our facial expressions (including “microexpressions”) and speech. This area of research, called “affective computing,” is still stuck in the lab, but it will doubtless improve and find future commercial applications. Someday, you will be able to convey important information to machines through your facial expressions, tone of voice, and word choice just as you do to other humans now, enlarging your mode of interacting with “computing” to encompass those domains.

“Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Users can model the personality of their intelligent assistants on actual persons, including themselves…”

WRONG

The most widely used computer-based personal assistants–Alexa, Siri, and Cortana–don't have "personalities" or simulated emotions. They always speak in neutral or slightly upbeat tones. Users can customize some aspects of their speech and responses (e.g. – talking speed, gender, regional accent, language), and Alexa has limited "skill personalization" abilities that allow it to tailor some of its responses to the known preferences of the user interacting with it, but this is too primitive to count as a "personality adjustment" feature.

My research didn't find any commercially available AI personal assistant that has something resembling a "human personality," or that is capable of changing that personality. However, given current trends in AI research and natural language understanding, and growing consumer pressure on Silicon Valley to make products that better cater to the needs of nonwhite people, it is likely this will change by the end of this decade.

“Typically, people do not own just one specific ‘personal computer’…”

RIGHT

A 2019 Pew survey showed that 75% of American adults owned at least one desktop or laptop PC. Additionally, 81% of them owned a smartphone and 52% had tablets, and both types of devices have all the key attributes of personal computers (advanced data storing and processing capabilities, audiovisual outputs, accepts user inputs and commands).

The data from that and other late-2010s surveys strongly suggest that most of the Americans who don’t own personal computers are people over age 65, and that the 25% of Americans who don’t own traditional PCs are very likely to be part of the 19% that also lack smartphones, and also part of the 48% without tablets. The statistical evidence plus consistent anecdotal observations of mine lead me to conclude that the “typical person” in the U.S. owned at least two personal computers in late 2019, and that it was atypical to own fewer than that.

“Computing and extremely high-bandwidth communication are embedded everywhere.”

MOSTLY RIGHT

This is another prediction whose wording must be carefully parsed. What does it mean for computing and telecommunications to be “embedded” in an object or location? What counts as “extremely high-bandwidth”? Did Kurzweil mean “everywhere” in the literal sense, including the bottom of the Marianas Trench?

First, thinking about my example, it’s clear that “everywhere” was not meant to be taken literally. The term was a shorthand for “at almost all places that people typically visit” or “inside of enough common objects that the average person is almost always near one.”

Second, as discussed in my analysis of Kurzweil's first 2019 prediction, a machine that is capable of doing "computing" is of course called a "computer," and they are much more ubiquitous than most people realize. Pocket calculators, programmable thermostats, and even Casio digital watches count as computers. Even 30-year-old cars have computers inside of them. So yes, "computing" is "embedded 'everywhere'" because computers are inside of many manmade objects we have in our homes and workplaces, and that we encounter in public spaces.

Of course, scoring that part of Kurzweil's prediction as being correct leaves us feeling hollow since those devices don't do the full range of useful things we associate with "computing." However, as I noted in the previous prediction, 81% of American adults own smartphones, they keep them in their pockets or near their bodies most of the time, and smartphones have all the capabilities of general-purpose PCs. Smartphones are not "embedded" in our bodies or inside of other objects, but given their ubiquity, they might as well be. Kurzweil was right in spirit.

Third, the Wifi and mobile phone networks we use in 2019 are vastly faster at data transmission than the modems that were in use in 1999, when The Age of Spiritual Machines was published. At that time, the commonest way to access the internet was through a 33.6k dial-up modem, which could upload and download data at a maximum speed of 33,600 bits per second (bps), though upload speeds never got as close to that limit as download speeds. 56k modems had been introduced in 1998, but they were still expensive and less common, as were broadband alternatives like cable TV internet.

In 2019, standard internet service packages in the U.S. typically offered WiFi download speeds of 30,000,000 – 70,000,000 bps (my home WiFi speed is 30-40 Mbps, and I don't have an expensive service plan). Mean U.S. mobile phone internet speeds were 33,880,000 bps for downloads and 9,750,000 bps for uploads. That's a 1,000 to 2,000-fold speed increase over 1999, and is all the more remarkable since today's devices can traffic that much data without having to be physically plugged in to anything, whereas the PCs of 1999 had to be plugged into modems. And thanks to the wireless nature of internet data transmissions, "high-bandwidth communication" is available in all but the remotest places in 2019, whereas it was only accessible at fixed-place computer terminals in 1999.
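As a quick sanity check on that speedup claim, here is the arithmetic in a few lines of Python, using the connection speeds cited above (my 30-40 Mbps home service is represented by its midpoint, which is an assumption).

```python
# Rough 1999-vs-2019 bandwidth comparison, in bits per second.
dialup_1999 = 33_600           # 33.6k modem
home_wifi_2019 = 35_000_000    # midpoint of a typical 30-40 Mbps U.S. home plan
mobile_2019 = 33_880_000       # mean U.S. mobile download speed

print(f"Home WiFi:   ~{home_wifi_2019 / dialup_1999:,.0f}x faster than dial-up")
print(f"Mobile data: ~{mobile_2019 / dialup_1999:,.0f}x faster than dial-up")
# Both come out to roughly a 1,000-fold improvement; the 2,000-fold high end
# corresponds to the 70 Mbps service packages mentioned above.
```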

Again, Kurzweil’s use of the term “embedded” is troublesome, since it’s unclear how “high-bandwidth communication” could be embedded in anything. It emanates from and is received by things, and it is accessible in specific places, but it can’t be “embedded.” Given this and the other considerations, I think every part of Kurzweil’s prediction was correct in spirit, but that he was careless with how he worded it, and that it would have been better written as: “Computing and extremely high-bandwidth communication are available and accessible almost everywhere.”

"Cables have largely disappeared."

MOSTLY RIGHT

Assessing the prediction requires us to deduce which kinds of “cables” Kurzweil was talking about. To my knowledge, he has never been an exponent of wireless power transfer and has never forecast that technology becoming dominant, so it’s safe to say his prediction didn’t pertain to electric cables. Indeed, larger computers like desktop PCs and servers still need to be physically plugged into electrical outlets all the time, and smaller computing devices like smartphones and tablets need to be physically plugged in to routinely recharge their batteries.

That leaves internet cables and data/power cables for peripheral devices like keyboards, mice, joysticks, and printers. On the first count, Kurzweil was clearly right. In 1999, WiFi was a new invention that almost no one had access to, and logging into the internet always meant sitting down at a computer that had some type of data plug connecting it to a wall outlet. Cell phones weren’t able to connect to and exchange data with the internet, except maybe for very limited kinds of data transfers, and it was a pain to use the devices for that. Today, most people access the internet wirelessly.

Wireless keyboards and mice are affordable, but still significantly more expensive than their wired counterparts.

On the second count, Kurzweil’s prediction is only partly right. Wireless keyboards and mice are widespread, affordable, and are mature technologies, and even lower-cost printers meant for people to use at home usually come with integrated wireless networking capabilities, allowing people in the house to remotely send document files to the devices to be printed. However, wireless keyboards and mice don’t seem about to displace their wired predecessors, nor would it even be fair to say that the older devices are obsolete. Wired keyboards and mice are cheaper (they are still included in the box whenever you buy a new PC), easier to use since users don’t have to change their batteries, and far less vulnerable to hacking. Also, though they’re “lower tech,” wired keyboards and mice impose no handicaps on users when they are part of a traditional desktop PC setup. Wireless keyboards and mice are only helpful when the user is trying to control a display that is relatively far from them, as would be the case if the person were using their living room television as a computer monitor, or if a group of office workers were viewing content on a large screen in a conference room, and one of them was needed to control it or make complex inputs.

No one has found this subject interesting enough to compile statistics on the percentages of computer users who own wired vs. wireless keyboards and mice, but my own observation is that the older devices are still dominant.

And though average computer printers in 2019 have WiFi capabilities, the small “complexity bar” to setting up and using the WiFi capability makes me suspect that most people are still using a computer that is physically plugged into their printer to control the latter. These data cables could disappear if we wanted them to, but I don’t think they have.

This means that Kurzweil’s prediction that cables for peripheral computer devices would have “largely disappeared” by the end of 2019 was wrong. For what it’s worth, the part that he got right vastly outweighs the part he got wrong: The rise of wireless internet access has revolutionized the world by giving ordinary people access to information, services and communication at all but the remotest places. Unshackling people from computer terminals and letting them access the internet from almost anywhere has been extremely empowering, and has spawned wholly new business models and types of games. On the other hand, the world’s failure to fully or even mostly dispense with wired computer peripheral devices has been almost inconsequential. I’m typing this on a wired keyboard and don’t see any way that a more advanced, wireless keyboard would help me.

“The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” [Or 20 petaflops]

WRONG

Graphics cards provide the most calculations per second at the lowest cost of any type of computer processor. The NVIDIA GeForce RTX 2080 Ti Graphics Card is one of the fastest computers available to ordinary people in 2019. In “overclocked” mode, where it is operating as fast as possible, it does 16,487 billion calculations per second (called “flops”).

A GeForce RTX 2080 Ti retails for $1,100 and up, but let's be a little generous to Kurzweil and assume we're able to get them for $1,000 each.

$4,000 in 1999 dollars equals $6,164 in 2019 dollars. That means today, we can buy 6.164 GeForce RTX 2080 Ti graphics cards for the amount of money Kurzweil specified.

6.164 cards x 16,487 billion calculations per second per card = 101,625 billion calculations per second for the whole rig.

This computational cost-performance level is two orders of magnitude worse than Kurzweil predicted.
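Here is the same cost-performance arithmetic as a short Python script, using the prices and flops figures quoted above (the $1,000-per-card price is the same generous rounding).

```python
import math

# All figures are the ones used in the text above.
BRAIN_FLOPS = 20e15        # Kurzweil's estimate of one human brain: 20 petaflops
BUDGET_2019 = 6_164        # $4,000 in 1999 dollars, adjusted for inflation
CARD_PRICE = 1_000         # generously rounded-down price of one RTX 2080 Ti
CARD_FLOPS = 16_487e9      # overclocked RTX 2080 Ti throughput

cards = BUDGET_2019 / CARD_PRICE          # ~6.164 cards
rig_flops = cards * CARD_FLOPS            # ~1.02e14 flops

shortfall = BRAIN_FLOPS / rig_flops
print(f"The rig delivers {rig_flops:.3e} flops vs. a {BRAIN_FLOPS:.1e} flop brain")
print(f"Shortfall: ~{shortfall:.0f}x, about {math.log10(shortfall):.1f} orders of magnitude")
```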

The SuperMUC-NG supercomputer fills a large room and is as powerful as one human brain.

Additionally, according to Top500.org, a website that keeps a running list of the world’s best supercomputers and their performance levels, the “Leibniz Rechenzentrum SuperMUC-NG” is the ninth fastest computer in the world and the fastest in Germany, and straddles Kurzweil’s line since it runs at 19.4 petaflops or 26.8 petaflops depending on method of measurement (“Rmax” or “Rpeak”). A press release said: “The total cost of the project sums up to 96 Million Euro [about $105 million] for 6 years including electricity, maintenance and personnel.” That’s about four orders of magnitude worse than Kurzweil predicted.

I guess the good news is that at least we finally do have computers that have the same (or slightly more) processing power as a single, average, human brain, even if the computers cost tens of millions of dollars apiece.

“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”

WRONG

Kurzweil explains his calculations in the “Notes” section in the back of the book. He first multiplies the computation performed by one human brain by the estimated number of humans who will be alive in 2019 to get the “total computing capacity of the human species.” Confusingly, his math assumes one human brain does 10 petaflops, whereas in his preceding prediction he estimates it is 20 petaflops. He also assumed 10 billion people would be alive in 2019, but the figure fell mercifully short and was ONLY 7.7 billion by the end of the year.

Plugging in the correct figure, we get (7.7 x 10^9 humans) x 10^16 flops = 7.7 x 10^25 flops = the actual total computing capacity of all human brains in 2019.

Determining the total computing capacity of all computers in existence in 2019 can only really be guessed at. Kurzweil estimated that at least 1 billion machines would exist in 2019, and he was right. Gartner estimated that 261 million PCs (which includes desktop PCs, notebook computers [seems to include laptops], and “ultramobile premiums”) were sold globally in 2019. The figures for the preceding three years were 260 million (2018), 263 million (2017), and 270 million (2016). Assuming that a newly purchased personal computer survives for four years before being fatally damaged or thrown out, we can estimate that there were 1.05 billion of the machines in the world at the end of 2019.

However, Kurzweil also assumed that the average computer in 2019 would be as powerful as a human brain, and thus capable of 10 petaflops, but reality fell far short of the mark. As I showed in my analysis of the preceding prediction, a 10 petaflop computer setup would cost anywhere from $606,543 in GeForce RTX 2080 Ti graphics cards to $52.5 million for half of a Leibniz Rechenzentrum SuperMUC-NG supercomputer. None of the people who own the world’s 1.05 billion personal computers spent anywhere near that much money, and their machines are far less powerful than human brains.

Let’s generously assume that all of the world’s 1.05 billion PCs are higher-end (for 2019) desktop computers that cost $900 – $1,200. Everyone’s machine has an Intel Core i7, 8th Generation processor, which offers speeds of a measly 361.3 gigaflops (3.613 x 10^11 flops). A 10 petaflop human brain is 27,678 times faster!

Plugging in the computer figures, we get (1.05 x 10^9 personal computers) x 3.613 x 10^11 flops = 3.794 x 10^20 flops = the total computing capacity of all personal computers in 2019. That’s five orders of magnitude short. The reality of 2019 computing definitely fell wide of Kurzweil’s expectations.

What if we add the computing power of all the world’s smartphones to the picture? Approximately 3.2 billion people owned a smartphone in 2019. Let’s assume all the devices are higher-end (for 2019) iPhone XRs, which everyone bought new for at least $500. The iPhone XR has an A12 Bionic processor, and my research indicates it is capable of 700 – 1,000 gigaflop maximum speeds. Let’s take the higher-end estimate and do the math.

3.2 billion smartphones x 10^12 flops = 3.2 x 10^21 flops = the total computing capacity of all smartphones in 2019.

Adding things up, pretty much all of the world’s personal computing devices (desktops, laptops, smartphones, netbooks) only produce 3.5794 x 10^21 flops of computation. That’s still four orders of magnitude short of what Kurzweil predicted. Even if we assume that my calculations were too conservative, and we add in commercial computers (e.g. – servers, supercomputers), and find that the real amount of artificial computation is ten times higher than I thought, at 3.5794 x 10^22 flops, this would still only be equivalent to 1/2000th, or 0.05% of the total computing capacity of all human brains (7.7 x 10^25 flops). Thus, Kurzweil’s prediction that it would be 10% by 2019 was very wrong.
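For anyone who wants to reproduce the estimate, here is the whole calculation condensed into a short Python sketch. Every input is one of the rough assumptions stated above (1.05 billion PCs at 361.3 gigaflops, 3.2 billion smartphones at roughly 1 teraflop, 7.7 billion brains at 10 petaflops each), not a measured value.

```python
# Rough estimate of human vs. machine computing capacity in 2019,
# using the assumptions stated above.

human_brains = 7.7e9 * 1e16        # 7.7 billion people x 10 petaflops per brain
pcs          = 1.05e9 * 3.613e11   # 1.05 billion PCs x 361.3 gigaflops each
smartphones  = 3.2e9 * 1e12        # 3.2 billion phones x ~1 teraflop each

machines = pcs + smartphones
print(f"Human brains: {human_brains:.2e} flops")    # 7.70e+25
print(f"Machines:     {machines:.2e} flops")        # ~3.58e+21
print(f"Machine share of the total: {machines / (machines + human_brains):.4%}")  # ~0.005%

# Even a 10x more generous machine estimate is only ~0.05% of the total,
# far below the 10% nonhuman share that Kurzweil predicted.
```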

“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”

WRONG

For those who don’t know much about computers, the prediction says that rotating disk hard drives would be replaced with solid-state drives, which have no moving parts. A thumbdrive uses solid-state storage, as do all smartphones and tablet computers.

I gauged the accuracy of this prediction through a highly sophisticated and ingenious method: I went to the nearest Wal-Mart and looked at the computers they had for sale. Two of the mid-priced desktop PCs had rotating disk hard drives, and they also had DVD disc drives, which was surprising, and which probably makes the “other electromechanical computing devices” part of the prediction false.

The HP Pavilion 590-p0033w has a rotating hard disk drive, indicated by the “7200 RPM” (revolutions per minute) speed figure on the front of its box. The box also says it has a “DVD-Writer.” This is a newly manufactured machine, and at $499, is a mid-range desktop.
The HP Slim Desktop 290-p0043w also has a rotating hard disk drive, with a 7200 RPM speed.
And before anyone says “Well, only the clunky, old-fashioned desktops still have rotating disk drives!” check out this low-end (but newly manufactured) laptop I also found at Wal-Mart. The HP 15-bs212wm has a rotating hard disk drive and a DVD drive.

If the world’s biggest brick-and-mortar retailer is still selling brand new computers with rotating hard disk drives and rotating DVD disc drives, then it can’t be said that solid state memory storage has “fully replaced” the older technology.

“Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.”

MOSTLY WRONG

Many solid-state computer memory chips, such as common thumbdrives and MicroSD cards, have 3D circuitry, and it is accurate to call them “prevalent.” However, 3D circuitry has not found routine use in computer processors thanks to unsolved problems with high manufacturing costs, unacceptably high defect rates, and overheating.

An internal diagram of a common MicroSD card, which has the simple job of storing data. It has about 18 layers. Memory storage chips are less sensitive to manufacturing defects since they have redundancy.
An exploded diagram of Intel’s upcoming “Lakefield” processor, which has the complex job of storing and processing data. It has four layers, and is much more technically challenging to make than a 3D memory chip.

In late 2018, Intel claimed it had overcome those problems thanks to a proprietary chip manufacturing process, and that it would start selling the resulting “Lakefield” line of processors soon. These processors have four vertically stacked layers, so they meet the requirement for being “3D.” Intel hasn’t sold any yet, and it remains to be seen whether they will be commercially successful.

Silicon is still the dominant computer chip substrate, and carbon-based nanotubes haven’t been incorporated into chips because Intel and AMD couldn’t figure out how to cheaply and reliably fashion them into chip features. Nanotube computers are still experimental devices confined to labs, and they are grossly inferior to traditional silicon-based computers when it comes to doing useful tasks. Nanotube computer chips that are also 3D will not be practical anytime soon.

It’s clear that, in 1999, Kurzweil simply overestimated how much computer hardware would improve over the next 20 years.

“The majority of ‘computes’ of computers are now devoted to massively parallel neural nets and genetic algorithms.”

UNCLEAR

Assessing this prediction is hard because it’s unclear what the term “computes” means. It is probably shorthand for “compute cycles,” which is a term that describes the sequence of steps to fetch a CPU instruction, decode it, access any operands, perform the operation, and write back any result. It is a process that is more complex than doing a calculation, but that is still very basic. (I imagine that computer scientists are the only people who know, offhand, what “compute cycle” means.)
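To make the term concrete, here is a toy fetch-decode-execute loop in Python. It is only a conceptual sketch of the steps just listed, not a model of any real processor.

```python
# Toy illustration of the fetch-decode-execute cycle described above.
# Each pass through the loop is one "compute cycle" in the rough sense used here.

memory = [("LOAD", 5), ("ADD", 3), ("STORE", 0), ("HALT", None)]
accumulator = 0
data = [0]
pc = 0  # program counter

while True:
    opcode, operand = memory[pc]     # fetch the next instruction
    pc += 1
    if opcode == "LOAD":             # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        data[operand] = accumulator  # write back the result
    elif opcode == "HALT":
        break

print(data[0])  # prints 8
```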

Assuming “computes” means “compute cycles,” I have no idea how to quantify the number of compute cycles that happened, worldwide, in 2019. It’s an even bigger mystery to me how to determine which of those compute cycles were “devoted to massively parallel neural nets and genetic algorithms.” Kurzweil doesn’t describe a methodology that I can copy.

Also, what counts as a “massively parallel neural net”? How many processor cores does a neural net need to have to be “massively parallel”? What are some examples of non-massively parallel neural nets? Again, an ambiguity in the wording of the prediction frustrates analysis. I’d love to see Kurzweil assess the accuracy of this prediction himself and explain his answer.

“Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets.”

PARTLY RIGHT

The use of the ambiguous adjective “significant” gives Kurzweil an escape hatch for the first part of this prediction. Since 1999, brain scanning technology has improved, and the body of scientific literature about how brain activity correlates with brain function has grown. Additionally, much has been learned by studying the brain at a macro-level rather than at a cellular level. For example, in a 2019 experiment, scientists were able to accurately reconstruct the words a person was speaking by analyzing data from the person’s brain implant, which was positioned over their auditory cortex. Earlier experiments showed that brain-computer-interface “hats” could do the same, albeit with less accuracy. It’s fair to say that these and other brain-scanning studies represent “significant progress” in understanding how parts of the human brain work, and that the machines were gathering data at the level of “brain regions” rather than at the finer level of individual brain cells.

Yet in spite of many tantalizing experimental results like those, an understanding of how the brain produces cognition has remained frustratingly elusive, and we have not extracted any new algorithms for intelligence from the human brain in the last 20 years that we’ve been able to incorporate into software to make machines smarter. The recent advances in deep learning and neural network computers–exemplified by machines like AlphaZero–use algorithms invented in the 1980s or earlier, just running on much faster computer hardware (specifically, on graphics processing units originally developed for video games).

If anything, since 1999, researchers who studied the human brain to gain insights that would let them build artificial intelligences have come to realize how much more complicated the brain was than they first suspected, and how much harder a problem it would be to solve. We might have to accurately model the brain down to the intracellular level (e.g. – simulating not just the neurons, but also their surface receptors and ion channels) to finally grasp how it works and produces intelligent thought. Considering that the best we have done up to this point is mapping the connections of a fruit fly brain, and that a human brain is 600,000 times bigger, we won’t have a detailed human brain simulation for many decades.

“It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.”

RIGHT

This prediction is right, but it’s not noteworthy since it merely re-states things that were widely accepted and understood to be true when the book was published in 1999. It’s akin to predicting that “A thing we think is true today will still be considered true in 20 years.”

The prediction’s first statement is an odd one to make since it implies that there was ever serious debate among brain scientists and geneticists over whether the human genome encoded every detail of how the human brain is wired. As Kurzweil points out earlier in the book, the human genome is only about 3 billion base-pairs long, and the genetic information it contains could be as low as 23 megabytes, but a developed human brain has 100 billion neurons and 10^15 connections (synapses) between those neurons. Even if Kurzweil is underestimating the amount of information the human genome stores by several orders of magnitude, it clearly isn’t big enough to contain instructions for every aspect of brain wiring, and therefore, it must merely lay down more general rules for brain development.
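A quick order-of-magnitude check makes the mismatch obvious. The inputs below are the figures cited above (3 billion base pairs, 10^15 synapses); the “one byte per synapse” figure is a deliberately generous assumption of my own, and I use the raw, uncompressed genome size rather than Kurzweil’s 23-megabyte estimate.

```python
# Order-of-magnitude check: the genome is far too small to specify every synapse.

base_pairs = 3e9
bits_per_base_pair = 2                  # A, C, G, or T = 2 bits of information
genome_bytes = base_pairs * bits_per_base_pair / 8
print(f"Raw genome size: ~{genome_bytes / 1e6:.0f} MB")        # ~750 MB, uncompressed

synapses = 1e15
bytes_per_synapse = 1                   # absurdly optimistic: 1 byte to specify a connection
wiring_bytes = synapses * bytes_per_synapse
print(f"Wiring spec at 1 byte/synapse: ~{wiring_bytes / 1e12:.0f} TB")  # ~1,000 TB

print(f"Shortfall: ~{wiring_bytes / genome_bytes:.0e}x")       # about a million times too small
```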

I also don’t understand why Kurzweil wrote the second part of the statement. It’s commonly recognized that part of childhood brain development involves the rapid pruning of interneuronal connections that, based on interactions with the child’s environment, prove less useful, and the strengthening of connections that prove more useful. It would be apt to describe this as “a rapid evolutionary process” since the child’s brain is rewiring itself to adapt the child to its surroundings. This mechanism of strengthening brain connection pathways that are rewarded or frequently used, and weakening pathways that result in some kind of misfortune or that are seldom used, continues until the end of a person’s life (though it gets less effective with age). This paradigm was “recognized” in 1999 and has never been challenged.

Machine-based neural nets are, in a very general way, structured like the human brain; they also rewire themselves in response to stimuli, and some of them use genetic algorithms to guide the rewiring process (see this article for more info: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414). However, all of this was also true in 1999.
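For readers curious what “using a genetic algorithm to wire a neural net” can look like, here is a minimal, self-contained toy in Python: a population of tiny networks whose weights are randomly mutated and selected on the XOR task. It is purely illustrative and is not the method used by any particular research project.

```python
import math
import random

# Toy demonstration: evolve the weights of a tiny neural net (2 inputs,
# 2 hidden units, 1 output) with a genetic algorithm so it learns XOR.

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(genome, x):
    # genome holds 9 numbers: 6 hidden-layer weights/biases + 3 output weights/bias
    h1 = math.tanh(genome[0] * x[0] + genome[1] * x[1] + genome[2])
    h2 = math.tanh(genome[3] * x[0] + genome[4] * x[1] + genome[5])
    return 1 / (1 + math.exp(-(genome[6] * h1 + genome[7] * h2 + genome[8])))

def fitness(genome):
    # Higher is better: negative squared error across the four XOR cases.
    return -sum((forward(genome, x) - y) ** 2 for x, y in CASES)

def mutate(genome, rate=0.3, scale=0.5):
    # Randomly perturb some weights; this is the "genetic" variation step.
    return [w + random.gauss(0, scale) if random.random() < rate else w for w in genome]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                    # selection: keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
for x, y in CASES:
    print(x, y, round(forward(best, x), 2))   # outputs should approach 0, 1, 1, 0
```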

“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”

WRONG

Devices that harness the principle of quantum entanglement to create images of distant objects do exist and are better than devices from 1999, but they aren’t good enough to exit the R&D labs. They also have not been shrunk to pinhead sizes. Kurzweil overestimated how fast this technology would develop.

Virtually all cameras still have lenses, and still operate by the old method of focusing incoming light onto a physical medium that captures the patterns and colors of that light to form a stored image. The physical medium used to be film, but now it is a digital image sensor.

A teardown of a Samsung Galaxy S10 smartphone reveals its three digital cameras, which produce very high-quality photos and videos. Comparing them to the tweezers and human fingers, it’s clear they are only as big as small coins.

Digital cameras were expensive, clunky, and could only take low-quality images in 1999, so most people didn’t think they were worth buying. Today, all of those deficiencies have been corrected, and a typical digital camera sensor plus its integrated lens is the size of a small coin. As a result, the devices are very widespread: 3.2 billion people owned a smartphone in 2019, and all of them probably had integral digital cameras. Laptops and tablet computers also typically have integral cameras. Small standalone devices, like pocket cameras, webcams, car dashcams, and home security doorbell cameras, are also cheap and very common. And as any perusal of YouTube.com will attest, people are using their cameras to record events of all kinds, all the time, and are sharing them with the world.

This prediction stands out as one that was wrong in specifics, but kind of right in spirit. Yes, since 1999, cameras have gotten much smaller, cheaper, and higher-quality, and as a result, they are “everywhere” in the figurative sense, with major consequences (good and bad) for the world. Unfortunately, Kurzweil needlessly stuck his neck out by saying that the cameras would use an exotic new technology, and that they would be “pinhead-sized” (he hurt himself the same way by saying that the augmented reality glasses of 2019 would specifically use retinal projection). For those reasons, his prediction must be judged as “wrong.”

“Autonomous nanoengineered machines can control their own mobility and include significant computational engines. These microscopic machines are beginning to be applied to commercial applications, particularly in manufacturing and process control, but are not yet in the mainstream.”

WRONG

A state-of-the-art microscopic machine invented in 2019 can move around in water by twirling its four “flippers.”

While there has been significant progress in nano- and micromachine technology since 1999 (the 2016 Nobel Prize in Chemistry was awarded to scientists who had invented nanomachines), the devices have not gotten nearly as advanced as Kurzweil predicted. Some microscopic machines can move around, but the movement is guided externally rather than autonomously. For example, turtle-like micromachines invented by Dr. Marc Miskin in 2019 can move by twirling their tiny “flippers,” but the motion is powered by shining laser beams on them to expand and contract the metal in the flippers. The micromachines lack their own power packs, lack computers that tell the flippers to move, and therefore aren’t autonomous.

In 2003, UCLA scientists invented “nano-elevators,” which were also capable of movement and still stand as some of the most sophisticated types of nanomachines. However, they also lacked onboard computers and power packs, and were entirely dependent on external control (the addition of acidic or basic liquids to make their molecules change shape, resulting in motion). The nano-elevators were not autonomous.

Similarly, a “nano-car” was built in 2005, and it can drive around a flat plate made of gold. However, the movement is uncontrolled and only happens when an external stimulus–an input of high heat into the system–is applied. The nano-car isn’t autonomous or capable of doing useful work. This and all the other microscopic machines created up to 2019 are just “proof of concept” machines that demonstrate mechanical principles that will someday be incorporated into much more advanced machines.

Significant progress has been made since 1999 building working “molecular motors,” which are an important class of nanomachine, and building other nanomachine subcomponents. However, this work is still in the R&D phase, and we are many years (probably decades) from being able to put it all together to make a microscopic machine that can move around under its own power and will, and perform other operations. The kinds of microscopic machines Kurzweil envisioned don’t exist in 2019, and by extension are not being used for any “commercial applications.”

Whew! That’s it for now. I’ll try to publish PART 2 of this analysis next month. Until then, please share this blog entry with any friends who might be interested. And if you have any comments or recommendations about how I’ve done my analysis, feel free to comment.

Links:

  1. Ray Kurzweil’s self-analysis of how accurate his 2009 predictions were: https://kurzweilai.net/images/How-My-Predictions-Are-Faring.pdf
  2. The inventor of the first augmented reality contact lenses predicted in 2015 that commercially viable versions of the devices wouldn’t exist for at least 20 more years. (https://www.inverse.com/article/31034-augmented-reality-contact-lenses)
  3. In late 2019, a Magic Leap One cost $2,300 – $3,300 and a Hololens was $3,000. https://www.cnn.com/2019/12/10/tech/magic-leap-ar-for-companies/index.html
  4. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800. (https://www.theverge.com/2019/5/16/18625238/vr-virtual-reality-headsets-oculus-quest-valve-index-htc-vive-nintendo-labo-vr-2019)
  5. In 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs. Keyboards aren’t dead.
    (https://venturebeat.com/2019/01/10/gartner-and-idc-hp-and-lenovo-shipped-the-most-pcs-in-2018-but-total-numbers-fell/)
  6. Survey data from 2018 about the global usage of “digital personal assistants.” Users speak to their smartphones or smart speakers, mostly to obtain simple information (like weather forecasts) or to have their computers do simple tasks. (https://www.business2community.com/infographics/the-growth-in-usage-of-virtual-digital-assistants-infographic-02056086)
  7. 2019 Pew Survey showing that the overwhelming majority of American adults owned a smartphone or traditional PC. People over age 64 were the least likely to own smartphones. (https://www.pewresearch.org/internet/fact-sheet/mobile/)
  8. A 2015 American Community Survey revealed that households headed by people over 64 were the least likely to have smartphones, PCs, or internet access. (https://www.census.gov/content/dam/Census/library/publications/2017/acs/acs-37.pdf)
  9. In 2000, 34% of Americans accessed the internet through dial-up modems, and only 3% did so through “broadband” (a catch-all for cable, DSL, and satellite access). Most U.S. internet users were still using dial-up modems that were at most 56k. The remaining 63% didn’t access it at all. (http://thetechnews.com/2016/01/03/usa-getting-faster-internet-speeds-but-not-at-the-pace-others-are/)
  10. In 2019, a mid-tier internet service plan in the U.S. granted users download speeds of 30 – 60 Mbps. (https://www.pcmag.com/news/state-by-state-the-fastest-and-slowest-us-internet)
  11. 2019 U.S. mobile phone network average speeds were 33.88 Mbps for downloads and 9.75 Mbps for uploads. (https://www.speedtest.net/reports/united-states/)
  12. The Black Friday 2019 circular for Newegg.com featured five models of printers for sale. Only one of them, the Brother HL-L2300D, wasn’t WiFi-capable. (https://bestblackfriday.com/ads/newegg-black-friday/page-12#ad_view)
  13. Gartner figures for global computer sales in 2015, 2016, 2017, 2018 and 2019.
    (https://www.gartner.com/en/newsroom/press-releases/2017-01-11-gartner-says-2016-marked-fifth-consecutive-year-of-worldwide-pc-shipment-decline)
    (https://venturebeat.com/2018/01/11/gartner-and-idc-agree-hp-shipped-the-most-pcs-in-2017/)
    (https://www.gartner.com/en/newsroom/press-releases/2020-01-13-gartner-says-worldwide-pc-shipments-grew-2-point-3-percent-in-4q19-and-point-6-percent-for-the-year)
  14. Intel’s i7 Generation 8 processor is capable of 361.3 gigaflop speeds. (https://www.pugetsystems.com/labs/hpc/Skylake-X-7800X-vs-Coffee-Lake-8700K-for-compute-AVX512-vs-AVX2-Linpack-benchmark-1068/)
  15. 3.2 billion people owned a smartphone in 2019. (https://newzoo.com/insights/trend-reports/newzoo-global-mobile-market-report-2019-light-version/)
  16. In 2019, 3D chips were common in memory storage devices, like MicroSD cards. 3D NAND chips had up to 64 layers. (https://semiengineering.com/what-happened-to-nanoimprint-litho/)
  17. In 2019, Intel was still working the kinks out of its first 3D computer processor, called “Lakefield,” and it wasn’t ready for commercial sales. (https://www.overclock3d.net/news/cpu_mainboard/intel_details_their_lakefield_processor_design_and_foveros_3d_packaging_tech/1)
  18. In 2019, computer circuits made of carbon nanotubules were still stuck in research labs, and held back from commercialization by many unsolved problems relating to cost of manufacture and reliability. Silicon was still the dominant computing substrate. (https://www.sciencenews.org/article/chip-carbon-nanotubes-not-silicon-marks-computing-milestone)
  19. “Compute cycle” has three meanings: #1 (https://www.zdnet.com/article/how-much-is-a-unit-of-cloud-computing/), #2 (https://www.quora.com/What-is-a-Compute-cycle) and #3 (https://www.computerhope.com/jargon/c/compute.htm)
  20. In a 2019 experiment, researchers were able to decode the words a person was speaking by studying their brain activity. (https://www.biorxiv.org/content/10.1101/350124v2)
  21. “The current ways of trying to represent the nervous system…[are little better than] what we had 50 years ago.”  –Marvin Minsky, 2013 (https://youtu.be/3PdxQbOvAlI)
  22. “Today’s neural nets use algorithms that were essentially developed in the early 1980s.” (https://futurism.com/cmu-brain-research-grant)
  23. The inventor of “back-propagation,” which spawned many computer algorithms central to AI research, now believes it will never lead to true intelligence, and that the human brain doesn’t use it. (https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html)
  24. Henry Markram’s project to create a human brain simulation by 2019 failed. (https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/)
  25. “Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat.” –Yann LeCun, 2017 (https://www.theverge.com/2017/10/26/16552056/a-intelligence-terminator-facebook-yann-lecun-interview)
  26. Machine neural networks are similar to human brains in key ways. (https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414)
  27. Some machine neural nets use genetic algorithms. (https://blog.coast.ai/lets-evolve-a-neural-network-with-a-genetic-algorithm-code-included-8809bece164)
  28. Quantum imaging is a real thing. However, devices that can make use of it are still experimental. (https://onlinelibrary.wiley.com/doi/full/10.1002/lpor.201900097)
  29. The Samsung Galaxy S10 is an upper-end smartphone released in 2019. It has three digital cameras, all of which operate on the same technology principles as the digital cameras of 1999. (https://www.digitalcameraworld.com/reviews/samsung-galaxy-s10-camera-review)
  30. The 2016 Nobel Prize in Chemistry was given to three scientists who had done pioneering work on nanomachines. (https://www.extremetech.com/extreme/237575-2016-nobel-prize-in-chemistry-awarded-for-nanomachines)
  31. Dr. Marc Miskin’s micromachines from 2019 are interesting, but a far cry from what Kurzweil thought we’d have by then. (https://www.inquirer.com/health/micro-robots-upenn-cornell-20190307.html)