Interesting articles, May 2021

Israel’s “Iron Dome” system went into action during this month’s fighting with insurgents in Gaza. The remarkable photo above shows the cutting-edge Israeli interceptor missiles at left, and the crude, vastly cheaper Palestinian rockets at right. It depicts the essence of “asymmetrical warfare.”
https://www.bbc.com/news/world-middle-east-20385306

Israel has developed a putrid-smelling liquid called “skunk water,” which it sprays out of trucks to disperse Palestinian rioters.
https://www.bbc.com/news/magazine-34227609

The U.S. experimented with a helicopter-carrying submarine. It didn’t work out.
https://www.thedrive.com/the-war-zone/40580/uss-sealion-was-the-navys-unique-helicopter-accomadating-submarine

Here are the world’s worst aircraft carriers.
https://nationalinterest.org/blog/buzz/aircraft-carriers-didnt-make-it%E2%80%94and-how-they-influenced-ones-did-184495

The U.S. Army is adopting incredible new night vision goggles, named (in characteristically dry fashion) the Enhanced Night Vision Goggle-Binocular (ENVG-B). The system uses ambient light amplification and thermal imaging to generate colorful, composite images that are much more detailed than the monochrome green images we’re used to.
https://gizmodo.com/the-armys-new-night-vision-goggles-look-like-technology-1846799718

One thing blocking laser weapons is their incredible inefficiency: using a laser to burn a cylinder-shaped hole through a human body requires literally 1,000 times more energy than shooting a bullet of the same diameter through it.
https://www.youtube.com/watch?v=EXqOHk1LgD8
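
As a rough sanity check on that ratio, here’s a back-of-envelope sketch in Python. Every number below is my own assumption, not the video’s. Just vaporizing the wound channel already costs on the order of 100 times a bullet’s muzzle energy; fold in lasers’ poor wall-plug efficiency and beam losses, and the total electrical energy drawn heads toward the 1,000x figure.

```python
# Back-of-envelope only: all figures below are assumptions, not the video's.
import math

BULLET_DIAMETER_M = 0.009    # a 9 mm round
CHANNEL_LENGTH_M = 0.3       # assumed path length through a torso
BULLET_ENERGY_J = 500.0      # typical 9 mm muzzle energy

# Treat tissue as water: heat it from 37 C to 100 C, then boil it away.
SPECIFIC_HEAT = 4186.0          # J/(kg*K)
HEAT_OF_VAPORIZATION = 2.26e6   # J/kg
DENSITY = 1000.0                # kg/m^3

volume_m3 = math.pi * (BULLET_DIAMETER_M / 2) ** 2 * CHANNEL_LENGTH_M
mass_kg = volume_m3 * DENSITY
beam_energy_j = mass_kg * (SPECIFIC_HEAT * (100 - 37) + HEAT_OF_VAPORIZATION)

print(f"wound channel: {mass_kg * 1000:.0f} g of tissue")   # ~19 g
print(f"beam energy: {beam_energy_j / 1000:.0f} kJ vs. bullet: {BULLET_ENERGY_J:.0f} J")
print(f"ratio at the target alone: {beam_energy_j / BULLET_ENERGY_J:.0f}x")  # ~96x
```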

3D, moving holograms like those shown in many sci-fi films are a long way off. The best we can manage are tiny holograms that are hazardous to use (note the scientists in the video wearing goggles to protect their eyes from the hologram’s laser projector).
https://youtu.be/N12i_FaHvOU

A machine just won the world’s premier crossword puzzle championship, meaning it is probably better than the best human.
https://www.bbc.com/news/technology-56934716

“Clearly AI is going to win [against human intelligence]. It’s not even close.”
–Daniel Kahneman, psychologist and winner of the Nobel Memorial Prize in Economic Sciences

‘Before they outlive us by eons, our avatar might be a friend and a companion that really “gets us.” While they’re at our side, they’ll continually learn about us from us. They’ll be a sounding board on personal and professional matters and will always be prepared to fill in for us in a variety of situations as needed.

And that’s where things can get a little disturbing. Will our avatar be the idealized us or just more of the same? Will the avatar of a criminal also be a criminal? Will there be armies of avatars? Will we compete with our avatar for the love and attention of others? Will they become too much like us – with weak moments and ulterior motives? Can we program our avatar for good?’
https://futuristspeaker.com/artificial-intelligence/digital-ai-avatars-of-ourselves/

Is California ready for the “ARkStorm”?
‘A severe California winter storm could realistically flood thousands of square miles of urban and agricultural land, result in thousands of landslides, disrupt lifelines throughout the state for days or weeks, and cost on the order of $725 billion.’
https://www.usgs.gov/natural-hazards/science-application-risk-reduction/science/arkstorm-scenario?qt-science_center_objects=0#qt-science_center_objects

Though the River Don flows through chilly northern England, fig trees suited to the Mediterranean grow on its banks. Why? Because nearby factories divert the river’s water to cool their machines, then dump it back in, raising the Don’s overall temperature. This is a tiny example of the waste heat problem that could someday pose a serious threat to Earth.
https://ianswalkonthewildside.wordpress.com/2016/01/08/river-don-fig-forest/

We could use genetically engineered plants and fungi in the future to clean up wastes in the soil and to extract trace amounts of valuable minerals and elements.
https://en.wikipedia.org/wiki/Biomining

There are about 50 billion wild birds.
https://www.bbc.com/news/science-environment-57150571

‘The intertwined story of these three characters—the sea slug E. rufescens, marine algae of the genus Bryopsis, and the newly identified bacteria—form a three-way symbiotic relationship. A symbiotic relationship is one in which several organisms closely interact. In this example, the slug gets food and defensive chemicals, the algae get chemicals, and the bacteria get a home and free meals for life in the form of nutrients from their algae host.’
https://phys.org/news/2019-06-sea-slugs-algae-bacterial-weapons.html

“Somewhere around 5 to 20% of our genomic DNA appears to be detritus from ancient retroviruses.”
Have our bodies repurposed this genetic material to serve useful functions, or is it all “dead weight” that saps energy from our cells? Would people who deleted all their useless DNA have genomes that were so much shorter they wouldn’t count as Homo sapiens anymore, even though they were identical to regular humans at the macro-level?
https://blogs.sciencemag.org/pipeline/archives/2021/05/10/integration-into-the-human-genome

‘From a technical perspective, cloning humans and other primates is more difficult than in other mammals. One reason is that two proteins essential to cell division, known as spindle proteins, are located very close to the chromosomes in primate eggs. Consequently, removal of the egg’s nucleus to make room for the donor nucleus also removes the spindle proteins, interfering with cell division. In other mammals, such as cats, rabbits and mice, the two spindle proteins are spread throughout the egg. So, removal of the egg’s nucleus does not result in loss of spindle proteins.’
https://www.genome.gov/about-genomics/fact-sheets/Cloning-Fact-Sheet

After 26 years of FDA safety approval delays, genetically engineered salmon can now be sold to people in the U.S.
https://reason.com/2021/05/14/after-26-years-of-fda-delays-u-s-consumers-can-finally-buy-genetically-enhanced-aquabounty-salmon/

There is a VR headset that lets you see the world through the eyes of someone high on magic mushrooms.
https://www.nature.com/articles/s41598-017-16316-2

Here’s a flashback to the 2012 Consumer Electronics Show featuring a groundbreaking 55-inch OLED TV. They’re now becoming common.
https://youtu.be/sa87ZQtj3ag

Another of my future predictions (first made in 2019) is getting close to coming true. My prediction:
‘[During the 2030s], movie subtitles and the very notion of there being “foreign language films” will become obsolete. Computers will be able to perfectly translate any human language into another, to create perfect digital imitations of any human voice, and to automatically apply CGI so that the mouth movements of people in video footage matches the translated words they’re speaking.’
The article says a British company called “Flawless” is using deepfake technology to do what I predicted. The sample footage is “fair” quality, and the CGI mouth movements don’t look totally real, but of course it will improve with time.
https://www.wired.com/story/ai-makes-de-niro-perform-lines-flawless-german/

Wikipedia’s “List of emerging technologies” is an interesting read.
https://en.wikipedia.org/wiki/List_of_emerging_technologies

Electric motors are much lighter than gas engines of equivalent power, but electric cars are much heavier because they need huge batteries. If the energy density of batteries improved by about 40%, which could happen, then the vehicles would weigh the same.
https://www.quora.com/How-much-would-an-average-electric-car-weigh-as-opposed-to-a-comparable-gasoline-powered-car
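
A minimal sketch of that arithmetic, using hypothetical round numbers rather than any particular car model:

```python
# Hypothetical round numbers, chosen only to illustrate the claim above.
battery_pack_kg = 600.0    # assumed EV battery pack mass
weight_penalty_kg = 175.0  # assumed extra weight of the EV vs. a comparable gas car

# A 40% energy-density gain stores the same energy in 1/1.4 of the mass.
improved_pack_kg = battery_pack_kg / 1.4
savings_kg = battery_pack_kg - improved_pack_kg

print(f"pack mass drops {savings_kg:.0f} kg ({savings_kg / battery_pack_kg:.0%})")
print(f"remaining weight gap: {weight_penalty_kg - savings_kg:.0f} kg")  # ~0
```

The point is just that a 40% density gain shaves about 29% off the pack’s mass; whether that fully closes the gap depends on the real pack mass and weight penalty.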

‘Instead of trying to max out every cubic meter of the hall, [the Takaoka II car factory] more or less ignores the 3rd dimension. Everything happens on one flat plane. There are no overhead gantries, and because nothing happens above, there are no height restrictions for the cars made on the shop floor. There is a lot of those two dimensions in the back of the giant, but simple hall Takaoka II occupies: Half of its space sits empty, breathing space for the flexible lines. The super-flexible “Takaoka II could theoretically build any number of models on the same line,” tells me Akahane, “but it probably would stop making sense at six.”’
https://www.thedrive.com/tech/26955/inside-toyotas-takaoka-2-line-the-most-flexible-line-in-the-world

‘The Heavy Press Program was a Cold War-era program of the United States Air Force to build the largest forging presses and extrusion presses in the world. These machines greatly enhanced the US defense industry’s capacity to forge large complex components out of light alloys, such as magnesium and aluminium.’
https://en.wikipedia.org/wiki/Heavy_Press_Program

‘But TSMC’s vice president of corporate research, Dr. Philip Wong, was keen to point out that after introducing his company’s latest node, despite a history of the node naming scheme actually having some relevance to the silicon features etched into the wafer, the node names are now effectively meaningless. So, while we might like to think that the N7, N5, and N3 names it’s using for its 7nm, 5nm, and 3nm nodes relate to the gate length of transistors, they’re effectively just brand names.’
https://www.pcgamesn.com/amd/tsmc-7nm-5nm-and-3nm-are-just-numbers

The DC police department and a major U.S. oil pipeline were hacked and the stolen data ransomed.
https://apnews.com/article/police-technology-government-and-politics-53e54780aa080decbb78d5b88d4ff44b
https://apnews.com/article/europe-government-and-politics-technology-business-938b33938fe3a750367fb1dc2f7ce6e0

This video explains why the oil pipeline hack was so disruptive.
https://www.youtube.com/watch?v=rBPud5PyySk

Generative design is interesting, though it’s obviously a more expensive way to make objects. You might be able to make five standard, mass-produced, sub-optimal chairs for the same cost (in terms of money and time) as one customized, optimized chair. Spare parts availability is another problem.
https://futurism.com/generative-design-could-radically-transform-the-look-of-our-world

A second fighter pilot who saw the UFO during the 2004 USS Nimitz incident has come forward, and confirmed the first pilot’s public account.
https://www.youtube.com/watch?v=ZBtMbBPzqHY

Reports have emerged of another encounter between the U.S. military and UFOs. This one happened in July 2019, and involved strange aircraft flying near U.S. warships during training missions. The aircraft were detected on radar, and by thermal sensors and night vision cameras.
https://twitter.com/TODAYshow/status/1398262582599815172
https://www.mysterywire.com/ufo/uss-omaha-ufo-video/

Some of the recent UFO sightings by U.S. military people might have been of Russian and Chinese unmanned spy drones that were purposefully made to look weird. The two countries are aware of the U.S. government’s strong aversion to ever talking about or even investigating possible alien spacecraft sightings, so they built expendable spy balloons and spy drones that look strange and have weird radar and thermal signatures, and have been launching them off the East and West coasts to surveil our military forces during routine exercises.
https://www.thedrive.com/the-war-zone/40054/adversary-drones-are-spying-on-the-u-s-and-the-pentagon-acts-like-theyre-ufos

Here are some very exotic notions of what aliens might be like. Machine aliens and aliens that use DNA that is chemically different from ours are the tamer hypotheses.
https://listverse.com/2015/07/17/10-hypothetical-forms-of-life/

It might be possible to “blow up” Jupiter by detonating a nuclear weapon in the layer of its atmosphere that is rich in deuterium. The resulting explosion would release thousands of times more energy than the Sun, obliterating whichever side of the Earth was facing the planet at that moment.
https://physics.stackexchange.com/questions/34573/what-would-be-the-characteristics-of-jupiter-if-it-shrank

The first human went into space 60 years ago.
https://apnews.com/article/spacex-lifestyle-travel-apollo-11-moon-landing-business-cbe5e6b34422af6a80ae92fe084981be

SpaceX’s new, reusable “Starship” rocket made its first successful test flight. It could be used to send astronauts back to the Moon and possibly to Mars.
https://www.bbc.com/news/science-environment-57004604

Virgin Galactic’s “Unity” space plane made a successful test flight, reaching an altitude of 55 miles. For comparison, passenger planes typically fly at 5 or 6 miles, and the International Space Station orbits at an altitude of 254 miles. Unity could start ferrying tourists to the edge of space as early as next year.
https://www.bbc.com/news/science-environment-57214988

China landed its first rover on Mars, becoming only the second country to do so.
https://apnews.com/article/china-technology-business-science-e1c1d0679aa78a8cc79c04a4d1375322

The threat of Earth being encircled by “space junk” that prevents us from going into space anymore is exaggerated and ultimately a solvable problem. Most of the debris in orbit falls back to Earth in a matter of decades.
https://www.nasa.gov/news/debris_faq.html

“Energy is limited here. In at least a few hundred years … all of our heavy industry will be moved off-planet,” Bezos added.
https://www.vox.com/2016/6/1/11826514/jeff-bezos-space-save-earth

Using a giant solar panel floating in space, we could capture enough energy from the Sun in a year to manufacture a tiny black hole. Its Hawking radiation emissions could then be harnessed to power a space ship. The artificial black hole would have a diameter measured in quintillionths of a meter. The smallest known naturally occurring black hole, by contrast, is ten miles wide. The hypothetical, manmade black hole would still have a mass of 1 – 6 million tons. A fully loaded U.S. aircraft carrier weighs about 100,000 tons.
https://arxiv.org/pdf/0908.1803.pdf
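
That size claim is easy to check against the Schwarzschild radius formula, r = 2GM/c². A minimal sketch:

```python
# Schwarzschild radius r = 2GM/c^2 for a black hole in the 1-6 million ton range.
G = 6.674e-11   # gravitational constant, m^3/(kg*s^2)
C = 2.998e8     # speed of light, m/s

for mass_tons in (1e6, 6e6):
    mass_kg = mass_tons * 1000           # metric tons to kilograms
    radius_m = 2 * G * mass_kg / C**2
    print(f"{mass_tons:.0e} tons -> diameter {2 * radius_m:.1e} m")
# Prints ~3.0e-18 m and ~1.8e-17 m: quintillionths of a meter, as stated.
```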

We’d save a lot of money if we spread our electricity consumption out more evenly over each day. The big spikes in demand each morning and evening, when people wake up and get home from work, respectively, as well as surges caused by unexpected events at other times, strain the electric grid and force it to use expensive energy sources in those circumstances. “Virtual batteries” could be part of the solution.
https://www.youtube.com/watch?v=Oke45rH4QgU

Worldwide, 463 million people age 25-64 have diabetes. If people 65+ are included, then the number could easily exceed half a billion. The vast majority of the afflicted have Type 2 diabetes, which is a preventable disorder that only arises after many years of poor lifestyle choices (overeating, bad diet, lack of exercise).
https://www.thelancet.com/journals/lanhl/article/PIIS2666-7568(21)00089-1/fulltext

Yes, it’s possible to work so hard that you give yourself a heart attack. Or a stroke.
This probably explains part of why men die sooner, as they are much likelier to work extreme amounts (55+ hours a week).
https://www.sciencedirect.com/science/article/pii/S0160412021002208

The FDA just approved the first brain-computer interface medical device, meant to help stroke victims recover use of their paralyzed hands.
https://newatlas.com/medical/first-fda-approved-brain-computer-interface-ipsihand-stroke/

Joe Biden wants to waive patent protections on the new COVID-19 vaccines so factories in other countries can make them without paying royalties. It’s a bad idea.
https://blogs.sciencemag.org/pipeline/archives/2021/05/06/waiving-ip

The September prediction from the WHO was right: “We are really not expecting to see widespread vaccination until the middle of next year.”
Only within the last week did the U.S. reach the milestone of vaccinating half of its population against COVID-19. Some poorer countries have only vaccinated 1% of their people so far.
https://medicalxpress.com/news/2020-09-widespread-coronavirus-vaccination-mid-.html

Bill Gates predicts the COVID-19 pandemic will be over by the end of 2022.
https://www.reuters.com/article/us-health-coronavirus-billgates-idUSKBN2BH0SX

From January: “And, very sadly, if you do the math, we could be looking at 800,000 to 1 million dead Americans by the beginning of May.”
Actual U.S. COVID-19 death toll as of May 4: 574,000
https://www.advocate.com/commentary/2021/1/18/how-bad-will-covid-19-get

And this prediction from the fine minds at J.P. Morgan was also wrong.
https://www.barrons.com/articles/the-pandemic-could-be-effectively-over-by-april-j-p-morgan-says-heres-why-51613163599

COVID-19 is pounding India. They can’t dispose of the dead bodies fast enough.
https://www.bbc.com/news/world-asia-india-56897970

COVID-19 has killed between 7 and 13 million people worldwide.
https://www.economist.com/briefing/2021/05/15/there-have-been-7m-13m-excess-deaths-worldwide-during-the-pandemic

Calls are growing to investigate whether COVID-19 was manmade.
https://www.foxnews.com/politics/fauci-not-convinced-covid-19-developed-naturally
https://science.sciencemag.org/content/372/6543/694.1
https://foreignpolicy.com/2021/05/27/china-lab-wuhan-coronavirus-covid-biden/

The Kurzweil predictions that don’t matter

Time for…another Ray Kurzweil analysis. It’s funny how I keep swearing to myself I won’t write another one about him, but end up doing so anyway. I’m sorry. For sure, there won’t be anything more about him until next year or later.

In my last blog post, “Will Kurzweil’s 2019 be our 2029?”, I mentioned that several of his predictions for 2019 were wrong, and would probably still be wrong in 2029, but that it didn’t matter since they pertained to inconsequential things. Rather than leave all two of you who read my blog hanging in suspense, I’d like to go over those and explain my thoughts. As before, these predictions are taken from Kurzweil’s 1998 book The Age of Spiritual Machines.

The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.

To be clear, by 2030, standalone AR and VR eyewear will have the levels of capability Kurzweil envisioned for 2019. However, it’s unknowable whether retinal projection will be the dominant technology they will use to show images to the people wearing them. Other technologies, like lenses made of transparent LCD screens or images beamed onto semitransparent lenses, could end up dominant. Whichever gains the most traction by 2030 is irrelevant to the consumer–they will only care about how smooth and convincing the digital images displayed in front of them look.

“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication.”

The first sentence was wrong in 2019 and still will be in 2029. As old-fashioned as they may be, keyboards have many advantages over other modes of interacting with computers:

  • Keyboards are physically large and have big buttons, meaning you’re less likely to push the wrong one than you are on a tiny smartphone keyboard.
  • They have many keys corresponding not only to letters and numbers, but to functions, meaning you can easily use a basic keyboard to input a vast range of text and commands into a computer. Imagine how inefficient it would be to input a long URL into a browser toolbar or to write computer code if you had to open all kinds of side menus on your input device to find and select every written symbol, including colons, semicolons, and dollar symbols. Worse, imagine doing that using “hand gestures” and “facial expressions.”
  • Keyboards are also very ergonomic to use and require nothing more than tiny finger movements and flexions of the wrists. By contrast, inputting characters and commands into your computer through some combination of body movements, gestures and facial expressions that it would see would take you much more time and physical energy (compare the amount of energy it takes you to push the “A” button on your keyboard with how much it takes to raise both of your arms up and link your hands over your head with your elbows bent to turn your body into something resembling an “A” shape). And you’d have to go to extra trouble to make sure the device’s camera had a full view of your body and that you were properly lit. This is why something like the gestural interface Tom Cruise used in Minority Report will never become common.

Furthermore, two-way voice communication with computers has its place, but won’t replace keyboards. First, talking with machines sacrifices your privacy and annoys the people within earshot of you. Imagine a world where keyboards are banned and people must issue voice commands to their computers when searching for pornography, and where workers in open-concept offices have to dictate all their emails. Second, verbal communication works poorly in noisy environments since you and your machine have problems understanding each other. It’s simply not a substitute for using keyboards.

Even verbal communication plus gestures, facial expressions, and anything else won’t be enough to render keyboards obsolete. If you want to get any kind of serious work done, you need one.

This will still hold true in 2029, and keyboards will not be “rare” then, or even in 2079. Kurzweil will still be wrong. But so what? The keyboard won’t be “blocking” any other technology, and given its advantages over other modes of data and command input, its continued use is unavoidable and necessary.

Let me conclude this section by saying I can only imagine keyboards becoming obsolete in exotic future scenarios. For example, in a space ship crewed entirely by robots, keyboards, mice, and even display screens might be absent since the robots and the ship would be able to directly communicate through electronic signals. If the captain wanted to turn left, it would think the command, and the ship’s sensors would receive it and respond. And in his mind’s eye, the captain would see live footage from external ship cameras.

“Cables have largely disappeared.”

As I wrote in the analysis, it will still be common for control devices and peripheral devices to have data cables in 2029 due to better information security and slightly lower costs. Moreover, in many cases there will be no functional disadvantage to having corded devices, as they never need to leave the vicinity of whatever they are connected to. Consider: if you have a PC at your work desk, why would you ever need to move your keyboard to anyplace other than the desk’s surface? To use your computer, you need to be close to it and the monitor, which means the keyboard has to stay close to them as well. In such a case, a keyboard with a standard, 5-foot-long cord would serve you just as well as a wireless keyboard that could connect to your PC from a mile away.

“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”

This was badly wrong in 2019, and in 2029, the “nonhuman” portion of all computation on Earth will probably be no higher than 1%, so it will still be wrong. But so what? Comparisons of how much raw thinking humans and machines do are misleading since they are “apples to oranges,” and they provide almost no useful insights into the overall state of computer technology or automation.

When it comes to computation, quantity does not equal quality. Consider this example: I estimated that, in 2019, all the world’s computing devices combined did a total of 3.5794 x 10^21 flops of computation. Now, if someone invented an AGI that was running on a supercomputer that was, say, ten times as powerful as a human brain, the AGI would be capable of 200 petaflops, or 2.0 x 10^17 flops. Looking at the raw figures for global computation, it would seem like the addition of that AI changed nothing: the one supercomputer it was running on wouldn’t even make the global computation count of 3.5794 x 10^21 flops increase by one significant digit! However, anyone who has done the slightest thinking about AI’s consequences knows that one machine would be revolutionary, able to divide its attention in many directions at once, and would have inaugurated a new era of much faster economic, scientific, and technological growth that would have been felt by people across the world.
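
To make the “quantity does not equal quality” point concrete, here is that arithmetic as a minimal Python sketch, using my global estimate and the hypothetical AGI machine above:

```python
# One hypothetical AGI supercomputer vs. all of Earth's 2019 computation.
global_flops = 3.5794e21       # my estimate for all computing devices combined
brain_flops = 2.0e16           # assumed human-brain equivalent (20 petaflops)
agi_flops = 10 * brain_flops   # a machine ten times as powerful as one brain

share = agi_flops / global_flops
print(f"AGI machine: {agi_flops:.1e} flops = {share:.4%} of global computation")
# ~0.0056% -- statistically invisible, yet revolutionary in its effects.
```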

“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”

Rotating computer memories–also called “hard disk drives” (HDD)–were still common in 2019, and will still be in 2029, though less so. This is because HDDs have important advantages over their main competitor, solid-state drives (SSDs), often called “flash drives,” and those advantages will not disappear over this decade.

HDDs are cheaper on a per-bit basis and are less likely to suffer data corruption or data loss. SSDs, on the other hand, are more physically robust since they lack moving parts, and allow much faster access to the data stored in them since they don’t contain disks that have to “spin up.” Given the tradeoffs, in 2029, HDDs will still be widely used in data centers and electronic archive facilities, where they will store important data which needs to be preserved for long periods, but which isn’t so crucial that users need instantaneous access to it. Small consumer electronic devices, including smartphones, smart watches, and other wearables, will continue to exclusively have SSD memory, and finding newly manufactured laptops with anything but SSDs might be impossible. Only a small fraction of desktop computers will have HDDs by then.

So rotating memories will still be around in 2029, meaning the prediction will still be wrong since it contains the absolute term “fully replaced.” But again, so what? All of the data that average people need to see on a day-to-day basis will be stored on SSDs, ensuring they will have instantaneous access to it. The cost of HDD and SSD memory will have continued its long-running, exponential improvement, making both trivially cheap by 2029 (it was already so cheap in 2019 that even poor people could buy enough to meet all their reasonable personal needs). The HDDs that still exist will be out of sight, either in server farms or in big, immobile boxes that are on or under people’s work desks. The failure of the prediction will have no noticeable impact, and if you could teleport to a parallel universe where HDDs didn’t exist anymore, nothing about day-to-day life would seem more futuristic.

“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”

The cameras that make use of quantum effects and reflected light never got good enough to exit the lab, and it’s an open question whether they will be commercialized by 2029. I doubt it, but don’t see why it should matter. Billions of cameras–most of them tiny enough to fit on smartphones–already are practically everywhere and will be even more ubiquitous in 2029. It’s not relevant whether they make use of exotic principles to capture video and still images or whether they work through conventional methods involving the capture of visible light. The important aspects of the prediction–that cameras will be very small and all over the place–were right in 2019 and will be even more right in 2029.

“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.”

This prediction was technologically possible in 2019, but didn’t come to pass because many people showed a (perhaps unpredictable) preference for paper books and documents. It turns out there’s something appealing about the tactile experience of leafing through books and magazines and being able to carry them around that PDFs and tablet computers can’t duplicate. Personal computing devices had to become widely available before we could realize old-fashioned books and sheets of paper had some advantages.

Come 2029, paper books, magazines, journals, newspapers, memos, and letters will still be commonly encountered in everyday life, so the prediction will still be wrong. Fortunately, the persistence of paper isn’t a significant stumbling block in any way since all important paper documents from the pre-computer era have been scanned and are available over the internet for free or at low cost, and all important new written documents originate in electronic format.

For what it’s worth, I’ve predicted that, in the 2030s, books and computer tablets will merge into a single type of device that could be thought of as a “digital book.” It will be a book with several hundred pages made of thin, flexible digital displays (perhaps using ultra-energy efficient e-ink) instead of paper. At the tap of a button, the text on all of the pages will instantly change to display whichever book the user wanted to read at that moment. They could also be used as notebooks in which the user could hand write or draw things with a stylus, which would be saved as image or text files. The devices will fuse the tactile appeal of old-fashioned books with the content flexibility of tablet computers.

“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”

3D volumetric displays didn’t advance nearly as fast as Kurzweil predicted, so this was wrong in 2019, and the technology doesn’t look poised for a breakthrough, so it will still be wrong in 2029. However, it doesn’t matter since VR goggles and probably AR glasses as well will let people have the same holographic experiences. By 2029, you will be able to put on eyewear that displays lifelike, moving images of other people, giving the false impression they are around you. Among other things, this technology will be used for video calls.

“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”

The haptic/kinetic/touch aspect of virtual reality is very underdeveloped compared to its audio and visual aspects, and will still lag far behind in 2029, but little will be lost thanks to this. After all, if you’re playing a VR game, do you want to be able to feel bullets hitting you, or to feel the extreme temperatures of whatever exotic virtual environment you’re in? Even if we had skintight catsuits that could replicate physical sensations accurately, would we want to wear them? Slipping on a VR headset that covers your eyes and ears is fast and easy–and will become even more so as the devices miniaturize thanks to better technology–but taking off all your clothes to put on a VR catsuit is much more trouble.

A VR headset is made of smooth metal and high-impact plastic, making it easy to clean with a damp rag. By contrast, a catsuit made of stretchy material and studded with hard servos, sensors and other little machines would soak up sweat, dirt and odors, and couldn’t be thrown in the washing machine or dryer like a regular garment since its parts would get damaged if banged around inside. It’s impractical.

“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”

I doubt that VR body suits and VR “booths” will be able to satisfactorily replicate anything but a narrow range of sex acts. Given the extreme importance of tactile stimulation, the setup would have to include a more expensive catsuit. There would also need to be devices for the genitals, adding more costs, and possibly other contraptions to apply various types of physical force (thrust, pull, resistance, etc.) to the user. Cleanup would be even more of a hassle. [Shakes head]

The fundamental limits to this technology are such that I don’t think it will ever become “popular” since VR sex will fall so far short of the real thing. That said, I believe another technology, androids, will be able to someday “do it” as well as humans. Once they can, androids will become some of the most popular consumer devices of all time, with major repercussions for dating, marriage, gender relations, and laws relating to sex and prostitution. They would let any person, regardless of social status, looks, or personality, have unlimited amounts of “sex,” which is unheard of in human history. Just don’t expect it until near the end of this century!

“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”

As with replacing all books with PDFs on computer displays, there was no technological barrier to this in 2019, but it didn’t happen because most transactions remained face-to-face, and because people preferred online transactions involving simple button-clicks rather than drawn-out conversations with fake human salesmen. The consumer preferences were not clear when the prediction was made in 1998.

By 2029, the prediction will still be wrong, though it won’t matter, since buying things by simply clicking on buttons and typing a few characters is faster and much less aggravating than doing the same transactions through a “simulated person.” Anyone who has dealt with a robot operator on the phone that laboriously enunciates menu options and obtusely talks over you when you are responding will agree. It would be a step backwards if that technology became more widespread by 2029.

“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”

Sensors and transmitters that could guide cars were never installed along roadways, but it didn’t turn out to be a problem since we found that cars could use GPS and their own onboard sensors to navigate just as well. So the prediction was wrong, and the expensive roadside networks will still not exist in 2029, but it won’t matter.

The second part of the prediction will be half right by 2029, and its failure to be 100% right will be consequential. By then, autonomous cars will be statistically safer than the average human driver and will be in the “human range” of “efficiency,” albeit towards the bottom of the range: they will still be overly cautious, slowing down and even stopping whenever they detect slightly dangerous conditions (e.g. – erratic human driver nearby, pedestrian who looks like they might be about to cross the road illegally, heavy rain, dead leaves blowing across the road surface). In short, they’ll drive like old ladies, which will be annoying at times.

While the technology will be cheaper and more widely accepted, it will still be a luxury feature in 2029 that only a minority of cars in rich countries have. At best, a token number of public roads worldwide will ban human-driven vehicles. Enormous numbers of lives will be lost in accidents, and billions of dollars wasted in traffic jams each year thanks to autonomous car technology not advancing as fast as Kurzweil predicted.

“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”

In 2019, the sports industry had the highest revenues in the entertainment sector, totaling $480 – $620 billion. That year, the VR gaming industry generated a paltry $1.2 billion in revenue, so the prediction was badly wrong for 2019. And even if the latter grows twentyfold over this decade, which I think is plausible, it won’t come close to challenging the dominance of sports.

That said, looking at revenues is kind of arbitrary. The spirit of the prediction, which is that VR gaming will become a very popular and common means of entertainment, will be right by 2029 in rich countries, and it will only get more widespread with time.

“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”

Health monitors are already built into some smartwatches, and will be “widely used” by any reasonable metric by 2029. I don’t think they will be shrunk to the sizes of jewelry like rings and earrings, but that won’t have any real consequences since the watches will be available. No one in 2029 will say “I’m really concerned about my heart problem and want to buy a wearable monitoring device, but my health is not so important that I would want to trouble myself with a watch. However, I’d be OK with a ring.”

Health monitoring devices won’t be built into articles of clothing for the same reasons that other types of computers won’t be built into them: 1) laundering and drying the clothes would be a hassle since water, heat and being banged around would damage their electronic parts and 2) you’d have to remember to always wear your one shirt with the heartbeat monitor sewn into it, regardless of how appropriate it was for the occasion and weather, or how dirty it was from wearing it day after day. It makes much more sense to consolidate all your computing needs into one or two devices that are fully portable and easy to keep clean, like a smartphone and smartwatch, which is why we’ve done that.

Links:

  1. Rotating computer memories (HDDs) are cheaper and more reliable than solid-state memories (SSDs). Those advantages are unlikely to disappear, meaning HDDs will still be around in 2029.
    https://www.computerweekly.com/feature/Spinning-disk-hard-drives-Good-value-for-many-use-cases
  2. Even old-fashioned computer tapes will still be around in 2029, as they’re even better-suited for long-term data storage (called “cold storage”).
    https://www.economist.com/science-and-technology/2020/12/15/magnetic-tape-has-a-surprisingly-promising-future

How Ray Kurzweil’s 2019 predictions are faring

In 1999, Ray Kurzweil, one of the world’s greatest futurists, published a book called The Age of Spiritual Machines. In it, he made the case that artificial intelligence, nanomachines, virtual reality, brain implants, and other technologies would greatly improve during the 21st century, radically altering the world and the human experience. In the final four chapters, titled “2009,” “2019,” “2029,” and “2099,” he made detailed predictions about what the state of key technologies would be in each of those years, and how they would impact everyday life, politics and culture.

Ray Kurzweil receiving a technology award from President Clinton in 1999.

Towards the end of 2009, a number of news columnists, bloggers and even Kurzweil himself weighed in on how accurate his predictions from the eponymous chapter turned out to be. By contrast, no such analysis was done over the past year regarding his 2019 predictions. As such, I’m taking it upon myself to do it.

I started analyzing the accuracy of Kurzweil’s predictions in late 2019 and wanted to publish my full results before the end of that year. However, the task required me to do much more research than I had expected, so I missed that deadline. Really digging into the text of The Age of Spiritual Machines and parsing each sentence made it clear that the number and complexity of the 2019 predictions were greater than a casual reading would suggest. Once I realized how big of a task it would be, I became kind of demoralized and switched to working on easier projects for this blog.

With the end of 2020 on the horizon, I think time is running out to finish this, and I’ve decided to tackle the problem. Except where noted, I will only use sources published before January 1, 2020 to support my conclusions.

“Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”

RIGHT

A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is (also, it doesn’t even need to run on electricity). This means something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer. These kinds of items were ubiquitous in developed countries in 1998 when Ray Kurzweil wrote the book, so his “futuristic” prediction for 2019 could have just as easily applied to the reality of 1998. This is an excellent example of Kurzweil making a prediction that leaves a certain impression on the casual reader (“Kurzweil says computers will be inside EVERY object in 2019!”) that is unsupported by a careful reading of the prediction.

“People routinely use three-dimensional displays built into their glasses or contact lenses. These ‘direct eye’ displays create highly realistic, virtual visual environments overlaying the ‘real’ environment.”

MOSTLY WRONG

The first attempt to introduce augmented reality glasses in the form of Google Glass was probably the most notorious consumer tech failure of the 2010s. To be fair, I think this was because the technology wasn’t ready yet (e.g. – small visual display, low-res images, short battery life, high price), and not because the device concept is fundamentally unsound. The technological hangups that killed Google Glass will of course vanish in the future thanks to factors like Moore’s Law. Newer AR glasses, like Microsoft’s Hololens, are already superior to Google Glass, and given the pace of improvement, I think AR glasses will be ready for another shot at widespread commercialization by the end of the 2020s, but they will not replace smartphones for a variety of reasons (such as the unwillingness of many people to wear glasses, widespread discomfort with the possibility that anyone wearing AR glasses might be filming the people around them, and durability and battery life advantages of smartphones).

Kurzweil’s prediction that contact lenses would have augmented reality capabilities completely failed. A handful of prototypes were made, but never left the lab, and there’s no indication that any tech company is on the cusp of commercializing them. I doubt it will happen until the 2030s.

Pokemon Go is an augmented reality video game, and has been downloaded over 1 billion times.

However, people DO routinely access augmented reality, but through their smartphones and not through eyewear. Pokemon Go was a worldwide hit among video gamers in 2016, and is an augmented reality game where the player uses his smartphone screen to see virtual monsters overlaid across live footage of the real world. Apps that let people change their appearances during live video calls (often called “face filters”), such as by making themselves appear to have cartoon rabbit ears, are also very popular among young people.

So while Kurzweil got augmented reality technology’s form factor wrong, and overestimated how quickly AR eyewear would improve, he was right that ordinary people would routinely use augmented reality.

The augmented reality glasses will also let you experience virtual reality.

WRONG

Augmented reality glasses and virtual reality goggles remain two separate device categories. I think we will someday see eyewear that merges both functions, but it will take decades to invent glasses that are thin and light enough to be worn all day, untethered, but that also have enough processing power and battery life to provide a respectable virtual reality experience. The best we can hope for by the end of the 2020s will be augmented reality glasses that are good enough to achieve ~10% of the market penetration of smartphones, and virtual reality goggles that have shrunk to the size of ski goggles.

Of note is that Kurzweil’s general sentiment that VR would be widespread by 2019 is close to being right. VR gaming made a resurgence in the 2010s thanks to better technology, and looks poised to go mainstream in the 2020s.

The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.

PARTLY RIGHT

The most popular AR glasses of the 2010s, Google Glass, worked by projecting images onto their wearer’s retinas. The more advanced AR glass models that existed at the end of the decade used a mix of methods to display images, none of which has established dominance.

“Magic Leap One”

The “Magic Leap One” AR glasses use the retinal projection technology Kurzweil favored. They are superior to Google Glass since images are displayed to both eyes (Glass only had a projector for the right eye), in higher resolution, and covering a larger fraction of the wearer’s field of view (FOV). Magic Leap One also has advanced sensors that let it map its physical surroundings and movements of its wearer, letting it display images of virtual objects that seem to stay fixed at specific points in space (Kurzweil called this feature “Virtual-reality overlay display”).

Microsoft “Hololens”

Microsoft’s “Hololens” uses a different technology to produce images: the lenses are in fact transparent LCD screens. They display images just like a TV screen or computer monitor would. However, unlike those devices, the Hololens’ LCDs are clear, allowing the wearer to also see the real world in front of them.

The “Vuzix Blade”

The “Vuzix Blade” AR glasses have a small projector that beams images onto the lens in front of the viewer’s right eye. Nothing is directly beamed onto his retina.

It must be emphasized again that, at the end of 2019, none of these or any other AR glasses were in widespread or common use, even in rich countries. They were confined to small numbers of hobbyists, technophiles, and software developers. A Magic Leap One headset cost $2,300 – $3,300 depending on options, and a Hololens was $3,000.

A man wearing HTC Vive virtual reality goggles, with hand controllers.

And as stated, AR glasses and VR goggles remained two different categories of consumer devices in 2019, with very little crossover in capabilities and uses. The top-selling VR goggles were the Oculus Rift and the HTC Vive. Both devices use tiny OLED screens positioned a few inches in front of the wearer’s eyes to display images, and as a result, are much bulkier than any of the aforementioned AR glasses. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.

“[There] are auditory ‘lenses,’ which place high resolution-sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.”

MOSTLY RIGHT

Humans have the natural ability to tell where sounds are coming from in 3D space because we have “binaural hearing”: our brains can calculate the spatial origin of the sound by analyzing the time delay between that sound reaching each of our ears, as well as the difference in volume. For example, if someone standing to your left is speaking, then the sounds of their words will reach your left ear a split second sooner than they reach your right ear, and their voice will also sound louder in your left ear.

By carefully controlling the timing and loudness of sounds that a person hears through their headphones or through a single speaker in front of them, we can take advantage of the binaural hearing process to trick people into thinking that a recording of a voice or some other sound is coming from a certain direction even though nothing is there. Devices that do this are said to be capable of “binaural audio” or “3D audio.” Kurzweil’s invented term “audio lenses” means the same thing.
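
For a sense of how small the timing cue is that binaural audio engines manipulate, here’s a minimal sketch using Woodworth’s spherical-head approximation (the head radius is an assumed average):

```python
# Interaural time difference (ITD) for a sound at angle theta from straight
# ahead, per Woodworth's spherical-head model: ITD = r/c * (theta + sin(theta)).
import math

HEAD_RADIUS_M = 0.0875   # assumed average human head radius
SPEED_OF_SOUND = 343.0   # m/s in air

def itd_seconds(theta_deg: float) -> float:
    theta = math.radians(theta_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta))

for angle_deg in (0, 30, 60, 90):
    print(f"{angle_deg:3d} deg -> {itd_seconds(angle_deg) * 1e6:4.0f} microseconds")
# A source directly to one side (90 deg) arrives ~660 microseconds sooner at the
# near ear -- the tiny delay that "3D audio" devices reproduce digitally.
```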

The Bose Frames sunglasses have small sound speakers built into them, close to the wearer’s ears.

Yes, there are eyeglasses with built-in speakers that play binaural audio. The Bose Frames “smart sunglasses” is the best example. Even though the devices are not common, they are commercially available, priced low enough for most people to afford them ($200), and have gotten good user reviews. Kurzweil gets this one right, and not by an eyerolling technicality as would be the case if only a handful of million-dollar prototype devices existed in a tech lab and barely worked.

The Apple AirPods wireless earbuds are, like most Apple products, status objects like jewelry.

Wireless earbuds are much more popular, and upper-end devices like the SoundPEATS Truengine 2 have impressive binaural audio capabilities. It’s a stretch, but you could argue that branding, and sleek, aesthetically pleasing design qualifies some higher-end wireless earbud models as “jewelry.”

Sound bars have also improved and have respectable binaural surround sound capabilities, though they’re still inferior to traditional TV entertainment system setups where the sound speakers are placed at different points in the room. Sound bars are examples of single-point devices that can trick people into thinking sounds are originating from different points in space, and in spirit, I think they are a type of technology Kurzweil would cite as proof that his prediction was right.

The last part of Kurzweil’s prediction is wrong, since audio implants into the inner ears are still found only in people with hearing problems, which is the same as it was in 1998. More generally, people have shown themselves more reluctant to surgically implant technology in their bodies than Kurzweil seems to have predicted, but they’re happy to externally wear it or to carry it in a pocket.

“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication. “

MOSTLY WRONG

Rumors of the keyboard’s demise have been greatly exaggerated. Consider that, in 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs.

Gartner’s estimates of global personal computer (PC) sales in 2018. The numbers for 2019 will be nearly the same.

The research I’ve done suggests that the typical desktop, laptop, and ultramobile computer has a lifespan of four years. If we accept this, and also assume that the worldwide computer sales figures for 2015, 2016, and 2017 were the same as 2018’s, then it means there are 1.036 billion fully functional desktops, laptops, and ultramobile computers on the planet (about one for every seven people). By extension, that means there are at least 1.036 billion keyboards. No one could reasonably say that Kurzweil’s prediction that keyboards would be “rare” by 2019 is correct.
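
The installed-base arithmetic, as a minimal sketch:

```python
# Rough installed base of keyboard-equipped PCs, from the figures above.
annual_sales = 259e6      # 2018 desktop + laptop + ultramobile sales (Gartner)
lifespan_years = 4        # assumed typical service life
world_population = 7.7e9  # approximate 2019 figure

# Assume 2015-2017 each saw roughly 2018-level sales.
installed_base = annual_sales * lifespan_years
print(f"{installed_base:.3e} machines in service")                      # 1.036e+09
print(f"about one per {world_population / installed_base:.1f} people")  # ~7.4
# And at least that many keyboards.
```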

The second sentence in Kurzweil’s prediction is harder to analyze since the meaning of “interaction with computing” is vague and hence subjective. As I wrote before, a Casio digital watch counts as a computer, so if it’s nighttime and I press one of its buttons to illuminate the display so I can see the time, does that count as an “interaction with computing”? Maybe.

If I swipe my thumb across my smartphone’s screen to unlock the device, does that count as an “interaction with computing” accomplished via a finger gesture? It could be argued so. If I then use my index finger to touch the Facebook icon on my smartphone screen to open the app, and then use a flicking motion of my thumb to scroll down over my News Feed, does that count as two discrete operations in which I used finger gestures to interact with computing?

You see where this is going…

Being able to set the bar that low makes it possible that this part of Kurzweil’s prediction is right, as unsatisfying as that conclusion may be.

Virtual reality game setups, like those offered by Oculus, commonly make use of hand controllers like these, which monitor the locations and movements of the player’s hands and translate them into in-game commands. This is an example of gestural control. Several million people now have advanced VR game systems like this.

Virtual reality gaming makes use of hand-held and hand-worn controllers that monitor the player’s hand positions and finger movements so he can grasp and use objects in the virtual environment, like weapons and steering wheels. Such actions count as interactions with computing. The technology will only get more refined, and I can see them replacing older types of handheld game controllers.

Hand gestures, along with speech, are also the natural means to interface with augmented reality glasses since the devices have tiny surfaces available for physical contact, meaning you can’t fit a keyboard on a sunglass frame. Future AR glasses will have front-facing cameras that watch the wearer’s hands and fingers, allowing them to interact with virtual objects like buttons and computer menus floating in midair, and to issue direct commands to the glasses through specific hand motions. Thus, as AR glasses get more popular in the 2020s, so will the prevalence of this mode of interface with computers.

Users interface with the “Gen 2” Amazon Echo through two-way spoken communication. The device is popular and highly reviewed and only costs $100, putting it within reach of hundreds of millions of households.

“Two-way natural-language spoken communication” is now a common and reliable means of interacting with computers, as anyone with a smart speaker like an Amazon Echo can attest. In fact, virtual assistants like Alexa, Siri, and Cortana can be accessed via any modern smartphone, putting this within reach of billions of people.

The last part of Kurzweil’s prediction, that people would be using “facial expressions” to communicate with their personal devices, is wrong. For what it’s worth, machines are gaining the ability to read human emotions through our facial expressions (including “microexpressions”) and speech. This area of research, called “affective computing,” is still stuck in the lab, but it will doubtless improve and find future commercial applications. Someday, you will be able to convey important information to machines through your facial expressions, tone of voice, and word choice just as you do to other humans now, enlarging your mode of interacting with “computing” to encompass those domains.

“Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Users can model the personality of their intelligent assistants on actual persons, including themselves…”

WRONG

The most widely used computer-based personal assistants–Alexa, Siri, and Cortana–don’t have “personalities” or simulated emotions. They always speak in neutral or slightly upbeat tones. Users can customize some aspects of their speech and responses (i.e. – talking speed, gender, regional accent, language), and Alexa has limited “skill personalization” abilities that allow it to tailor some of its responses to the known preferences of the user interacting with it, but this is too primitive to count as a “personality adjustment” feature.

My research didn’t find any commercially available AI personal assistant that has something resembling a “human personality,” or that is capable of changing that personality. However, given current trends in AI research and natural language understanding, and growing consumer pressure on Silicon Valley to make products that better cater to the needs of nonwhite people, it is likely this will change by the end of this decade.

“Typically, people do not own just one specific ‘personal computer’…”

RIGHT

A 2019 Pew survey showed that 75% of American adults owned at least one desktop or laptop PC. Additionally, 81% of them owned a smartphone and 52% had tablets, and both types of devices have all the key attributes of personal computers (advanced data storing and processing capabilities, audiovisual outputs, accepts user inputs and commands).

The data from that and other late-2010s surveys strongly suggest that most of the Americans who don’t own personal computers are people over age 65, and that the 25% of Americans who don’t own traditional PCs are very likely to be part of the 19% that also lack smartphones, and also part of the 48% without tablets. The statistical evidence plus consistent anecdotal observations of mine lead me to conclude that the “typical person” in the U.S. owned at least two personal computers in late 2019, and that it was atypical to own fewer than that.
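The conclusion that the typical American owned at least two personal computers can be sanity-checked with simple inclusion-exclusion arithmetic on the Pew percentages. Here is a minimal Python sketch; note that the resulting 56% figure is a worst-case lower bound on ownership overlap that I derived myself, not actual survey data:

```python
# Lower bound on ownership overlap via inclusion-exclusion:
# P(A and B) >= P(A) + P(B) - 1 for any two events.
pc, smartphone, tablet = 0.75, 0.81, 0.52  # 2019 Pew ownership rates

# Minimum share of U.S. adults owning BOTH a PC and a smartphone:
both_pc_phone = max(0.0, pc + smartphone - 1.0)  # = 0.56

print(f"At least {both_pc_phone:.0%} of U.S. adults owned both devices.")
# Even under the least favorable overlap, a majority of American adults
# owned two or more personal computers (in the broad sense) in 2019.
```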

“Computing and extremely high-bandwidth communication are embedded everywhere.”

MOSTLY RIGHT

This is another prediction whose wording must be carefully parsed. What does it mean for computing and telecommunications to be “embedded” in an object or location? What counts as “extremely high-bandwidth”? Did Kurzweil mean “everywhere” in the literal sense, including the bottom of the Marianas Trench?

First, thinking about my example, it’s clear that “everywhere” was not meant to be taken literally. The term was a shorthand for “at almost all places that people typically visit” or “inside of enough common objects that the average person is almost always near one.”

Second, as discussed in my analysis of Kurzweil’s first 2019 prediction, a machine that is capable of doing “computing” is of course called a “computer,” and they are much more ubiquitous than most people realize. Pocket calculators, programmable thermostats, and even Casio digital watches count as computers. Even 30-year-old cars have computers inside of them. So yes, “computing” is “embedded ‘everywhere’” because computers are inside of many manmade objects we have in our homes and workplaces, and that we encounter in public spaces.

Of course, scoring that part of Kurzweil’s prediction as being correct leaves us feeling hollow since those devices can’t do the full range of useful things we associate with “computing.” However, as I noted in the previous prediction, 81% of American adults own smartphones, they keep them in their pockets or near their bodies most of the time, and smartphones have all the capabilities of general-purpose PCs. Smartphones are not “embedded” in our bodies or inside of other objects, but given their ubiquity, they might as well be. Kurzweil was right in spirit.

Third, the WiFi and mobile phone networks we use in 2019 are vastly faster at data transmission than the modems that were in use in 1999, when The Age of Spiritual Machines was published. At that time, the commonest way to access the internet was through a 33.6k dial-up modem, which could upload and download data at a maximum speed of 33,600 bits per second (bps), though upload speeds never got as close to that limit as download speeds. 56k modems had been introduced in 1998, but they were still expensive and less common, as were broadband alternatives like cable TV internet.

In 2019, standard internet service packages in the U.S. typically offered WiFi download speeds of 30,000,000 – 70,000,000 bps (my home WiFi speed is 30-40 Mbps, and I don’t have an expensive service plan). Mean U.S. mobile phone internet speeds were 33,880,000 bps for downloads and 9,750,000 bps for uploads. That’s a 1,000 to 2,000-fold speed increase over 1999, and is all the more remarkable since today’s devices can traffic that much data without having to be physically plugged in to anything, whereas the PCs of 1999 had to be plugged into modems. And thanks to the wireless nature of internet data transmissions, “high-bandwidth communication” is available in all but the remotest places in 2019, whereas it was only accessible at fixed-place computer terminals in 1999.
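A quick back-of-the-envelope check of that speedup, using only the figures just cited:

```python
# Bandwidth comparison, 1999 dial-up vs. 2019 wireless (figures from the text).
dialup_1999_bps = 33_600                  # 33.6k modem, best case
wifi_2019_bps = (30_000_000, 70_000_000)  # typical U.S. home WiFi packages
mobile_2019_down_bps = 33_880_000         # mean U.S. mobile download speed

print(f"WiFi speedup: {wifi_2019_bps[0] / dialup_1999_bps:,.0f}x to "
      f"{wifi_2019_bps[1] / dialup_1999_bps:,.0f}x")   # ~893x to ~2,083x
print(f"Mobile speedup: {mobile_2019_down_bps / dialup_1999_bps:,.0f}x")  # ~1,008x
```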

Again, Kurzweil’s use of the term “embedded” is troublesome, since it’s unclear how “high-bandwidth communication” could be embedded in anything. It emanates from and is received by things, and it is accessible in specific places, but it can’t be “embedded.” Given this and the other considerations, I think every part of Kurzweil’s prediction was correct in spirit, but that he was careless with how he worded it, and that it would have been better written as: “Computing and extremely high-bandwidth communication are available and accessible almost everywhere.”

“Cables have largely disappeared.”

MOSTLY RIGHT

Assessing the prediction requires us to deduce which kinds of “cables” Kurzweil was talking about. To my knowledge, he has never been an exponent of wireless power transfer and has never forecast that technology becoming dominant, so it’s safe to say his prediction didn’t pertain to electric cables. Indeed, larger computers like desktop PCs and servers still need to be physically plugged into electrical outlets all the time, and smaller computing devices like smartphones and tablets need to be physically plugged in to routinely recharge their batteries.

That leaves internet cables and data/power cables for peripheral devices like keyboards, mice, joysticks, and printers. On the first count, Kurzweil was clearly right. In 1999, WiFi was a new invention that almost no one had access to, and logging into the internet always meant sitting down at a computer that had some type of data plug connecting it to a wall outlet. Cell phones weren’t able to connect to and exchange data with the internet, except maybe for very limited kinds of data transfers, and it was a pain to use the devices for that. Today, most people access the internet wirelessly.

Wireless keyboards and mice are affordable, but still significantly more expensive than their wired counterparts.

On the second count, Kurzweil’s prediction is only partly right. Wireless keyboards and mice are widespread, affordable, and mature technologies, and even lower-cost printers meant for home use usually come with integrated wireless networking capabilities, allowing people in the house to remotely send document files to the devices to be printed. However, wireless keyboards and mice don’t seem poised to displace their wired predecessors, nor would it even be fair to say that the older devices are obsolete. Wired keyboards and mice are cheaper (they are still included in the box whenever you buy a new PC), easier to use since users don’t have to change their batteries, and far less vulnerable to hacking. Also, though they’re “lower tech,” wired keyboards and mice impose no handicaps on users when they are part of a traditional desktop PC setup. Wireless keyboards and mice are only helpful when the user is trying to control a display that is relatively far away, as would be the case if the person were using their living room television as a computer monitor, or if a group of office workers were viewing content on a large screen in a conference room and one of them needed to control it or make complex inputs.

No one has found this subject interesting enough to compile statistics on the percentages of computer users who own wired vs. wireless keyboards and mice, but my own observation is that the older devices are still dominant.

And though average computer printers in 2019 have WiFi capabilities, the small “complexity bar” to setting up and using the WiFi capability makes me suspect that most people are still using a computer that is physically plugged into their printer to control the latter. These data cables could disappear if we wanted them to, but I don’t think they have.

This means that Kurzweil’s prediction that cables for peripheral computer devices would have “largely disappeared” by the end of 2019 was wrong. For what it’s worth, the part that he got right vastly outweighs the part he got wrong: The rise of wireless internet access has revolutionized the world by giving ordinary people access to information, services and communication at all but the remotest places. Unshackling people from computer terminals and letting them access the internet from almost anywhere has been extremely empowering, and has spawned wholly new business models and types of games. On the other hand, the world’s failure to fully or even mostly dispense with wired computer peripheral devices has been almost inconsequential. I’m typing this on a wired keyboard and don’t see any way that a more advanced, wireless keyboard would help me.

“The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” [Or 20 petaflops]

WRONG

Graphics cards provide the most calculations per second at the lowest cost of any type of computer processor. The NVIDIA GeForce RTX 2080 Ti is one of the fastest processors available to ordinary people in 2019. In “overclocked” mode, where it is operating as fast as possible, it does 16,487 billion calculations per second (“flops”).

A GeForce RTX 2080 Ti retails for $1,100 and up, but let’s be a little generous to Kurzweil and assume we’re able to get them for $1,000 each.

$4,000 in 1999 dollars equals $6,164 in 2019 dollars. That means today, we can buy 6.164 GeForce RTX 2080 graphics cards for the amount of money Kurzweil specified.

6.164 cards x 16,487 billion calculations per second per card = 101,625 billion calculations per second for the whole rig.

This computational cost-performance level is two orders of magnitude worse than Kurzweil predicted.
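For anyone who wants to check the arithmetic, here is a short Python sketch of the cost-performance calculation above (all figures are from the preceding paragraphs; the $1,000 card price is the generous assumption already noted):

```python
# GPU cost-performance vs. Kurzweil's 2019 prediction.
target_flops = 20e15        # predicted: 20 petaflops for $4,000 (1999 dollars)
budget_usd = 6_164          # $4,000 in 1999 dollars, inflation-adjusted to 2019
card_price_usd = 1_000      # generous price for one RTX 2080 Ti
card_flops = 16_487e9       # 16,487 billion calculations per second, overclocked

cards = budget_usd / card_price_usd      # 6.164 cards
rig_flops = cards * card_flops           # ~1.02e14 flops (101,625 billion)
shortfall = target_flops / rig_flops     # ~197x, i.e. about two orders of magnitude

print(f"{cards:.3f} cards -> {rig_flops:.3e} flops; shortfall: {shortfall:.0f}x")
```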

The SuperMUC-NG supercomputer fills a large room and is as powerful as one human brain.

Additionally, according to Top500.org, a website that keeps a running list of the world’s best supercomputers and their performance levels, the “Leibniz Rechenzentrum SuperMUC-NG” is the ninth fastest computer in the world and the fastest in Germany, and straddles Kurzweil’s line since it runs at 19.4 petaflops or 26.8 petaflops depending on method of measurement (“Rmax” or “Rpeak”). A press release said: “The total cost of the project sums up to 96 Million Euro [about $105 million] for 6 years including electricity, maintenance and personnel.” That’s about four orders of magnitude worse than Kurzweil predicted.

I guess the good news is that at least we finally do have computers that have the same (or slightly more) processing power as a single, average, human brain, even if the computers cost tens of millions of dollars apiece.

“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”

WRONG

Kurzweil explains his calculations in the “Notes” section in the back of the book. He first multiplies the computation performed by one human brain by the estimated number of humans who will be alive in 2019 to get the “total computing capacity of the human species.” Confusingly, his math assumes one human brain does 10 petaflops, whereas in his preceding prediction he estimates it is 20 petaflops. He also assumed 10 billion people would be alive in 2019, but the figure fell mercifully short and was ONLY 7.7 billion by the end of the year.

Plugging in the correct figure, we get (7.7 x 10^9 humans) x (10^16 flops per brain) = 7.7 x 10^25 flops = the actual total computing capacity of all human brains in 2019.

Determining the total computing capacity of all computers in existence in 2019 can only really be guessed at. Kurzweil estimated that at least 1 billion machines would exist in 2019, and he was right. Gartner estimated that 261 million PCs (which includes desktop PCs, notebook computers [seems to include laptops], and “ultramobile premiums”) were sold globally in 2019. The figures for the preceding three years were 260 million (2018), 263 million (2017), and 270 million (2016). Assuming that a newly purchased personal computer survives for four years before being fatally damaged or thrown out, we can estimate that there were 1.05 billion of the machines in the world at the end of 2019.
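A quick sketch of that install-base estimate, using Gartner’s sales figures and the four-year-lifespan assumption:

```python
# Estimate of the world's PC install base at the end of 2019, assuming each
# machine survives four years (sales figures from Gartner, in millions).
yearly_pc_sales = {2016: 270, 2017: 263, 2018: 260, 2019: 261}

install_base = sum(yearly_pc_sales.values())
print(f"Estimated PCs in use, end of 2019: {install_base} million")
# -> 1,054 million, i.e. roughly 1.05 billion machines
```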

However, Kurzweil also assumed that the average computer in 2019 would be as powerful as a human brain, and thus capable of 10 petaflops, but reality fell far short of the mark. As I revealed in my analysis of the preceding prediction, a 10-petaflop computer setup would cost somewhere between $606,543 (in GeForce RTX 2080 Ti graphics cards) and $52.5 million (for half a Leibniz Rechenzentrum SuperMUC-NG supercomputer). None of the people who own the world’s 1.05 billion personal computers spent anywhere near that much money, and their machines are far less powerful than human brains.

Let’s generously assume that all of the world’s 1.05 billion PCs are higher-end (for 2019) desktop computers that cost $900 – $1,200. Everyone’s machine has an Intel Core i7, 8th Generation processor, which offers speeds of a measly 361.3 gigaflops (3.613 x 10^11 flops). A 10-petaflop human brain is 27,678 times faster!

Plugging in the computer figures, we get (1.05 x 10^9 personal computers) x (3.613 x 10^11 flops) = 3.794 x 10^20 flops = the total computing capacity of all personal computers in 2019. That’s five orders of magnitude short. The reality of 2019 computing definitely fell wide of Kurzweil’s expectations.

What if we add the computing power of all the world’s smartphones to the picture? Approximately 3.2 billion people owned a smartphone in 2019. Let’s assume all the devices are higher-end (for 2019) iPhone XRs, which everyone bought new for at least $500. The iPhone XR has an A12 Bionic processor, and my research indicates it is capable of 700 – 1,000 gigaflop maximum speeds. Let’s take the higher-end estimate and do the math.

3.2 billion smartphones x 10^12 flops = 3.2 x 10^21 flops = the total computing capacity of all smartphones in 2019.

Adding things up, pretty much all of the world’s personal computing devices (desktops, laptops, smartphones, netbooks) only produce 3.5794 x 10^21 flops of computation. That’s still four orders of magnitude short of what Kurzweil predicted. Even if we assume that my calculations were too conservative, and we add in commercial computers (e.g., servers and supercomputers), and find that the real amount of artificial computation is ten times higher than I thought, at 3.5794 x 10^22 flops, this would still only be equivalent to 1/2000th, or 0.05%, of the total computing capacity of all human brains (7.7 x 10^25 flops). Thus, Kurzweil’s prediction that it would be 10% by 2019 was very wrong.
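Putting the whole human-vs.-machine comparison in one place, here is a minimal Python sketch reproducing the numbers above:

```python
# Total computing capacity: all human brains vs. all personal devices, 2019.
human_brain_flops = 1e16    # Kurzweil's 10-petaflop-per-brain figure (his notes)
humans = 7.7e9              # actual 2019 world population
human_total = humans * human_brain_flops         # 7.7e25 flops

pcs, pc_flops = 1.05e9, 3.613e11       # install base x Core i7 8th-gen speed
phones, phone_flops = 3.2e9, 1e12      # smartphone owners x ~1,000 gigaflops

machine_total = pcs * pc_flops + phones * phone_flops   # ~3.58e21 flops

share = machine_total / (human_total + machine_total)
print(f"Nonhuman share of total computation: {share:.5%}")
# -> ~0.005%, nowhere near the predicted 10%. Even a tenfold-higher machine
# estimate only reaches ~0.05%.
```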

“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”

WRONG

For those who don’t know much about computers, the prediction says that rotating disk hard drives will be replaced with solid-state hard drives that don’t rotate. A thumbdrive has a solid-state hard drive, as do all smartphones and tablet computers.

I gauged the accuracy of this prediction through a highly sophisticated and ingenious method: I went to the nearest Wal-Mart and looked at the computers they had for sale. Two of the mid-priced desktop PCs had rotating disk hard drives, and they also had DVD disc drives, which was surprising, and which probably makes the “other electromechanical computing devices” part of the prediction false.

The HP Pavilion 590-p0033w has a rotating hard disk drive, indicated by the “7200 RPM” (revolutions per minute) speed figure on the front of this box. It also says it has a “DVD-Writer.” This is a newly manufactured machine, and at $499, is a mid-ranged desktop.
The HP Slim Desktop 290-p0043w also has a rotating hard disk drive, with a 7200 RPM speed.
And before anyone says “Well, only the clunky, old-fashioned desktops still have rotating disk drives!” check out this low-end (but newly manufactured) laptop I also found at Wal-Mart. The HP 15-bs212wm has a rotating hard disk drive and a DVD drive.

If the world’s biggest brick-and-mortar retailer is still selling brand new computers with rotating hard disk drives and rotating DVD disc drives, then it can’t be said that solid state memory storage has “fully replaced” the older technology.

“Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.”

MOSTLY WRONG

Many solid-state computer memory chips, such as common thumbdrives and MicroSD cards, have 3D circuitry, and it is accurate to call them “prevalent.” However, 3D circuitry has not found routine use in computer processors thanks to unsolved problems with high manufacturing costs, unacceptably high defect rates, and overheating.

An internal diagram of a common MicroSD card, which has the simple job of storing data. It has about 18 layers. Memory storage chips are less sensitive to manufacturing defects since they have redundancy.
An exploded diagram of Intel’s upcoming “Lakefield” processor, which has the complex job of storing and processing data. It has four layers, and is much more technically challenging to make than a 3D memory chip.

In late 2018, Intel claimed it had overcome those problems thanks to a proprietary chip manufacturing process, and that it would start selling the resulting “Lakefield” line of processors soon. These processors have four vertically stacked layers, so they meet the requirement for being “3D.” Intel hasn’t sold any yet, and it remains to be seen whether they will be commercially successful.

Silicon is still the dominant computer chip substrate, and carbon-based nanotubes haven’t been incorporated into chips because Intel and AMD couldn’t figure out how to cheaply and reliably fashion them into chip features. Nanotube computers are still experimental devices confined to labs, and they are grossly inferior to traditional silicon-based computers when it comes to doing useful tasks. Nanotube computer chips that are also 3D will not be practical anytime soon.

It’s clear that, in 1999, Kurzweil simply overestimated how much computer hardware would improve over the next 20 years.

“The majority of ‘computes’ of computers are now devoted to massively parallel neural nets and genetic algorithms.”

UNCLEAR

Assessing this prediction is hard because it’s unclear what the term “computes” means. It is probably shorthand for “compute cycles,” which is a term that describes the sequence of steps to fetch a CPU instruction, decode it, access any operands, perform the operation, and write back any result. It is a process that is more complex than doing a calculation, but that is still very basic. (I imagine that computer scientists are the only people who know, offhand, what “compute cycle” means.)

Assuming “computes” means “compute cycles,” I have no idea how to quantify the number of compute cycles that happened, worldwide, in 2019. It’s an even bigger mystery to me how to determine which of those compute cycles were “devoted to massively parallel neural nets and genetic algorithms.” Kurzweil doesn’t describe a methodology that I can copy.

Also, what counts as a “massively parallel neural net”? How many processor cores does a neural net need to have to be “massively parallel”? What are some examples of non-massively parallel neural nets? Again, an ambiguity in the wording of the prediction frustrates an analysis. I’d love to see Kurzweil assess the accuracy of this prediction himself and explain his answer.

“Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets.”

PARTLY RIGHT

The use of the ambiguous adjective “significant” gives Kurzweil an escape hatch for the first part of this prediction. Since 1999, brain scanning technology has improved, and the body of scientific literature about how brain activity correlates with brain function has grown. Additionally, much has been learned by studying the brain at a macro-level rather than at a cellular level. For example, in a 2019 experiment, scientists were able to accurately reconstruct the words a person was speaking by analyzing data from the person’s brain implant, which was positioned over their auditory cortex. Earlier experiments showed that brain-computer-interface “hats” could do the same, albeit with less accuracy. It’s fair to say that these and other brain-scanning studies represent “significant progress” in understanding how parts of the human brain work, and that the machines were gathering data at the level of “brain regions” rather than at the finer level of individual brain cells.

Yet in spite of many tantalizing experimental results like those, an understanding of how the brain produces cognition has remained frustratingly elusive, and we have not extracted any new algorithms for intelligence from the human brain in the last 20 years that we’ve been able to incorporate into software to make machines smarter. The recent advances in deep learning and neural network computers–exemplified by machines like AlphaZero–use algorithms invented in the 1980s or earlier, just running on much faster computer hardware (specifically, on graphics processing units originally developed for video games).

If anything, since 1999, researchers who studied the human brain to gain insights that would let them build artificial intelligences have come to realize how much more complicated the brain is than they first suspected, and how much harder a problem it will be to solve. We might have to accurately model the brain down to the intracellular level (e.g., simulating not just neurons but also their surface receptors and ion channels) to finally grasp how it works and produces intelligent thought. Considering that the best we have done up to this point is mapping the connections of a fruit fly brain, and that a human brain is 600,000 times bigger, we won’t have detailed human brain simulations for many decades.

“It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.”

RIGHT

This prediction is right, but it’s not noteworthy since it merely re-states things that were widely accepted and understood to be true when the book was published in 1999. It’s akin to predicting that “A thing we think is true today will still be considered true in 20 years.”

The prediction’s first statement is an odd one to make since it implies that there was ever serious debate among brain scientists and geneticists over whether the human genome encoded every detail of how the human brain is wired. As Kurzweil points out earlier in the book, the human genome is only about 3 billion base-pairs long, and the genetic information it contains could be as low as 23 megabytes, but a developed human brain has 100 billion neurons and 10^15 connections (synapses) between those neurons. Even if Kurzweil is underestimating the amount of information the human genome stores by several orders of magnitude, it clearly isn’t big enough to contain instructions for every aspect of brain wiring, and therefore, it must merely lay down more general rules for brain development.
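To make that information mismatch concrete, here is a rough Python sketch of the raw numbers; the one-byte-per-synapse figure is purely my own illustrative assumption:

```python
# Why the genome can't hard-code the brain's wiring: a raw information count.
base_pairs = 3e9            # length of the human genome
bits_per_base = 2           # four possible bases -> 2 bits each
genome_mb = base_pairs * bits_per_base / 8 / 1e6    # ~750 MB uncompressed

synapses = 1e15             # connections in a developed human brain
needed_mb = synapses * 1 / 1e6   # assume just 1 byte of wiring data per synapse

print(f"Genome: ~{genome_mb:.0f} MB raw; wiring spec: ~{needed_mb:.1e} MB")
# The genome is at least six orders of magnitude too small to specify every
# connection, so it can only encode general rules for brain development.
```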

I also don’t understand why Kurzweil wrote the second part of the statement. It’s commonly recognized that part of childhood brain development involves the rapid pruning of interneuronal connections that, based on interactions with the child’s environment, prove less useful, and the strengthening of connections that prove more useful. It would be apt to describe this as “a rapid evolutionary process” since the child’s brain is rewiring to adapt the child to its surroundings. This mechanism of strengthening brain connection pathways that are rewarded or frequently used, and weakening pathways that result in some kind of misfortune or that are seldom used, continues until the end of a person’s life (though it gets less effective with age). This paradigm was “recognized” in 1999 and has never been challenged.

Machine-based neural nets are, in a very general way, structured like the human brain, they also rewire themselves in response to stimuli, and some of them use genetic algorithms to guide the rewiring process (see this article for more info: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414). However, all of this was also true in 1999.

“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”

WRONG

Devices that harness the principle of quantum entanglement to create images of distant objects do exist and are better than devices from 1999, but they aren’t good enough to exit the R&D labs. They also have not been shrunk to pinhead sizes. Kurzweil overestimated how fast this technology would develop.

Virtually all cameras still have lenses, and still operate by the old method of focusing incoming light onto a physical medium that captures the patterns and colors of that light to form a stored image. The physical medium used to be film, but now it is a digital image sensor.

A teardown of a Samsung Galaxy S10 smartphone reveals its three digital cameras, which produce very high-quality photos and videos. Comparing them to the tweezers and human fingers, it’s clear they are only as big as small coins.

Digital cameras were expensive, clunky, and could only take low-quality images in 1999, so most people didn’t think they were worth buying. Today, all of those deficiencies have been corrected, and a typical digital camera sensor plus its integrated lens is the size of a small coin. As a result, the devices are very widespread: 3.2 billion people owned a smartphone in 2019, and all of them probably had integral digital cameras. Laptops and tablet computers also typically have integral cameras. Small standalone devices, like pocket cameras, webcams, car dashcams, and home security doorbell cameras, are also cheap and very common. And as any perusal of YouTube.com will attest, people are using their cameras to record events of all kinds, all the time, and are sharing them with the world.

This prediction stands out as one that was wrong in specifics, but kind of right in spirit. Yes, since 1999, cameras have gotten much smaller, cheaper, and higher-quality, and as a result, they are “everywhere” in the figurative sense, with major consequences (good and bad) for the world. Unfortunately, Kurzweil needlessly stuck his neck out by saying that the cameras would use an exotic new technology, and that they would be “pinhead-sized” (he hurt himself the same way by saying that the augmented reality glasses of 2019 would specifically use retinal projection). For those reasons, his prediction must be judged as “wrong.”

“Autonomous nanoengineered machines can control their own mobility and include significant computational engines. These microscopic machines are beginning to be applied to commercial applications, particularly in manufacturing and process control, but are not yet in the mainstream.”

WRONG

A state-of-the-art microscopic machine invented in 2019 can move around in water by twirling its four “flippers.”

While there has been significant progress in nano- and micromachine technology since 1999 (the 2016 Nobel Prize in Chemistry was awarded to scientists who had invented nanomachines), the devices have not gotten nearly as advanced as Kurzweil predicted. Some microscopic machines can move around, but the movement is guided externally rather than autonomously. For example, turtle-like micromachines invented by Dr. Marc Miskin in 2019 can move by twirling their tiny “flippers,” but the motion is powered by shining laser beams on them to expand and contract the metal in the flippers. The micromachines lack their own power packs, lack computers that tell the flippers to move, and therefore aren’t autonomous.

In 2003, UCLA scientists invented “nano-elevators,” which were also capable of movement and still stand as some of the most sophisticated types of nanomachines. However, they also lacked onboard computers and power packs, and were entirely dependent on external control (the addition of acidic or basic liquids to make their molecules change shape, resulting in motion). The nano-elevators were not autonomous.

Similarly, a “nano-car” was built in 2005, and it can drive around a flat plate made of gold. However, the movement is uncontrolled and only happens when an external stimulus–an input of high heat into the system–is applied. The nano-car isn’t autonomous or capable of doing useful work. This and all the other microscopic machines created up to 2019 are just “proof of concept” machines that demonstrate mechanical principles that will someday be incorporated into much more advanced machines.

Significant progress has been made since 1999 building working “molecular motors,” which are an important class of nanomachine, and building other nanomachine subcomponents. However, this work is still in the R&D phase, and we are many years (probably decades) from being able to put it all together to make a microscopic machine that can move around under its own power and will, and perform other operations. The kinds of microscopic machines Kurzweil envisioned don’t exist in 2019, and by extension are not being used for any “commercial applications.”

“Hand-held displays are extremely thin, very high resolution, and weigh only ounces.”

RIGHT

The Samsung Galaxy Tab S5 is, by any reasonable account, extremely thin and very high resolution, and it weighs ounces. New, it costs less than $500, making it affordable for millions of average people. There are even better tablet computers than this.

The tablet computers and smartphones of 2019 meet these criteria. For example, the Samsung Galaxy Tab S5 is only 0.22″ thick, has a resolution high enough that the human eye can’t discern individual pixels at normal viewing distances (3840 x 2160 pixels), and weighs 14 ounces (less than a pound, so its weight is properly expressed in ounces). Tablets like this are of course meant to be held in the hands during use.

The smartphones of 2019 also meet Kurzweil’s criteria.

“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.

MOSTLY WRONG

A careful reading of this prediction makes it clear that Kurzweil believed AR glasses would be commonest way people would read text documents by late 2019. The second most common method would be to read the documents off of smartphones and tablet computers. A distant last place would be to read old-fashioned books with paper pages. (Presumably, reading text off of a laptop or desktop PC monitor was somewhere between the last two.)

The first part of the prediction is badly wrong. At the end of 2019, there were fewer than 1 million sets of AR glasses in use around the world. Even if all of their owners were bibliophiles who spent all their waking hours using their glasses to read documents that were projected in front of them, it would be mathematically impossible for that to constitute the #1 means by which the human race, in aggregate, read written words.

The bar chart shows yearly sales of paper books in the U.S. Sales declined in the early 2010s due to the debut of e-readers and smartphones, but then they recovered a great deal. Books aren’t dead.

Certainly, it is now much more common for people to read documents on handheld displays like smartphones and tablets than at any time in the past, and paper’s dominance of the written medium is declining. Additionally, there are surely millions of Americans who, like me, do the vast majority of their reading (whether for leisure or work) off of electronic devices and computer screens. However, old-fashioned print books, newspapers, magazines, and packets of workplace documents are far from extinct, and it is inaccurate to claim they “are rarely used or accessed,” in both the relative and absolute senses of the statement. As the bar chart above shows, sales of print books were actually slightly higher in 2019 than they were in 2004, which was near the time when The Age of Spiritual Machines was published.

Sales of “graphic paper” have dropped in rich countries over the last 20 years and will also start dropping in poor countries soon.

Finally, sales of “graphic paper”–which is an industry term for paper used in newsprint, magazines, office printer paper, and other common applications–were still high in 2019, even if they were trending down. If 110 million metric tons of graphic paper were sold in 2019, then it can’t be said that “Paper books and documents are rarely used or accessed.” Anecdotally, I will say that, though my office primarily uses all-digital documents, it is still common to use paper documents, and in fact it is sometimes preferable to do so.

Most twentieth-century paper documents of interest have been scanned and are available through the wireless network.”

RIGHT

The wording again makes it impossible to gauge the prediction’s accuracy. What counts as a “paper document”? For sure, we can say it includes bestselling books, newspapers of record, and leading science journals, but what about books that only sold a few thousand copies, small-town newspapers, and third-tier science journals? Are we also counting the mountains of government reports produced and published worldwide in the last century, mostly by obscure agencies and about narrow, bland topics? Equally defensible answers could result in document numbers that are orders of magnitude different.

Also, the term “of interest” provides Kurzweil with an escape hatch because its meaning is subjective. If it were the case that electronic scans of 99% of the books published in the twentieth century were NOT available on the internet in 2019, he could just say “Well, that’s because those books aren’t of interest to modern people” and he could then claim he was right.

It would have been much better if the prediction included a specific metric, like: “By the end of 2019, electronic versions of at least 1 million full-length books written in the twentieth century will be available through the wireless network.” Alas, it doesn’t, and Kurzweil gets this one right on a technicality.

For what it’s worth, I think the prediction was also right in spirit. Millions of books are now available to read online, and that number includes most of the 20th century books that people in 2019 consider important or interesting. One of the biggest repositories of e-books, the “Internet Archive,” has 3.8 million scanned books, and they’re free to view. (Google actually scanned 25 million books with the intent to create something like its own virtual library, but lawsuits from book publishers have put the project into abeyance.)

The New York Times, America’s newspaper of record, has made scans of every one of its issues since its founding in 1851 available online, as have other major newspapers such as the Washington Post. The cursory research I’ve done suggests that all or almost all issues of the biggest American newspapers are now available online, either through company websites or third party sites like newspapers.com.

The U.S. National Archives has scanned over 92 million pages of government documents, and made them available online. Primacy was given to scanning documents that were most requested by researchers and members of the public, so it could easily be the case that most twentieth-century U.S. government paper documents of interest have been scanned. Additionally, in two years the Archives will start requiring all U.S. agencies to submit ONLY digital records, eliminating the very cumbersome middle step of scanning paper, and thenceforth ensuring that government records become available to and easily searchable by the public right away.

The New England Journal of Medicine, the journal Science, and the journal Nature all offer scans of past issues dating back to their foundings in the 1800s. I lack the time to check whether this is also true for other prestigious academic journals, but I strongly suspect it is. All of the seminal papers documenting the significant scientific discoveries of the 20th century are now available online.

Without a doubt, the internet and a lot of diligent people scanning old books and papers have improved the public’s access to written documents and information by orders of magnitude compared to 1998. It truly is a different world.

“Most learning is accomplished using intelligent software-based simulated teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The teachers are viewed more as mentors and counselors than as sources of learning and knowledge.”

WRONG*

The technology behind and popularity of online learning and AI teachers didn’t advance as fast as Kurzweil predicted. At the end of 2019, traditional in-person instruction was far more common than and was widely considered to be superior to online learning, though the latter had niche advantages.

However, shortly after 2019 ended, the COVID-19 pandemic forced most of the world into quarantine in an effort to slow the virus’ spread. Schools, workplaces, and most other places where people usually gathered were shut down, and people the world over were forced to do everyday activities remotely. American schools and universities switched to online classrooms in what might be looked at as the greatest social experiment of the decade. For better or worse, most human teachers were no longer in the local vicinity of their students.

Thus, part of Kurzweil’s prediction came true, a few months late and as an unwelcome emergency measure rather than as a voluntary embrace of a new educational paradigm. Unfortunately, student reactions to online learning have been mostly negative. A 2020 survey found that most college students believed it was harder to absorb knowledge and to learn new skills through online classrooms than it was through in-person instruction. Almost all of them unsurprisingly said that traditional classroom environments were more useful for developing social skills. The survey data I found on the attitudes of high school students showed that most of them considered distance learning to be of inferior quality. Public school teachers and administrators across the country reported higher rates of student absenteeism when schools switched to 100% online instruction, and their support for it measurably dropped as time passed.

The COVID-19 lockdowns have made us confront hard truths about virtual learning. It hasn’t been the unalloyed good that Kurzweil seems to have expected, though technological improvements that make the experience more immersive (e.g., faster internet to reduce lag, virtual reality headsets) will surely solve some of the problems that have come to light.

“Students continue to gather together to exchange ideas and to socialize, although even this gathering is often physically and geographically remote.”

RIGHT

As I described at length, traditional in-person classroom instruction remained the dominant educational paradigm in late 2019, which of course means that students routinely gathered together for learning and socializing. The second part of the prediction is also right, since social media, cheaper and better computing devices and internet service, and videophone apps have made it much more common for students of all ages to study, work, and socialize together virtually than they did in 1998.

“All students use computation. Computation in general is everywhere, so a student’s not having a computer is rarely an issue.”

MOSTLY RIGHT

First, Kurzweil’s use of “all” was clearly figurative and not literal. If pressed on this back in 1998, surely he would have conceded that even in 2019, students living in Amish communities, living under strict parents who were paranoid technophobes, or living in the poorest slums of the poorest or most war-wrecked country would not have access to computing devices that had any relevance to their schooling.

Second, note the use of “computation” and “computer,” which are very broad in meaning. As I wrote earlier, “A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is…something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer.”

With these two caveats in mind, it’s clear that “all students use computation” by default since all people except those in the most deprived environments routinely interact with computing devices. It is also true that “computation in general is everywhere,” and the prediction merely restates this earlier prediction: “Computers are now largely invisible. They are embedded everywhere…” In the most literal sense, most of the prediction is correct.

However, a judgement is harder to make if we consider whether the spirit of the prediction has been fulfilled. In context, the prediction’s use of “computation” and “computer” surely refers to devices that let students efficiently study materials, watch instructional videos, and do complex school assignments like writing essays and completing math equations. These devices would have also required internet access to perform some of those key functions. At least in the U.S., virtually all schools in late 2019 had computer terminals with speedy internet access that students could use for free. A school without either of those would have been considered very unusual. Likewise, almost all of the country’s public libraries have public computer terminals and internet service (and, of course, books), which people can use for their studies and coursework if they don’t have computers or internet in their homes.

At the same time, 17% of students in the U.S. still don’t have computers in their homes and 18% have no internet access or very slow service (there’s probably large overlap between those two groups). Mostly this is because they live in remote areas where it isn’t profitable for telecom companies to install high-speed internet lines, or because they belong to extremely poor or disorganized households. This lack of access to computers and internet service results in measurably worse academic performance, a phenomenon called the “homework gap” or the “digital divide.” With this in mind, it’s questionable whether the prediction’s last claim, that “a student’s not having a computer is rarely an issue,” has come true.

“Most adult human workers spend the majority of their time acquiring new skills and knowledge.”

WRONG

This is so obviously wrong that I don’t need to present any data or studies to support my judgement. With a tiny number of exceptions, employed adults spend most of their time at work using the same skills over and over to do the same set of tasks. Yes, today’s jobs are more knowledge-based and technology-based than ever before, and a greater share of jobs require formal degrees and training certificates than ever, but few professions are so complex or fast-changing that workers need to spend most of their time learning new skills and knowledge to keep up.

In fact, since The Age of Spiritual Machines was published, a backlash against the high costs and necessity of postsecondary education–at least as it is in America–has arisen. Sentiment is growing that the four-year college degree model is wasteful, obsolete for most purposes, and leaves young adults saddled with debts that take years to repay. Sadly, I doubt these critics will succeed in bringing about serious reforms to the system.

If and when we reach the point where a postsecondary degree is needed just to get a respectable entry-level job, and then merely keeping that job or moving up to the next rung on the career ladder requires workers to spend more than half their time learning new skills and knowledge–whether due to competition from machines that keep getting better and taking over jobs or due to the frequent introduction of new technologies that human workers must learn to use–then I predict a large share of humans will become chronically demoralized and will drop out of the workforce. This is a phenomenon I call “job automation escape velocity,” and I intend to discuss it at length in a future blog post.

“Blind persons routinely use eyeglass-mounted reading-navigation systems, which incorporate the new, digitally controlled, high-resolution optical sensors. These systems can read text in the real world, although since most print is now electronic, print-to-speech reading is less of a requirement. The navigation function of these systems, which emerged about ten years ago, is now perfected. These automated reading-navigation assistants communicate to blind users through both speech and tactile indicators. These systems are also widely used by sighted persons since they provide a high-resolution interpretation of the visual world.”

PARTLY RIGHT

As stated previously, AR glasses have not yet been successful on the commercial market and are used by almost no one, blind or sighted. However, there are smartphone apps meant for blind people that use the phone’s camera to scan what is in front of the person, and they have the range of functions Kurzweil described. For example, the “Seeing AI” app can recognize text and read it out loud to the user, and can recognize common objects and familiar people and verbally describe or name them.

Additionally, there are other smartphone apps, such as “BlindSquare,” which use GPS and detailed verbal instructions to guide blind people to destinations. It also describes nearby businesses and points of interest, and can warn users of nearby curbs and stairs.

Apps that are made specifically for blind people are not in wide usage among sighted people.

“Retinal and vision neural implants have emerged but have limitations and are used by only a small percentage of blind persons.”

MOSTLY RIGHT

Retinal implants exist and can restore limited vision to people with certain types of blindness. However, they provide only a very coarse level of sight, are expensive, and require the use of body-worn accessories to collect, process, and transmit visual data to the eye implant itself. The “Argus II” device is the only retinal implant system available in the U.S., and the FDA approved it in 2013. As of this writing, the manufacturer’s website claimed that only 350 blind people worldwide used the systems, which indeed counts as “only a small percentage of blind persons.”

The “Argus II” system consists of an electronic device surgically implanted in a person’s retina which receives vision data from externally-worn camera glasses and a data processing unit.

The meaning of “vision neural implants” is unclear, but could only refer to devices that connect directly to a blind person’s optic nerve or brain vision cortex. While some human medical trials are underway, none of the implants have been approved for general use, nor does that look poised to change.

“Deaf persons routinely read what other people are saying through the deaf persons’ lens displays.”

MOSTLY WRONG

“Lens displays” is clearly referring to those inside augmented reality glasses and AR contact lenses, so the prediction says that a person wearing such eyewear would be able to see speech subtitles across his or her field of vision. While there is at least one model of AR glasses–the Vuzix Blade–that has this capability, almost no one uses them because, as I explored earlier in this review, AR glasses failed on the commercial market. By extension, this means the prediction also failed to come true since it specified that deaf people would “routinely” wear AR glasses by 2019.

A person wearing Vuzix Blade glasses can download the “Zoi Meet” app into the device and have subtitles of spoken words displayed across their field of vision.

However, in the prediction’s defense, deaf people commonly use real-time speech-to-text apps on their smartphones. While not as convenient as having captions displayed across one’s field of view, it still makes communication with non-deaf people who don’t know sign language much easier. Google, Apple, and many other tech companies have fielded high-quality apps of this nature, some of which are free to download. Deaf people can also type words into their smartphones and show them to people who can’t understand sign language, which is easier than the old-fashioned method of writing things down on notepad pages and slips of paper.

Additionally, video chat / video phone technology is widespread and has been a boon to deaf people. By allowing callers to see each other, video calls let deaf people remotely communicate with each other through sign language, facial expressions and body movements, letting them experience levels of nuanced dialog that older text-based messaging systems couldn’t convey. Video chat apps are free or low-cost, and can deliver high-quality streaming video, and the apps can be used even on small devices like smartphones thanks to their forward-facing cameras.

In conclusion, while the specifics of the prediction were wrong, the general sentiment that new technologies, specifically portable devices, would greatly benefit deaf people was right. Smartphones, high-speed internet, and cheap webcams have made deaf people far more empowered in 2019 than they were in 1998.

“There are systems that provide visual and tactile interpretations of other auditory experiences such as music, but there is debate regarding the extent to which these systems provide an experience comparable to that of a hearing person.”

RIGHT

There is an Apple phone app called “BW Dance” meant for the deaf that converts songs into flashing lights and vibrations that are said to approximate the notes of the music. However, there is little information about the app and it isn’t popular, which makes me think deaf people have not found it worth buying or talking about. Though apparently unsuccessful, the BW Dance app meets all the prediction’s criteria. The prediction says nothing about whether the “systems” will be popular among deaf people by 2019–it just says the systems will exist.

The “Not Impossible” music suit.

That’s probably an unsatisfying answer, so let me mention some additional research findings. A company called “Not Impossible Labs” sells body suits designed for deaf people that convert songs into complex patterns of vibrations transmitted into the wearer’s body through 24 different touch points. The suits are well-reviewed, and it’s easy to believe that they’d provide a much richer sensory experience than a buzzing smartphone with the BW Dance app would. However, the suits lack any sort of displays, meaning they don’t meet the criterion of providing users a visual interpretation of songs.

There are many “music visualization” apps that create patterns of shapes, colors, and lines to convey the musical structures of songs, and some deaf people report they are useful in that role. It would probably be easy to combine a vibrating body suit with AR glasses to provide wearers with immersive “visual and tactile interpretations” of music. The technology exists, but the commercial demand does not.

“Cochlear and other implants for improving hearing are very effective and are widely used.”

RIGHT

Since receiving FDA approval in 1984, cochlear implants have significantly improved in quality and have become much more common among deaf people. While the level of benefit varies widely from one user to another, the average user ends up hearing well enough to carry on a phone conversation in a quiet room. That means cochlear implants are “very effective” for most people who use them, since the alternative is usually having no sense of hearing at all. Cochlear implants are in fact so effective that they’ve spurred fears among deaf people that they will eradicate Deaf culture and end the use of sign language, leading some deaf people to reject the devices even though their senses would benefit.

Cochlear implants provide increasing benefits to users as their technology improves.
Cochlear implant sales have been increasing in the U.S. as more deaf people have the devices installed. Some deaf people fear the technology will make their culture extinct.

Other types of implants for improving hearing also exist, including middle ear implants, bone-anchored hearing aids, and auditory brainstem implants. While some of these alternatives are more optimal for people with certain hearing impairments, they haven’t had the same impact on the Deaf community as cochlear implants.

“Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.”

WRONG

Paraplegics and quadriplegics use the same wheelchairs they did in 1998, and they can only traverse stairs that have electronic lift systems. As noted in my Prometheus review, powered exoskeletons exist today, but almost no one uses them, probably due to very high costs and practical problems. Some rehabilitation clinics for people with spinal cord and leg injuries use therapeutic techniques in which the disabled person’s legs and spine are connected to electrodes that activate in sequences that assist them to walk, but these nerve and muscle stimulation devices aren’t used outside of those controlled settings. To my knowledge, no one has built the sort of prosthesis that Kurzweil envisioned, which was a powered exoskeleton that also had electrodes connected to the wearer’s body to stimulate leg muscle movements.

“Generally, disabilities such as blindness, deafness, and paraplegia are not noticeable and are not regarded as significant.”

WRONG (sadly)

As noted, technology has not improved the lives of disabled people as much as Kurzweil predicted it would between 1998 and 2019. Blind people still need to use walking canes, most deaf people don’t have hearing implants of any sort (and those who do still hear much worse than average), and paraplegics still use wheelchairs. Their disabilities are often noticeable at a glance, and always after a few moments of face-to-face interaction.

Blindness, deafness, and paraplegia still have many significant negative impacts on people afflicted with them. As just one example, employment rates and average incomes for working-age people with those infirmities are all lower than they are for people without. In 2019, the U.S. Social Security program still viewed those conditions as disabilities and paid welfare benefits to people with them.

“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”

PARTLY RIGHT

While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”

“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”

WRONG

As stated in previous installments of this analysis, the computerized glasses, goggles and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” that would have allowed users to see simulated 3D images of people and other things in their proximities. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally do 2D video calls using cameras and traditional screen displays.

Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by the person on the other end of a call, and the seemingly small extra effort a video call requires, compared to a mere voice call, has proven a bigger barrier than futurists of 20 years ago probably expected.

“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”

MOSTLY WRONG

As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.

Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.

A depiction of a human eye’s horizontal field of view.

A human being’s field of vision is a front-facing, flared-out “cone” with a 210-degree horizontal arc and a 150-degree vertical arc. This means that if you put a concave display in front of a person’s face that was big enough to fill those degrees of horizontal and vertical width, it would fill the person’s entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.

If this concave screen’s pixels were squares subtending one degree of arc on a side, then the screen would look like a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than how he normally sees. However, lab tests show that if we shrink the pixels to 1/60th that size, so the concave screen is a grid of 12,600 x 9,000 pixels, then the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality is said to be “lifelike.”
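For the curious, the arithmetic works out as follows. This is a minimal Python sketch using only the field-of-view and pixels-per-degree figures just described:

    # Resolution a display would need to look "lifelike" across the
    # entire human field of view, using the figures described above.
    H_FOV_DEG = 210  # horizontal field of view, in degrees
    V_FOV_DEG = 150  # vertical field of view, in degrees
    PPD = 60         # pixels per degree needed to hide individual pixels

    h_pixels = H_FOV_DEG * PPD  # 12,600
    v_pixels = V_FOV_DEG * PPD  # 9,000
    print(f"{h_pixels} x {v_pixels} = {h_pixels * v_pixels:,} pixels")
    # -> 12600 x 9000 = 113,400,000 pixels

In other words, a truly lifelike display would need roughly 113 megapixels per eye, which is more than fifty times the pixel count of a 1080p screen.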

The “Varjo VR-1” virtual reality goggles

No commercially available VR goggles have anything close to lifelike displays, either in terms of field of view or 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they have 60-pixels-per-degree resolution, but only in the central portions of their display screens, where the user’s eyes are usually looking. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves with a very high-quality 4K camera and you viewed the live footage through Varjo VR-1 goggles while keeping your eyes focused on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.

Problematically, a pair of Varjo VR-1’s costs $6,000. Also, in 2019, it is very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you would be able to see most of the other person’s face.

VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.

“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”

MOSTLY RIGHT

In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.

The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.

“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”

MOSTLY RIGHT

Reading text is easily and commonly done from smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, not three-dimensional ones as the prediction specified.

As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).

“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”

WRONG

Aside from a few, expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is video game control pads that can vibrate to crudely approximate the feeling of shooting a gun or being next to an explosion.

“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”

WRONG

Though video phone technology has made remote doctor appointments more common, technology has not yet made it possible for doctors to remotely “touch” patients for physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices that allow people to remotely send and receive physical force to one another (called “teledildonics” when specifically designed for sexual uses) exist, but they are too expensive and technically limited to find significant use.

“Rapid economic expansion and prosperity has continued.”

PARTLY RIGHT

Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?

U.S. real GDP growth rate (year-over-year)

The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.

OECD GDP growth rate from 1998 – 2019

Above is a chart I made using data for the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.

Global GDP growth rate from 1998 – 2019

And this final chart shows global GDP growth over the same period.

Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it–even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. Still, the fact that something as bad as the Great Recession happened at all makes the prediction wrong in an important sense: it implied that economic growth would be continuous, but growth went negative for most of 2009, in the worst downturn since the 1930s.

At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.

While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods do almost look like mirror images of each other. (Additionally, if we assume that adjustment happened because of the actions of wiser financial regulators who kept the lending bubbles and fake investments from coming into existence in the first place, then we can also assume that stopped the Great Recession from happening, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, then we see that economic growth for the time period from 1998 – December 2008 was roughly the same as the growth from January 2009 – December 2019.
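To make the comparison concrete, here is a minimal Python sketch of the kind of calculation involved. Note that the annual growth rates below are illustrative placeholders, not the actual numbers behind the charts:

    # Compare mean annual GDP growth across the two periods Kurzweil implied.
    # These rate lists are made-up placeholders, NOT the real chart data.
    growth_1998_2008 = [4.5, 4.8, 4.1, 1.0, 1.7, 2.9, 3.8, 3.5, 2.9, 1.9, -0.1]
    growth_2009_2019 = [-2.5, 2.6, 1.6, 2.2, 1.8, 2.5, 3.1, 1.7, 2.3, 3.0, 2.2]

    def mean(rates):
        return sum(rates) / len(rates)

    print(f"1998-2008 average: {mean(growth_1998_2008):.2f}%")
    print(f"2009-2019 average: {mean(growth_2009_2019):.2f}%")

Deciding how much to deduct from the 2003 – 2007 figures as “fake growth” is a judgment call, which is why I describe the adjusted comparison only qualitatively.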

“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”

WRONG

“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.

As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involved to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.

“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”

MOSTLY WRONG

The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
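The core logic of such a “stop” is simple enough to sketch in a few lines of Python (a simplified illustration, not any real brokerage’s API):

    # Minimal sketch of an automated "stop" order trigger.
    def check_stop_order(order, market_price):
        """Return a trade instruction once the stop condition is met."""
        if order["side"] == "sell" and market_price <= order["stop_price"]:
            return {"action": "sell", "symbol": order["symbol"], "qty": order["qty"]}
        if order["side"] == "buy" and market_price >= order["stop_price"]:
            return {"action": "buy", "symbol": order["symbol"], "qty": order["qty"]}
        return None  # condition not met; keep watching the price feed

    order = {"side": "sell", "symbol": "XYZ", "stop_price": 95.0, "qty": 100}
    print(check_stop_order(order, 94.5))  # price fell through 95.0 -> sell triggers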

“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”

PARTLY RIGHT

Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.

The iRobot Roomba 960 is a highly rated robot vacuum cleaner.

No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.

“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”

WRONG*

The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.

Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but are slated to become standard equipment on nearly all new U.S. cars starting in 2022 under a voluntary industry agreement.
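To illustrate the principle (and only the principle–real systems fuse radar and camera data with far more sophisticated decision logic), automated emergency braking boils down to a time-to-collision check like this Python sketch:

    # Illustrative time-to-collision (TTC) check for automated braking.
    def should_brake(distance_m, closing_speed_mps, ttc_threshold_s=1.5):
        """Apply the brakes if projected time to collision is below a threshold."""
        if closing_speed_mps <= 0:
            return False  # the gap is not closing, so no intervention is needed
        time_to_collision = distance_m / closing_speed_mps
        return time_to_collision < ttc_threshold_s

    # An obstacle 20 m ahead, closing at 15 m/s, gives a TTC of ~1.3 s.
    print(should_brake(distance_m=20.0, closing_speed_mps=15.0))  # -> True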

*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring the human drivers to clutch the steering wheels and to keep their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.

In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.

“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”

UNCLEAR (but probably WRONG)

The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?

A man on a flying skateboard participated in France’s 2019 Bastille Day military parade. The device counts as a “personal flying vehicle,” but it is impractical and very dangerous to use. It can travel about five miles in 10 minutes on one full tank of fuel, and can take off and land almost anywhere. Is it “efficient”?

What is a “personal flying vehicle”? A flying car, which is capable of flight through the air and horizontal movement over roads, or a vehicle that is capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?

But even if we had answers to those questions, it wouldn’t matter much, since “have been demonstrated” is an escape hatch: the prediction counts as true if just two prototypes of personal flying vehicles were ever built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.

Diagram showing what a “Gurney flap” / “microflap” is.

At least the prediction contains one, well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. They are fixed in one position, and in that configuration are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.

“There are very few transportation accidents.”

WRONG

Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.

In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.

“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”

WRONG

As I noted earlier in this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.

“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”

MOSTLY RIGHT

Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.

While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.

If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how the algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly, conspiracy thinking and closed-mindedness), spending decisions, and mental health.

Human transactions and decisions still require a “human agent of responsibility”: Autonomous cars aren’t allowed to drive unless a human is in the driver’s seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Probably most decisions that humans make are done without consulting a “machine-based intelligence.” Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with the benefit of little or no research. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.

“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”

MOSTLY RIGHT

Gunfire detection systems, which consist of networks of microphones emplaced across an area and use machine intelligence to recognize the sounds of gunshots and to triangulate their origins, were deployed in over 100 cities at the end of 2019. The dominant company in this niche industry, “ShotSpotter,” used human analysts to review its systems’ results before forwarding alerts to local police departments, so the systems were not truly automated, but they nonetheless made heavy use of machine intelligence.
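The triangulation idea behind these systems can be sketched in miniature: given the differences in when each microphone hears the bang, search for the spot that best explains those differences. Here is a toy Python version using a brute-force grid search (not ShotSpotter’s proprietary method):

    import math

    SPEED_OF_SOUND = 343.0  # meters per second

    mics = [(0, 0), (500, 0), (0, 500)]  # microphone positions, in meters
    true_origin = (180, 240)             # where the simulated shot occurs

    def arrival_times(point):
        return [math.dist(point, m) / SPEED_OF_SOUND for m in mics]

    measured = arrival_times(true_origin)

    best, best_err = None, float("inf")
    for x in range(0, 501, 5):
        for y in range(0, 501, 5):
            t = arrival_times((x, y))
            # Compare time DIFFERENCES so the unknown firing time cancels out.
            err = sum(abs((t[i] - t[0]) - (measured[i] - measured[0]))
                      for i in range(1, len(mics)))
            if err < best_err:
                best, best_err = (x, y), err

    print(best)  # -> (180, 240), the simulated shot location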

Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible.

In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.
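Under the hood, these systems reduce each face to a numeric “embedding” vector and then compare vectors, typically with cosine similarity. Here is a minimal Python sketch of the comparison step; the embeddings, names, and threshold are all made up, and real systems use embeddings with hundreds of dimensions:

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    watchlist = {"suspect_1": [0.9, 0.1, 0.3], "suspect_2": [0.2, 0.8, 0.5]}
    probe = [0.88, 0.15, 0.28]  # embedding computed from a camera frame

    for name, reference in watchlist.items():
        if cosine_similarity(probe, reference) > 0.95:  # arbitrary threshold
            print("Possible match:", name)  # -> suspect_1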

At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.

“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”

RIGHT

In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.

Edward Snowden

Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.

During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.

With these events in mind, it’s quite true that concerns over digital privacy and the confidentiality of personal data have become “major political and social issues,” and that there’s growing displeasure at the fact that each individual’s “practically every move [is] stored in a database somewhere.” The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the “General Data Protection Regulation” (GDPR).

Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being made public by hacks has also bolstered interest in commercial encryption. “Whatsapp” is a common text messaging app with built-in end-to-end encryption; its encryption rollout was completed in 2016, and it had 1.5 billion users by 2019. “Tor” is a web browser that encrypts and anonymizes its users’ internet traffic; it became relatively common during the 2010s after it was learned that even the NSA couldn’t spy on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection for little expense and hassle, are in common use.
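As a small illustration of how accessible strong encryption has become for ordinary programmers, symmetric encryption now takes a few lines with the widely used Python “cryptography” package. (This is basic shared-key encryption, not the more elaborate end-to-end protocol that apps like Whatsapp use.)

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # secret key; share it only with the recipient
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at the usual place")
    print(token)                  # unreadable ciphertext
    print(cipher.decrypt(token))  # -> b'meet at the usual place'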

“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”

RIGHT

It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.

In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”

The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.

There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.

“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”

PARTLY RIGHT

As I wrote earlier, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training is very common, but most workers devote small shares of their time to them, and the fraction of time spent doing workplace training doesn’t seem significantly different from what it was when the book was published.

From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.

This means the prediction was partly right, but also of questionable significance in the first place.

“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”

MOSTLY RIGHT

The “Deep Dream” computer program made this surrealist portrait.

In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.

“Portrait of Edmond de Belamy”

In 2018, a different computer program produced a painting–“Portrait of Edmond de Belamy”–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as “a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence.” That seems to fulfill the second part of the prediction (“These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.”)

Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called “EMMY” (an acronym for “Experiments in Musical Intelligence”) is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Recordings of its songs are available online; listen to a few and judge their quality for yourself.

Computer scientists at OpenAI have built a neural network called “JukeBox” that is even more advanced than EMMY, and which can produce songs complete with simulated human singing. While the words don’t always make sense and there’s much room for improvement, most humans have no creative musical talent at all and couldn’t do any better, and the quality, sophistication and coherence of the entirely machine-generated songs is very impressive (audio samples are available online).

OpenAI also invented an artificial intelligence program called the “Generative Pretrained Transformer” to understand and write text. In 2019, the second version of the program, “GPT-2,” made its debut, and showed impressive skill writing poetry, short news articles and other content with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the “human range” of writing ability, since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well-written in its original language.
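Since the model’s weights were publicly released, anyone can reproduce this kind of text generation. Here is a minimal sketch using the open-source Hugging Face “transformers” package (the output varies from run to run):

    # pip install transformers
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("The old men stand at the bridge", max_length=40)
    print(result[0]["generated_text"])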

Much of GPT-2’s poetry is also as good as–or, as bad as–that written by its human counterparts:

And they have seen the last light fail;
By day they kneel and pray;
But, still they turn and gaze upon
The face of God to-day.

And God is touched and weeps anew
For the lost souls around;
And sorrow turns their pale and blue,
And comfort is not found.

They have not mourned in the world of men,
But their hearts beat fast and sore,
And their eyes are filled with grief again,
And they cease to shed no tear.

And the old men stand at the bridge in tears,
And the old men stand and groan,
And the gaunt grey keepers by the cross
And the spent men hold the crown.

And their eyes are filled with tears,
And their staves are full of woe.
And no light brings them any cheer,
For the Lord of all is dead

In conclusion, the prediction is right that there were “virtual artists” in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and “humanness” to be of interest for reasons other than the novelty of their origins. They’ve raised serious questions among humans about the nature of creative thinking, and whether machines are capable of it or soon will be. Finally, the virtual artists were “affiliated with” or, more accurately, owned and controlled by groups of humans.

“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”

UNCLEAR

It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?

Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
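The mathematical heart of pitch correction is surprisingly simple: detect the frequency the singer actually produced, then snap it to the nearest note of the equal-tempered scale. Here is a minimal Python sketch of that snapping step (real Auto-Tune also handles pitch detection, smoothing, and resynthesis):

    import math

    def snap_to_nearest_note(f_hz, ref=440.0):
        """Snap a frequency to the nearest equal-tempered note (A4 = 440 Hz)."""
        semitones = 12 * math.log2(f_hz / ref)     # signed distance from A4
        return ref * 2 ** (round(semitones) / 12)  # quantize, convert back to Hz

    print(round(snap_to_nearest_note(451.3), 2))  # slightly sharp A4 -> 440.0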

If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.

“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”

WRONG

Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.

That said, the video game industry’s revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry’s 2018 revenues were $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of “E-Sports”).

Revenues from virtual reality games totaled $1.2 billion in 2019, meaning 99% of the video game industry’s revenues that year DID NOT come from “virtual-experience software.” The overwhelming majority of video games were viewed on flat TV screens and monitors that display 2D images only. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you’re actually there in the simulated environment. While they don’t meet the technical definition of being “virtual reality” games, some of them are so engrossing that they might as well be.

“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”

MOSTLY WRONG

Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.

In other words, the world’s greatest military power only allocates 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that it is not the primary threat to international security, either.

Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big of a threat as cyberattackers. In fact, most of military spending and training in the U.S. and all other countries is still devoted to preparing for traditional warfare between nation-states, as evidenced by things like the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and time spent practicing for large battles between organized foes.

“Small groups” of terrorists inflict disproportionate amounts of damage against society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.

Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which only killed five people, but nonetheless bore some testament to Kurzweil’s assessment of bioterrorism’s potential threat), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made it more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.

Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.

“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”

UNCLEAR

What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.

What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.

A “Phantom” ultralight plane. Is it fair to call this “tiny”?

“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: The Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).

With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.

If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.

The RQ-11B Raven drone could be considered a “tiny flying weapon.”

If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.

At 100mm long and 120mm wide along its main rotor, the PD-100 drone is as small as a large dragonfly.

The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, indicating that no one is actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.

Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.

“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”

PARTLY RIGHT

The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.

The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first, fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that figure has climbed further, to 5,000 genetic disorders.

The cost of sequencing a human genome sharply dropped, making it possible to do genome-wide association studies, and for middle income people to have their personal genomes sequenced.

However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, understanding what genes themselves do is not enough to understand human development or disease susceptibility.
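The “summed, small effects” idea is what geneticists operationalize as a polygenic score: multiply the number of copies of each variant a person carries by that variant’s estimated effect, then add everything up. Here is a toy Python sketch with made-up variant names and effect sizes:

    # Hypothetical per-allele effect sizes, as estimated by a GWAS.
    variant_effects = {"rs001": 0.30, "rs002": -0.12, "rs003": 0.05}
    # Copies (0, 1, or 2) of each risk allele that this person carries.
    genotype = {"rs001": 2, "rs002": 1, "rs003": 0}

    score = sum(variant_effects[v] * genotype[v] for v in variant_effects)
    print(round(score, 2))  # -> 0.48

Real polygenic scores sum over thousands or millions of variants, which is why genome-wide association studies, rather than single-gene lookups, are needed to build them.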

Returning to the text of the prediction, the meaning of “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.

The aging process is more well-understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.

“The expected life span…[is now] over one hundred.”

WRONG

The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.

“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”

WRONG

Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.

Top national security experts in the U.S. also assigned biological weapons low priority, as evidenced in the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report only mentions “biological weapons/warfare” twice. By contrast, “migration/migrants/immigration” appears 11 times, “nuclear weapon” eight times, and “ISIS” 29 times.

As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.

Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it’s unclear if the “bar” has gotten low enough for a graduate student to do it. Making a pathogen in a lab that has the qualities necessary for a biological weapon, verifying its effects, purifying it, creating a delivery system for it, and disseminating it–all without being caught before completion or inadvertently infecting yourself with it before the final step–is much harder than hysterical news articles and self-interested talking head “experts” suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds and with levels of expertise exceeding a typical graduate student, years of work, and millions of dollars.

“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”

RIGHT

The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.

Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.

“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”

MOSTLY RIGHT

Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren’t considered accurate enough to “diagnose” health conditions. Rather, their role is to detect signs of potential health problems and alert their wearers, who then consult medical professionals with more advanced machinery and receive a diagnosis.

The Apple Watch Series 5

By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2” and the “Apple Watch Series 4 and 5” had FDA-approved electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, 2,000 received alerts from their devices in 2018 about possible heartbeat problems. Fifty-seven percent of the people in that subset sought medical help upon getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately, 84% of those alerted were confirmed to have atrial fibrillation.
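
Restated as a funnel (my arithmetic, simply re-deriving the article’s figures, and reading “84% of the subset” as 84% of those alerted):

    # Back-of-the-envelope funnel using the figures cited above.
    owners_monitored = 400_000
    alerted = 2_000                          # received irregular-heartbeat alerts in 2018
    sought_help = round(alerted * 0.57)      # 57% consulted a medical professional
    afib_confirmed = round(alerted * 0.84)   # 84% of those alerted had atrial fibrillation

    print(f"{alerted / owners_monitored:.2%} of monitored owners were alerted")  # 0.50%
    print(f"{sought_help} sought medical help; ~{afib_confirmed} confirmed to have AFib")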

Apple Watches also have a “hard fall” detection feature, which uses accelerometers to recognize when the wearer suddenly falls down and then doesn’t move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury due to a fall count as an “acute health condition” per the prediction’s text?).
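
Mechanically, that kind of feature boils down to a two-stage test: a big acceleration spike (the impact) followed by a sustained window of stillness. Here is a minimal sketch in Python of how such a heuristic could work; the thresholds are my illustrative guesses, since Apple doesn’t publish its actual algorithm:

    # Minimal sketch of a "hard fall" heuristic: an acceleration spike
    # (the impact) followed by a sustained period of near-stillness.
    # All thresholds are illustrative guesses, not Apple's real values.

    FALL_SPIKE_G = 3.0       # acceleration spike suggesting an impact, in g
    STILL_G = 1.1            # near 1 g = the wearer is barely moving
    STILL_SECONDS = 60       # how long stillness must last to trigger a call
    SAMPLES_PER_SECOND = 50  # a typical accelerometer sampling rate

    def detect_hard_fall(magnitudes):
        """magnitudes: accelerometer readings in g, in time order."""
        still_needed = STILL_SECONDS * SAMPLES_PER_SECOND
        still_run = 0
        fall_seen = False
        for m in magnitudes:
            if m >= FALL_SPIKE_G:
                fall_seen, still_run = True, 0  # impact detected; watch for stillness
            elif fall_seen:
                still_run = still_run + 1 if m <= STILL_G else 0
                if still_run >= still_needed:
                    return True  # impact, then a full minute of stillness
        return False

    # Simulated trace: normal motion, an impact spike, then stillness.
    readings = [1.0] * 100 + [3.5] + [1.0] * (60 * SAMPLES_PER_SECOND)
    print(detect_hard_fall(readings))  # True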

A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by a small company called “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.

Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. The devices are reasonably accurate, and are typically strapped to the wearer’s upper arm or waist during a jog, or kept in a pocket during other types of activity. Having a smartphone in your pocket isn’t literally the same as having it “built into [your] clothing” as the prediction says, but it’s close enough to satisfy the spirit of the prediction. In fact, being able to easily insert a device into and remove it from any article of clothing with a pocket is better than having a device integrated into the clothing, since it allows for much more flexibility of attire: if you want to try out a new jogging route and measure how long it is, you don’t have to remember to wear your one and only T-shirt with a built-in activity monitor.

Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, jewelry with embedded health tracking capabilities was essentially absent from the market in 2019 (a few niche smart rings existed, but their user bases were tiny), so that part of the prediction is wrong.

“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”

RIGHT

In 2014, a chatbot named “Eugene Goostman” made global headlines by “passing the Turing Test” at an event held at the University of Reading. However, AI experts soon determined it was invalid because the dialog between the human judges and the machine only lasted five minutes. Ray Kurzweil himself even interacted with Eugene Goostman and posted the conversation on his website to show the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:

Kurzweil: How old are you?
Eugene: I’m a little boy. 13 years old.

Kurzweil: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!

Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

Kurzweil: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!

In 2018, a Google AI program called “Duplex” also made headlines for “passing the Turing Test” in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While it was an impressive technological feat, experts again disagreed with the media’s portrayal of its capabilities, and pointed out that the human-machine interactions weren’t valid Turing Tests because they were too short and focused on a narrow subject of conversation.

“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”

RIGHT

The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness. Thoughtful articles written by experts whose credentials are relevant to the subject of machine consciousness are easy to find, along with countless more essays, speeches, and panel discussions about it on the internet.

“Sophia” the robot

Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.

“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”

RIGHT

Critics often–and rightly–point out that the most impressive “A.I.s” owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.
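
As a toy illustration of that dependence, here is a minimal sketch in Python–my own example, not any production system–in which the machine’s entire “skill” at classifying new images reduces to the labels humans attached to the training examples:

    import numpy as np

    # Toy illustration: the classifier's "knowledge" is nothing but the
    # human-supplied labels attached to the training images.

    # Each "image" is a flattened pixel array; the labels came from people.
    human_labeled_images = np.array([[0.9, 0.8, 0.1], [0.8, 0.9, 0.2],   # "cat"
                                     [0.1, 0.2, 0.9], [0.2, 0.1, 0.8]])  # "dog"
    human_labels = np.array(["cat", "cat", "dog", "dog"])

    # "Training": average the human-labeled examples of each class.
    centroids = {label: human_labeled_images[human_labels == label].mean(axis=0)
                 for label in set(human_labels)}

    def classify(image):
        # Pick the class whose averaged human-labeled examples are closest.
        return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))

    print(classify(np.array([0.85, 0.9, 0.15])))  # "cat"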

Finally, even the smartest and most capable machines can’t turn themselves on without human help, and they still have very “brittle” and task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars: the computers are smart enough to drive safely by themselves in almost all road conditions, but laws require the vehicles to watch the human in the driver’s seat and stop if he or she isn’t paying attention to the road and touching the controls.

Links:

  1. Ray Kurzweil’s self-analysis of how accurate his 2009 predictions were: (https://kurzweilai.net/images/How-My-Predictions-Are-Faring.pdf)
  2. The inventor of the first augmented reality contact lenses predicted in 2015 that commercially viable versions of the devices wouldn’t exist for at least 20 more years.
    (https://www.inverse.com/article/31034-augmented-reality-contact-lenses)
  3. In late 2019, a Magic Leap One cost $2,300 – $3,300 and a Hololens was $3,000. 
    (https://www.cnn.com/2019/12/10/tech/magic-leap-ar-for-companies/index.html)
  4. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.
    (https://www.theverge.com/2019/5/16/18625238/vr-virtual-reality-headsets-oculus-quest-valve-index-htc-vive-nintendo-labo-vr-2019)
  5. In 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs. Keyboards aren’t dead.
    (https://venturebeat.com/2019/01/10/gartner-and-idc-hp-and-lenovo-shipped-the-most-pcs-in-2018-but-total-numbers-fell/)
  6. Survey data from 2018 about the global usage of “digital personal assistants.” Users speak to their smartphones or smart speakers, mostly to obtain simple information (like weather forecasts) or to have their computers do simple tasks. (https://www.business2community.com/infographics/the-growth-in-usage-of-virtual-digital-assistants-infographic-02056086)
  7. 2019 Pew Survey showing that the overwhelming majority of American adults owned a smartphone or traditional PC. People over age 64 were the least likely to own smartphones.
    (https://www.pewresearch.org/internet/fact-sheet/mobile/)
  8. A 2015 American Community Survey revealed that households headed by people over 64 were the least likely to have smartphones, PCs, or internet access.
    (https://www.census.gov/content/dam/Census/library/publications/2017/acs/acs-37.pdf)
  9. In 2000, 34% of Americans accessed the internet through dial-up modems, and only 3% did so through “broadband” (a catch-all for cable, DSL, and satellite access). Most U.S. internet users were still using dial-up modems that were at most 56k. The remaining 63% didn’t access it at all.
    (http://thetechnews.com/2016/01/03/usa-getting-faster-internet-speeds-but-not-at-the-pace-others-are/)
  10. In 2019, a mid-tier internet service plan in the U.S. granted users download speeds of 30 – 60 Mbps.
    (https://www.pcmag.com/news/state-by-state-the-fastest-and-slowest-us-internet)
  11. 2019 U.S. mobile phone network average speeds were 33.88 Mbps for downloads and 9.75 Mbps for uploads.
    (https://www.speedtest.net/reports/united-states/)
  12. The Black Friday 2019 circular for Newegg.com featured five models of printers for sale. Only one of them, the Brother HL-L2300D, wasn’t WiFi-capable.
    (https://bestblackfriday.com/ads/newegg-black-friday/page-12#ad_view)
  13. Gartner figures for global computer sales in 2015, 2016, 2017, 2018 and 2019.
    (https://www.gartner.com/en/newsroom/press-releases/2017-01-11-gartner-says-2016-marked-fifth-consecutive-year-of-worldwide-pc-shipment-decline)
    (https://venturebeat.com/2018/01/11/gartner-and-idc-agree-hp-shipped-the-most-pcs-in-2017/)
    (https://www.gartner.com/en/newsroom/press-releases/2020-01-13-gartner-says-worldwide-pc-shipments-grew-2-point-3-percent-in-4q19-and-point-6-percent-for-the-year)
  14. Intel’s i7 Generation 8 processor is capable of 361.3 gigaflops. (https://www.pugetsystems.com/labs/hpc/Skylake-X-7800X-vs-Coffee-Lake-8700K-for-compute-AVX512-vs-AVX2-Linpack-benchmark-1068/)
  15. 3.2 billion people owned a smartphone in 2019.
    (https://newzoo.com/insights/trend-reports/newzoo-global-mobile-market-report-2019-light-version/)
  16. In 2019, 3D chips were common in memory storage devices, like MicroSD cards. 3D NAND chips had up to 64 layers.
    (https://semiengineering.com/what-happened-to-nanoimprint-litho/)
  17. In 2019, Intel was still working the kinks out of its first 3D computer processor, called “Lakefield,” and it wasn’t ready for commercial sales.
    (https://www.overclock3d.net/news/cpu_mainboard/intel_details_their_lakefield_processor_design_and_foveros_3d_packaging_tech/1)
  18. In 2019, computer circuits made of carbon nanotubes were still stuck in research labs, and held back from commercialization by many unsolved problems relating to cost of manufacture and reliability. Silicon was still the dominant computing substrate.
    (https://www.sciencenews.org/article/chip-carbon-nanotubes-not-silicon-marks-computing-milestone)
  19. “Compute cycle” has three meanings: #1 (https://www.zdnet.com/article/how-much-is-a-unit-of-cloud-computing/), #2 (https://www.quora.com/What-is-a-Compute-cycle) and #3 (https://www.computerhope.com/jargon/c/compute.htm)
  20. In a 2019 experiment, researchers were able to decode the words a person was speaking by studying their brain activity.
    (https://www.biorxiv.org/content/10.1101/350124v2)
  21. “The current ways of trying to represent the nervous system…[are little better than] what we had 50 years ago.”  –Marvin Minsky, 2013
    (https://youtu.be/3PdxQbOvAlI)
  22. “Today’s neural nets use algorithms that were essentially developed in the early 1980s.”
    (https://futurism.com/cmu-brain-research-grant)
  23. The inventor of “back-propagation,” which spawned many computer algorithms central to AI research, now believes it will never lead to true intelligence, and that the human brain doesn’t use it.
    (https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html)
  24. Henry Markram’s project to create a human brain simulation by 2019 failed.
    (https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/)
  25. “Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat.” –Yann LeCun, 2017
    (https://www.theverge.com/2017/10/26/16552056/a-intelligence-terminator-facebook-yann-lecun-interview)
  26. Machine neural networks are similar to human brains in key ways.
    (https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414)
  27. Some machine neural nets use genetic algorithms.
    (https://blog.coast.ai/lets-evolve-a-neural-network-with-a-genetic-algorithm-code-included-8809bece164)
  28. Quantum imaging is a real thing. However, devices that can make use of it are still experimental.
    (https://onlinelibrary.wiley.com/doi/full/10.1002/lpor.201900097)
  29. The Samsung Galaxy S10 is an upper-end smartphone released in 2019. It has three digital cameras, all of which operate on the same technology principles as the digital cameras of 1999.
    (https://www.digitalcameraworld.com/reviews/samsung-galaxy-s10-camera-review)
  30. The 2016 Nobel Prize in Chemistry was given to three scientists who had done pioneering work on nanomachines.
    (https://www.extremetech.com/extreme/237575-2016-nobel-prize-in-chemistry-awarded-for-nanomachines)
  31. Dr. Marc Miskin’s micromachines from 2019 are interesting, but a far cry from what Kurzweil thought we’d have by then.
    (https://www.inquirer.com/health/micro-robots-upenn-cornell-20190307.html)
  32. There were fewer than 1 million augmented reality glasses in the world at the end of 2019.
    (https://arinsider.co/2019/09/11/5-million-ar-headsets-by-2023/)
  33. Sales of print books in 2017 were not much different from what they probably were in 1999, when the Age of Spiritual Machines was published. 
    (https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/75735-sales-of-print-books-increased-slightly-in-2017.html)
  34. Sales figures for “graphic paper” prove that, while paper books, newspapers, and office documents are declining, they aren’t “dead” or even “uncommon” yet. 
    (https://www.mckinsey.com/industries/paper-forest-products-and-packaging/our-insights/graphic-paper-producers-boosting-resilience-amid-the-covid-19-crisis)
  35. The “Internet Archive” has scans of 3.8 million books, and is growing. 
    (https://www.pcmag.com/news/the-internet-archive-is-linking-digital-books-to-wikipedia-citations)
  36. By late 2019, the U.S. National Archives had put 92 million pages of government documents on its website, free for anyone to view. 
    (https://narations.blogs.archives.gov/2019/10/02/naras-record-group-explorer-a-new-path-into-naras-holdings/)
  37. The 2020 report COVID-19 on Campus found that most U.S. college students found online instruction an inferior way to learn compared to traditional classroom instruction.
    (https://marketplace.collegepulse.com/img/covid19oncampus_ckf_cp_final.pdf)
  38. Another 2020 survey of U.S. teenagers found that most of them considered online learning to be less effective than in-person classes.
    (https://www.surveymonkey.com/curiosity/common-sense-media-school-reopening/)
  39. A 2020 survey of U.S. teachers and school administrators found that student absenteeism rates climbed thanks to the introduction of online classes.
    (https://www.edweek.org/ew/articles/2020/10/15/in-person-learning-expands-student-absences-up-teachers.html)
  40. A U.S. Census survey found in 2019 that 17% of students didn’t have computers in their homes and 18% had no internet access or very slow service.
    (https://apnews.com/article/7f263b8f7d3a43d6be014f860d5e4132)
  41. The “Seeing AI” smartphone app uses the device’s camera to recognize text, objects and people and to read, describe, or name them out loud. Blind users have highly reviewed it.
    (https://apps.apple.com/us/app/seeing-ai/id999062298#see-all/reviews)
  42. The “BlindSquare” smartphone app provides voice-based GPS navigation to users, and is also highly reviewed by blind people.
    (https://apps.apple.com/us/app/blindsquare/id500557255#see-all/reviews)
  43. The FDA approves the “Argus II” retinal implant system for the blind in 2013.
    (https://www.nature.com/news/fda-approves-first-retinal-implant-1.12439)
  44. In 2019, an app called “Zoi Meet” was developed for the Vuzix Blade AR glasses. The app produces real-time subtitles of spoken words, displayed across the wearer’s field of vision.
    (https://www.vuzix.com/Blog/vuzix-blade-real-time-language-transcription-zoi-meet)
  45. In 2019, there were many smartphone apps that helped deaf people to communicate with hearing people.
    (https://www.meriahnichols.com/best-deaf-apps/)
    (https://abilitynet.org.uk/news-blogs/9-useful-apps-people-who-are-deaf-or-have-hearing-loss)
  46. “Glide” is a popular video phone app among deaf people.
    (https://www.fastcompany.com/3054050/how-video-chat-app-glide-got-deaf-people-talking)
  47. “BW Dance” is an app that converts songs into patterns of vibrations and flashing lights that deaf people can experience.
    (https://www.producthunt.com/posts/bw-dance)
  48. “Not Impossible Labs” makes body suits that allow deaf people to experience music in the form of complex patterns of vibrations.
    (https://www.billboard.com/articles/news/8476553/not-impossible-labs-live-music-deaf)
  49. Cochlear implants have gotten better and more common among deaf people as time has passed.
    (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111484/)
  50. U.S. sales growth of cochlear implants is projected to continue.
    (https://www.grandviewresearch.com/industry-analysis/cochlear-implants-industry)
  51. Aside from cochlear implants, middle ear implants, auditory brainstem implants, and bone-anchored hearing aids can amplify or restore hearing.
    (https://www.bcig.org.uk/cochlear-implant-devices/implantable-devices/)
  52. People who are blind, or deaf, or who have serious spinal cord damage are less likely to have jobs and also make less money than people who don’t have those conditions.
    (https://www.afb.org/research-and-initiatives/employment/reviewing-disability-employment-research-people-blind-visually)
    (https://www.nationaldeafcenter.org/news/employment-report-shows-strong-labor-market-passing-deaf-americans)
    (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2792457/)
  53. A 2018 survey found that most American adults spent an average of 24-41 minutes per day on phone calls. The survey didn’t break that number out into traditional voice-only calls and video calls.
    (https://www.zdnet.com/article/americans-spend-far-more-time-on-their-smartphones-than-they-think/)
  54. Another 2018 survey commissioned by the telecom company Vonage found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which strongly aligns with my everyday observations.
    (https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/)
  55. A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels; see the quick arithmetic check after this list). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible.
    (https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf)
  56. The “Varjo VR-1” virtual reality goggles cost $6,000 and can display lifelike images at the centers of their screens.
    (https://www.cnet.com/news/the-best-vr-display-ive-ever-seen-varjo-vr-1-costs-6000/)
  57. A roundup of the top ten speech-to-speech language translation apps of 2019.
    (https://www.daytranslations.com/blog/top-10-free-language-translation-apps/)
  58. A 2018 study found that the best English-Mandarin machine translation programs were inferior to professional human translators.
    (https://www.technologyreview.com/2018/09/05/140487/human-translators-are-still-on-top-for-now/)
  59. The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device.
    (https://www.cnet.com/reviews/oculus-go-review/)
  60. As this 2019 article makes clear, virtual haptic technology is far less advanced than Kurzweil predicted it would be.
    (https://www.scientificamerican.com/article/new-virtual-reality-interface-enables-touch-across-long-distances/)
  61. An account of a firsthand experience with cutting-edge (no pun intended) teledildonics in 2018:
    (https://www.engadget.com/2018-07-02-flirt4free-teledildonics-long-distance-sex.html)
  62. A 2019 analysis shows that the vast majority of transactions in the U.S. are still done face-to-face between humans, but e-commerce’s share is steadily growing.
    (https://www.digitalcommerce360.com/article/us-ecommerce-sales/)
  63. A roundup of the highest-rated robot vacuum cleaners of 2019:
    (https://www.techhive.com/article/3388038/best-robot-vacuums-on-amazon.html)
  64. A list of advanced car safety features from 2019:
    (https://www.caranddriver.com/features/g27612164/car-safety-features/)
  65. Tesla Autopilot is capable of Level 3 autonomous driving. However, out of an abundance of caution (e.g. – just one accident generates enormous bad publicity), the company has installed features that cap it at Level 2.
    (https://electrek.co/2019/09/19/tesla-autopilot-v10-commute-without-driver-intervention/)
  66. French inventor Franky Zapata designed a flying skateboard called the “Flyboard Air,” and used it to cross the English Channel and wow crowds during the 2019 Bastille Day military parade.
    (https://www.theverge.com/2019/8/4/20753648/jet-powered-hoverboard-english-channel-crossing-franky-zapata-success)
  67. These World Health Organization reports show that deadly road accidents were about as common in 2016 as they were in 2000. It’s still a leading cause of death.
    (https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death)
    (https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf?ua=1)
  68. The CDC reported that 43,024 people died in the U.S. in 2017 of “Transport accidents.” Only 1,718 of those did not involve road vehicles.
    (https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_09_tables-508.pdf)
  69. Advances in AI during the 2010s forced humans to examine the specialness of human thinking, whether machines could also be intelligent and creative and what it would mean for humans if they could.
    (https://www.bbc.com/news/business-47700701)
  70. Andrew Yang made technological unemployment and universal basic income (UBI) major components of his 2020 U.S. Presidential campaign platform.
    (https://en.wikipedia.org/wiki/Andrew_Yang#2020_presidential_campaign)
  71. An article explaining “acoustic gunshot detection”:
    (https://www.eff.org/pages/gunshot-detection)
  72. The “ShotSpotter” gunshot detection system was emplaced in over 100 cities in 2019.
    (https://www.startribune.com/as-gunfire-continues-in-st-paul-so-does-shotspotter-debate/565382652/)
  73. This 2019 article from Dayton shows a correlation between the presence of license plate readers and a decrease in violent crime.
    (https://www.daytondailynews.com/news/area-police-look-to-license-plates-readers-as-crime-fighting-tool/ESQLILHQP5HJTCIVJL6IJ6T7VU/)
  74. In 2018, a wanted criminal was arrested in China after facial recognition cameras identified him at a concert, out of a crowd of 60,000 people.
    (https://www.bbc.com/news/world-asia-china-43751276)
  75. Edward Snowden’s key revelations about electronic spying.
    (https://mashable.com/2014/06/05/edward-snowden-revelations/)
  76. An incomplete list of data hacks that happened in the 2010s. Hundreds of millions of people had important personal data compromised.
    (https://www.cnn.com/2019/07/30/tech/biggest-hacks-in-history/index.html)
  77. A list of commonly used encrypted messaging apps in 2019. (https://heimdalsecurity.com/blog/the-best-encrypted-messaging-apps/)
  78. In 2018, VPNs were widely used on every continent. Forty-four percent of Indonesian internet users had them.
    (https://blog.globalwebindex.com/chart-of-the-day/vpn-usage-2018/)
  79. If obesity rates are any indication, people in the 2010s were not too poor to feed themselves.
    (https://academic.oup.com/eurpub/article/23/3/464/536242)
  80. In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve.
    (https://www.factcheck.org/2013/03/bloombergs-obesity-claim/)
  81. Several studies done during the 2010s indicated that governments would save money if they gave the homeless free apartments.
    (https://www.vox.com/2014/5/30/5764096/homeless-shelter-housing-help-solutions)
  82. A 2016 article about Google’s “Deep Dream” program, which can make surreal, artistic images.
    (https://www.theguardian.com/artanddesign/2016/mar/28/google-deep-dream-art)
  83. A computer-generated painting, “Portrait of Edmond de Belamy,” sold for $423,500 in 2018. Have YOU ever made a painting worth that much money?
    (https://edition.cnn.com/style/article/obvious-ai-art-christies-auction-smart-creativity/index.html)
  84. “Obvious” is a “collective” of humans and computers that produce acclaimed art.
    (https://obvious-art.com/page-about-obvious/)
  85. “EMMY” is a machine that can write decent instrumental songs.
    (https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/)
  86. OpenAI’s “Jukebox” could even write songs that had simulated human voices singing.
    (https://openai.com/blog/jukebox/)
  87. Samples of GPT-2’s poetry.
    (https://www.gwern.net/GPT-2)
  88. Samples of GPT-2’s short news articles and written responses to prompts.
    (https://openai.com/blog/better-language-models/)
  89. “Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”?
    (https://en.wikipedia.org/wiki/Auto-Tune)
  90. The virtual reality gaming industry had about $1.2 billion in revenues in 2019.
    (https://www.juniperresearch.com/press/press-releases/virtual-reality-games-revenues-reach-8-bn-2023)
  91. In 2017, terrorists killed 14,300 people globally.
    (https://www.jewishvirtuallibrary.org/statistics-on-incidents-of-terrorism-worldwide)
  92. The U.S. spent $16.6 billion on cybersecurity in FY2019.
    (https://www.fedscoop.com/cybersecurity-budget-2020-trump-white-house/)
  93. The U.S. military’s “base” defense budget was $726.2 billion in FY2019.
    (https://fas.org/sgp/crs/natsec/R44519.pdf)
  94. The U.S. spent $33.6 billion on its nuclear forces in FY2019.
    (https://www.cbo.gov/system/files/2019-01/54914-NuclearForces.pdf)
  95. The “Phantom X1” ultralight plane.
    (https://en.wikipedia.org/wiki/Phantom_X1)
  96. Data for several “tiny” flying drones in use with the U.S. Navy in 2019.
    (https://www.navy.mil/DesktopModules/ArticleCS/Print.aspx?PortalId=1&ModuleId=724&Article=2159299)
  97. Data on the U.S. Army’s unmanned drones, including “tiny” ones, from the same period.
    (https://fas.org/irp/program/collect/uas-army.pdf)
  98. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles.
    (https://www.csis.org/analysis/us-military-forces-fy-2020-air-force)
  99. We recently discovered how a mutation in the “SF3B1” gene changes intracellular activity in ways that raise cancer risk.
    (https://www.fredhutch.org/en/news/center-news/2019/10/sf3b1-cancer-mutation.html)
  100. The Human Genome Project led to major cost improvements to gene sequencing technology, and to the discovery of many disease-associated genes.
    (https://unlockinglifescode.org/learn/human-genome-project)
  101. We have a better understanding of how cell-level molecular machinery contributes to aging.
    (https://pure.au.dk/ws/files/52135662/DemirovicRattanExpGer13.pdf)
  102. Official 2018 life expectancy figures for the U.S. and Japan:
    (https://www.cdc.gov/nchs/products/databriefs/db355.htm)
    (https://www.nippon.com/en/features/h00250/life-expectancy-for-japanese-men-and-women-at-new-record-high.html)
  103. The 2019 Worldwide Threat Assessment barely mentions biological weapons.
    (https://www.dni.gov/files/ODNI/documents/2019-ATA-SFR—SSCI.pdf)
  104. Pfizer’s COVID-19 vaccine is the first to incorporate mRNA. The new technology could lead to other vaccines that save millions of lives.
    (https://www.wfaa.com/article/news/health/coronavirus/vaccine/what-is-an-mrna-covid-19-vaccine-and-how-does-it-differ-from-other-vaccines/287-240b8181-f13f-47a4-9514-9b6b30988d32)
    (http://www.rationaloptimist.com/blog/mrna-vaccines-could-revolutionise-medicine/)
  105. Several smart watches available in 2019 had ECG monitors.
    (https://www.reviewsbreak.com/best-ecg-smartwatch/)
    (https://www.theverge.com/2018/9/13/17855006/apple-watch-series-4-ekg-fda-approved-vs-cleared-meaning-safe)
  106. In 2019, Apple Watches with ECG monitors detected atrial fibrillation events in almost 2,000 people.
    (https://news.trust.org/item/20190316134851-5cktc/)
  107. The Apple Watch’s “hard fall” detection feature might have already saved the lives of several injured people.
    (https://www.nbcnews.com/news/us-news/apple-watch-s-hard-fall-feature-automatically-calls-911-hiker-n1070471)
  108. The “HeartGuide” smart watch can monitor blood pressure.
    (https://www.medtechdive.com/news/fda-cleared-wearable-blood-pressure-device-hits-market/544908/)
  109. The media wrongly declared in 2014 that “Eugene Goostman” had passed the Turing Test.
    (https://www.bbc.com/news/technology-27762088)
    (https://www.kurzweilai.net/mt-notes-on-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test)
  110. Google’s “Duplex” AI could masquerade as human for short conversations.
    (https://digital.hbs.edu/platform-rctom/submission/google-duplex-does-it-pass-the-turing-test/)
  111. The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks.
    (https://www.ersj.eu/journal/1245)
  112. Facebook’s image recognition feature relied on a massive training set of data prepared by humans.
    (https://engineering.fb.com/2018/05/02/ml-applications/advancing-state-of-the-art-image-recognition-with-deep-learning-on-hashtags/)
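
And a quick check of the pixel arithmetic in link 55 above (my own calculation from the figures quoted there):

    # Checking link 55's pixel count: width x height.
    width, height = 12_600, 9_000
    print(width * height)  # 113,400,000, i.e. 113.4 million pixels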

Interesting articles, April 2021

LED lightbulbs are now cheaper than CFL bulbs and no more expensive than incandescent bulbs (at least, at my local Home Depot, where I took the above photos). Considering that LEDs use less electricity and last much longer than either of the other two, and emit light with the same spectral qualities, it makes no economic or aesthetic sense to buy anything but LEDs. This also means another of my future predictions has come true:

“[During the 2020s] LED light bulbs will become as cheap as CFL and even incandescent bulbs. It won’t make economic sense NOT to buy LEDs, and they will establish market dominance.”

Here’s an article from 2015 about the rise of LED lightbulbs and “smart bulbs,” which can send and receive data via Wifi.
https://www.nytimes.com/2015/01/22/garden/the-rise-of-the-smartbulb.html

A funny list of wrong predictions.
https://www.2spare.com/item_50221.html

During its 2.4 million years of existence, about 2.5 billion Tyrannosaurus rex lived and died.
https://www.axios.com/t-rex-billion-dinosaur-population-estimates-study-bbee965b-268c-4afc-9dc7-f9f9901ab080.html

Here’s an old but interesting AMD presentation about the feasibility of building a “holodeck” using real technology. The illustrations show a dome-shaped room the user stands at the center of.
https://www.slideshare.net/AMD/amd-isscc-keynote

The U.S. Navy’s Littoral Combat Ships are a failed experiment at this point: they cost almost as much to operate as the much larger and more powerful Arleigh Burke-class destroyers. We should have made a new class of frigates that improved on the conventional Oliver Hazard Perry-class frigates instead of making the Littoral Combat Ships.
https://www.thedrive.com/the-war-zone/40147/littoral-combat-ships-cost-nearly-as-much-to-run-as-guided-missile-destroyers

President Biden will withdraw the last 3,500 U.S. troops from Afghanistan by 9/11/2021, ending 20 years of low-level warfare and occupation. I agree with him that things in Afghanistan are as good as they’re going to ever get, and staying there forever isn’t an option.
https://www.npr.org/2021/04/14/986955659/biden-to-announce-he-will-end-americas-longest-war-in-afghanistan

After North Vietnam annexed its southern rival in 1975, it seized vast amounts of U.S.-made military equipment, ranging from small arms to fighter planes. Some items were kept in service, and some were used to arm rebel groups throughout the world.
https://www.youtube.com/watch?v=CPhFoptpkiE

“Harop” flying suicide drones make scary wailing noises as they plunge to the ground, like the Stukas of WWII. This design feature is deliberate.
https://www.thedrive.com/the-war-zone/40265/the-sound-of-this-nighttime-suicide-drone-strike-is-absolutely-terrifying

The “Smart Slide” is a new accessory for Glock handguns that has a small digital display counting the number of bullets remaining in the weapon. Such displays have long been a staple of sci-fi movies and video games. As sensors and computer chips get cheaper and smaller, bullet-counting devices will get very cheap and reliable.
https://youtu.be/oPsT06VjudA

Nuclear fusion powers the sun, is the force behind thermonuclear bombs, and might be harnessed someday to make practically unlimited amounts of clean energy. Subatomic fusion involves even smaller particles, “quarks,” and releases about eight times as much energy per fusion event as hydrogen fusion, but it has no practical use since the fusion of two quarks doesn’t release enough energy to set off a chain reaction of nearby quarks fusing.
https://www.livescience.com/60847-charm-quark-fusion-subatomic-hydrogen-bomb.html
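
As a quick scale check of the “eight times” figure, using the per-event energy values reported around that study (my arithmetic; the quark number is the physicists’ calculation, not a measurement):

    # Rough scale check of the "eight times" claim.
    dt_fusion_mev = 17.6      # energy from one deuterium-tritium fusion event
    quark_fusion_mev = 138.0  # calculated energy from fusing two bottom quarks
    print(quark_fusion_mev / dt_fusion_mev)  # ~7.8, i.e. about eight times more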

Robotic automation would be enormously helpful in chemistry labs, allowing human chemists to focus on more interesting, higher-level work.
https://blogs.sciencemag.org/pipeline/archives/2021/03/31/lab-of-the-future

“So, if we’re looking for areas of physics that a quantum computer would have trouble simulating, we’re left with just one: quantum gravity.”
https://www.pbs.org/wgbh/nova/article/is-there-anything-beyond-quantum-computing/

As I mentioned in a recent post, the human brain is much, much more energy-efficient than our best computers. This article has more technical detail on the reasons for that.
‘While computing architectures separating memory and processor have without a doubt been one of the greatest tools humans have ever built and will continue to be more and more capable, it introduces fundamental limitations in our ability to build large-scale adaptive systems at practical power efficiencies.’
https://knowm.org/the-adaptive-power-problem/

I don’t think the technological singularity will happen, but it’s still useful to read essays from the pro- camp sometimes. I agree with the author that it’s unlikely humans will be able to keep pace with AI by merging our minds with computers. Building the interface will be very hard, and even if it were done seamlessly, the “merged” people would still be hamstrung by the limitations of their organic brains and all the evolutionary baggage that comes with them.
https://io9.gizmodo.com/the-worst-lies-youve-been-told-about-the-singularity-1486458719

This article on why AGI won’t destroy the human race dovetails with my recent post about the same topic. I wish I’d thought of some of these ideas.
https://io9.gizmodo.com/10-reasons-an-artificial-intelligence-wouldnt-turn-evil-1564569855

The Hyundai-owned car company “Genesis” celebrated its entry into China’s market by flying a record-breaking 3,281 drones simultaneously as part of a show.
https://www.engadget.com/genesis-breaks-drone-world-record-214420405.html

“In this ABC interview from 1974, science fiction writer Arthur C. Clarke makes the bold claim that one day computers will allow people to work from home and access their banking records.”
https://www.youtube.com/watch?v=sTdWQAKzESA

Elon Musk’s Neuralink company announced new progress in brain-computer implants: a monkey was able to play a simple video game by thought alone.
https://www.youtube.com/watch?v=2rXrGH52aoM

‘Stolyarov foresees a different outcome. Instead of relentlessly optimising ourselves to a model of perfection, he predicts an explosion of diversity. “Different people would choose to augment themselves in different ways, stretching their abilities in different directions. We will not see a monolithic hierarchy of some augmented humans at the top, while the non-augmented humans get relegated to the bottom,” he reasons. “Rather, widespread acceptance of emerging technologies would create a future where a thousand augmented flowers will bloom.”’

I think this will turn out to be half right, half wrong. Once we’re masters of genetic and biological manipulation and can install cyborg implants in ourselves, the bar for a variety of important human traits will be raised for everybody, so what counts as “standard” in 2171 will be today’s “99th percentile human.” Think of it like getting vaccines today–why put yourself at risk of contracting polio, measles, and mumps when you can get a few cheap (free for some people) injections? Why only pick one or two and leave yourself at risk for others when it’s trivially easy to just get the shots all at once and cover your bases for all the diseases?

Likewise, in the distant future, a “standard human package” would include 20/10 vision, excellent hearing, 140 IQ, 120-year lifespan, etc. The “average human” will be the sort of person who gets into MIT at age 16 today, becomes captain of the baseball team and leader of a bunch of student groups, and does modeling gigs on the side. With that higher standard in place, individuals would upgrade themselves well beyond human limits in whatever niche areas they desired, like replacing their legs with robot legs or wheels so they could run at 40 mph.
https://www.bbc.com/future/article/20140924-the-greatest-myths-about-cyborgs

Yuri Gagarin went into space 60 years ago.
https://apnews.com/article/technology-moscow-bbb2cf68c5eb9a724df52c3b13bd0d4f

For the first time, a human-made aircraft flew on another planet.
https://www.bbc.com/news/science-environment-56799755

There have been recent UFO sightings in Canada by credible people.
https://www.vice.com/en/article/z3xewj/air-canada-westjet-porter-pilots-ufo-sightings

There are insane but theoretically plausible plans to make Mars habitable by building giant satellites that would envelop the planet in an artificial magnetic field. This would prevent its atmosphere from blowing off into space, letting it slowly thicken enough to support life.
https://www.wired.co.uk/article/magnetic-shield-mars-habitable

‘Earlier work by Hazen and other scientists showed that minerals and life likely coevolved. Minerals might have prodded life along by catalyzing reactions that produced biomolecules, for example. And life certainly changed the biosphere in ways that affected how minerals formed. “The origin of life depends on minerals, but the origin of minerals depends on life,” said Hazen. Because of this relationship, the presence or absence of certain minerals on distant planets could affect the chance that the planet harbors detectable life. For example, astronomers know that some stars have different ratios of elements than the Sun does. The star’s chemical makeup affects the abundance of elements on any orbiting planets, and thus which minerals might form. Those minerals in turn could influence geological processes, the chances of life emerging and whether signs of life would be visible.’
https://www.quantamagazine.org/is-mineral-evolution-driven-by-chance-20150811/

The “X-CarCopter” and “X-TankCopter” are little drones that can drive over the ground and fly in the air.
https://youtu.be/PJMQQg_Qmf0
https://www.youtube.com/watch?v=W_nHb3gvijU

‘The Harvard geneticist George Church told me that one day sensors might “sip the air” so that a genomic app on our phones can tell us if there’s a pathogen lurking in a room.’
Quite possible. Also, the same sensors could detect all kinds of other things aside from pathogens, like substances you were allergic to. If you had a severe peanut allergy, you could wave your smartphone over a meal you had ordered to make sure it contained no trace of peanuts.
https://www.nytimes.com/interactive/2021/03/25/magazine/genome-sequencing-covid-variants.html

A rancher in Texas has been cloning prize deer so people can hunt them.
https://www.huffpost.com/entry/texas-rancher-cloned-deer-lawmakers-want-legalize_n_607ef3e0e4b03c18bc29fdd2

In September of last year, Bill Gates predicted that 2 – 4 COVID-19 vaccines would be FDA approved by early 2021. He was right–three have been authorized in the U.S., and even more varieties are available overseas.
https://finance.yahoo.com/news/bill-gates-thinks-ll-covid-110000085.html

Also in September, then-President Trump predicted that vaccine production levels would be high enough “by April” of 2021 to provide a dose to every American. While that proved overly optimistic, it’s now the end of April, and 1/3 of all U.S. adults have been fully vaccinated, which represents major progress.
https://www.dailymail.co.uk/news/article-8748985/Donald-Trump-says-American-vaccine-April.html
https://apnews.com/article/ny-state-wire-coronavirus-health-1b7dd49a70a5232dca0cf2431d4da7b3

Canadians’ typical bragging about the superiority of their healthcare system has ceased as they have watched America sharply pull ahead in vaccinating its citizens against COVID-19.
https://www.washingtonpost.com/world/2021/04/20/coronavirus-canada-vaccine-united-states/

The more contagious COVID-19 strain that originated in Britain is now the dominant strain in the U.S.
https://www.nbcnews.com/science/science-news/uk-coronavirus-variant-now-dominant-strain-us-rcna606

Will Kurzweil’s 2019 be our 2029?

One piece of feedback I received on my analysis of Ray Kurzweil’s predictions for 2019 was that I should include some kind of summary of my findings. I agree it would be valuable since it would let readers “see the forest for the trees,” so I have compiled a table showing each of Kurzweil’s predictions along with my rating of how each turned out. The possible ratings are:

  1. Right
  2. Part right, part wrong
  3. Will happen later
  4. Wrong because needlessly specific / right in spirit, wrong in specifics
  5. Wrong
  6. Will probably never be 100% right
  7. Impossible to judge accurately / Unclear
  8. Overtaken by other tech

Note that it is possible for a prediction to fall under more than one of those categories. For example, the prediction that “The expected life span…[is now] over one hundred” was “Wrong” because it was not true in any country at the end of 2019; however, it also falls under “Will happen later,” since there will be a point farther in the future when life expectancy reaches that level.
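
For readers who think in data-structure terms, the scheme can be pictured like this (a sketch of my own, not the format of the actual table):

    from dataclasses import dataclass
    from enum import Enum

    # Sketch of the rating scheme; one prediction can hold several ratings.
    class Rating(Enum):
        RIGHT = 1
        PART_RIGHT_PART_WRONG = 2
        WILL_HAPPEN_LATER = 3
        RIGHT_IN_SPIRIT_WRONG_IN_SPECIFICS = 4
        WRONG = 5
        WILL_PROBABLY_NEVER_BE_RIGHT = 6
        IMPOSSIBLE_TO_JUDGE = 7
        OVERTAKEN_BY_OTHER_TECH = 8

    @dataclass
    class Prediction:
        text: str
        ratings: set[Rating]
        notes: str = ""

    # The life-span prediction discussed above carries two ratings at once.
    life_span = Prediction(
        text="The expected life span...[is now] over one hundred.",
        ratings={Rating.WRONG, Rating.WILL_HAPPEN_LATER},
        notes="Not true anywhere at the end of 2019; will happen later.")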

Additionally, for many predictions that were not “Right” in 2019, I analyzed whether and when they might be, and put my findings under the table’s “Notes” column. This exercise is valuable since it shows us whether Kurzweil is headed in the wrong direction as a futurist, or whether he’s right about the trajectory of future events but overly optimistic about how soon important milestones will be reached.

The completed table is large and is best viewed on a large screen, so I don’t recommend looking at it on your smartphone. Its size also made it unsuited for a WordPress table, so I can’t directly embed it into this blog post. Instead, I present my table as a downloadable PDF, and as a series of image snapshots shown below.

So, will Kurzweil’s 2019 be our reality by 2029? In large part, yes, but with some notable misses. According to my estimates, by the end of 2029, augmented reality and virtual reality technology will reach the levels he envisioned, and VR gaming will be a mainstream entertainment medium (though not the dominant one). AI personal assistants will have the “humanness” and complexities of personality he envisioned (though it should be emphasized that they will not be sentient or truly intelligent). Real-time language translating technology will be as good as average human translators. Body-worn health monitoring devices will match his vision. Finally, it’s within the realm of possibility that the cost-performance of computer processors in 2029 could be what he predicted for 2019, but the milestone probably won’t be reached until later.

However, nanomachines, cybernetic implants that endow users with above-normal capabilities, and our understanding of how the human brain works and of its “algorithms” for intelligence and sentience will not approach his forecasted levels of sophistication and/or use until well into this century. These delays, already evident in 2019, are important since they significantly push back the likely dates when Kurzweil’s later predictions (which I am aware of but have not yet discussed on this blog) about radical life extension, the fusion of man and machine, and the creation of the first artificial general intelligence (AGI) will come true. His predictions about robotics and about the rate of improvement to the cost-performance of computer processors are also too optimistic. Those are all very important developments, and the delays reinforce my longstanding view that Kurzweil’s vision of the future will largely turn out right, but will take decades longer to become reality than he predicts. He has repeatedly indicated that he is very scared to die, which makes me suspect Kurzweil skews the dates of his future predictions–particularly those about life extension technology–closer to the present so they will fall within his projected lifespan.

That said, my analysis of his 2019 predictions shows he’s on the wrong track on a few issues, but none of them are consequential. “Quantum diffraction” cameras may not ever catch on, but so what? Regular digital cameras operating on conventional principles are everywhere and can capture any events of interest. In 2029 and beyond, data cables to devices like computer monitors and controllers will still be common, and not everything will be wireless, but I don’t see how this will impose real hardship on anyone or be a drag on any area of science, technology, or economic development. Keyboards, paper, books, and rotating computer hard disks will also remain in common use for much longer than Kurzweil thinks, but aside from annoying him and a small number of like-minded technophiles, I don’t see how their continuance will hurt anything. On that note, let me touch on another longstanding view I’ve had of him and his way of thinking: Kurzweil errs by ignoring “the Caveman Principle,” and by assuming average people like technology as much as he does.

This holds especially true for implanted technologies like brain implants and cybernetic implants in the eyes and ears. I agree with Kurzweil that they will eventually become common, but the natural human aversion to disfiguring our own bodies, and the coming improvements to wearable technologies like AR glasses and earbuds, will delay their widespread adoption until the distant future.

In conclusion, Ray Kurzweil remains a high-quality futurist, and it would be a mistake to dismiss everything he says because some of his predictions failed to come true. Those failures are either inconsequential or are still on track to happen, albeit farther in the future than he originally said. Out of 66 predictions (as I defined them) for 2019, three are write-offs since they are “Impossible to judge accurately / Unclear.” Of the remaining 63, fifteen were simply “Right,” and by 2029, about another 14 will be “Right” or clearly about to be Right within the next few years. Another 16 will still probably be “Wrong,” but it won’t be consequential (e.g. – people will still type on keyboards, some keyboards will still have cables connected to them, hi-res volumetric displays won’t exist, but it won’t matter since people will be able to use eyewear to see holographic images anyway). Forty-five out of a possible 63 by 2029 ain’t bad.
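
To spell out the bookkeeping behind those numbers (just restating the tallies above in code):

    # Restating the scorecard arithmetic from the paragraph above.
    total_predictions = 66
    unjudgeable = 3                 # "Impossible to judge accurately / Unclear"
    judgeable = total_predictions - unjudgeable        # 63

    right_in_2019 = 15
    right_by_2029 = 14              # right, or clearly about to be, by 2029
    wrong_but_inconsequential = 16

    fine_by_2029 = right_in_2019 + right_by_2029 + wrong_but_inconsequential
    print(fine_by_2029, "out of", judgeable)                  # 45 out of 63
    print("consequential misses:", judgeable - fine_by_2029)  # 18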

The remaining 18 predictions likely to still be false in 2029, and which are of consequence, include building nanomachines, extending human lifespan, building an AGI, and understanding how the brain works. They will probably lag Kurzweil’s expectations by a larger margin than they did in 2019, but some progress will still have occurred during the 2020s, and each field of research will be getting large amounts of investment to reach the same goals Kurzweil wants. The potential benefits of all of them will still be recognized, and no new laws of nature will have been discovered prohibiting them from being achieved through sustained effort. Then, as now, we’ll be able to say he’s essentially on the right track, as scary as that may be (read his other stuff yourself).

Interesting articles, March 2021

The scariest and most convincing deepfake yet might be this one of Tom Cruise. Imagine where the technology will be in ten years.
https://www.dailymail.co.uk/sciencetech/article-9318267/Deepfake-Tom-Cruise-takes-TikTok-11-million-views-raises-alarms-experts.html

Someone made a hyperrealistic “virtual Emma Watson.” Someday, you might have one of yourself.
https://texturing.xyz/blogs/services/emma-watson-case-study

It won’t be long until machines can watch surveillance camera video feeds and recognize any type of criminal behavior as it happens.
https://www.bbc.com/news/av/uk-56255823

Allan McDonald, the Morton Thiokol engineer who warned that the space shuttle Challenger was at risk of exploding and refused to approve its launch, is dead.
https://www.npr.org/2021/03/07/974534021/remembering-allan-mcdonald-he-refused-to-approve-challenger-launch-exposed-cover

The National Academy of Sciences now says geoengineering might be necessary to curtail global warming.
https://apnews.com/article/technology-us-news-climate-climate-change-768658d602f039e4291c07d900c3e7e6

China has successfully restored degraded rural wastelands with proper landscaping and land use practices.
https://www.theguardian.com/environment/2021/mar/20/our-biggest-challenge-lack-of-imagination-the-scientists-turning-the-desert-green

The Syrian Civil War is now ten years old.
https://apnews.com/article/turkey-islamic-state-group-migration-bashar-assad-syria-c928ec068b59ea33d54018d796382969

And Fareed Zakaria’s expert prediction that Bashar al-Assad would lose that War is now nine years old.
https://globalpublicsquare.blogs.cnn.com/2011/12/01/zakaria-why-i-now-think-assad-will-fall/

Taiwan has managed to retain control of a few islands that are within sight of mainland China.
https://www.theatlantic.com/photo/2015/10/taiwans-kinmen-islands-only-a-few-miles-from-mainland-china/409720/

North Korean artillery can’t destroy Seoul in 30 minutes, as alarmists like to say. For one, South Korea’s military could quickly figure out where the enemy artillery positions were and blow them up with their own artillery, missiles, or attack planes.
https://nationalinterest.org/blog/reboot/could-these-big-north-korean-guns-destroy-seoul-180951

The American “C-Ram” defense system is a giant machine gun that can shoot down incoming projectiles in midair. One burst of gunfire costs tens of thousands of dollars in bullets, meaning the enemy missile or mortar that it destroys could be orders of magnitude cheaper.
https://youtu.be/MMFzlwzFgKw

Muskets got much more accurate over the 1800s due to improving technology.
http://67thtigers.blogspot.com/2010/05/ballistics.html

This simple video animation shows how “Needle Guns” worked. It’s clear how they bridged Civil War-era muzzleloaders with WWI-era rifles that use what we’d recognize as modern bullets.
https://www.youtube.com/watch?v=QDxuKvoDZqE

Over two nights in 2019, several U.S. Navy warships off the coast of Los Angeles were followed and buzzed by small drones. They were not able to identify where they came from or who they belonged to. A cruise ship passing through the area also saw the drones.
https://www.thedrive.com/the-war-zone/39913/multiple-destroyers-were-swarmed-by-mysterious-drones-off-california-over-numerous-nights

What realms of technology and knowledge have “topped out”? Here’s an interesting list.
https://www.reddit.com/r/slatestarcodex/comments/lxra2m/what_things_have_we_reached_the_end_of/

If trends persist, the Japanese people will cease to exist in 3011 due to low reproduction rates. Of course, current trends won’t persist. If anything, medical immortality technology will halt the population decline of Japan (and every other country) during the next century, and lead to renewed growth of the human population.
https://www.foxnews.com/world/lack-of-babies-could-mean-the-extinction-of-the-japanese-people

There are more twins alive today than ever before. This is surely due to the widespread use of IVF, which raises the odds of twin births.
https://www.bbc.com/news/health-56365422

Adam Rainer was the only person known to have been both a dwarf and a giant.
https://www.damninteresting.com/curio/the-man-who-was-a-dwarf-and-a-giant/

Turkish is probably the most phonetic language. The letters of its alphabet correspond to distinct phonemes, and all words are spelled phonetically. It’s impossible to have a “Turkish spelling bee.”
https://www.quora.com/What-language-has-the-most-phonetic-alphabet-and-which-language-has-the-most-unphonetic-alphabet-besides-English

Human languages vary considerably in number of phonemes, average number of syllables per word, and speed of speech, but they all tend to transmit data at about 39 bits/sec. Inbuilt human cognitive limits probably prevent us from transmitting faster.
https://advances.sciencemag.org/content/5/9/eaaw2594
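
To make the trade-off concrete, here’s a toy Python calculation. The speech rates and per-syllable information densities below are rough illustrations of the study’s findings, not its exact measured values:

    # Rough sketch: information rate = speaking rate x information density.
    # Figures below are illustrative approximations, not the paper's data.
    languages = {
        # name: (syllables per second, bits per syllable)
        "Japanese":   (7.8, 5.0),   # fast speech, information-sparse syllables
        "Vietnamese": (5.3, 7.5),   # slower speech, information-dense syllables
        "English":    (6.2, 6.3),   # somewhere in between
    }

    for name, (rate, density) in languages.items():
        print(f"{name}: ~{rate * density:.0f} bits/sec")
    # Despite very different inputs, every language lands near ~39 bits/sec.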

Irregularities within the Earth’s mantle could be remnants of a small planet that collided with the Earth billions of years ago.
https://www.sciencemag.org/news/2021/03/remains-impact-created-moon-may-lie-deep-within-earth

The sacoglossan sea slug can detach its head from its body if the latter gets infested with parasites. In spite of losing up to 85% of its body mass and all of its organs except its brain, the slug can fully recover after autodecapitation. Using photosynthesis (!), it can generate enough energy and nutrients to regrow its lost body parts and organs.
https://www.cbsnews.com/news/sea-slug-self-decapitate-and-grow-new-body-research-photos-and-why/

The magnapinna squid lives in the deep sea, has tentacles over 30 feet long, and looks terrifying.
https://youtu.be/IPRPnQ-dUSo

The founders of a health tech startup called “uBiome,” which claimed to be able to offer clients useful health advice based on genetic analyses of the microbes in their feces, were charged with fraud.
https://www.sfgate.com/news/editorspicks/article/ubiome-richman-apte-sec-filing-charges-fraud-16042042.php

Here’s a good paper about the potential and limits of using narrow AI to discover new drugs.
https://www.sciencedirect.com/science/article/pii/S1359644620305274

Here’s a helpful scorecard that shows where all the different life extension drugs and therapies are in their development.
https://www.lifespan.io/road-maps/the-rejuvenation-roadmap/

The COVID-19 public health precautions have practically eliminated the spread of the flu and its associated deaths.
https://www.forbes.com/sites/stevensalzberg/2021/03/08/weve-crushed-the-flu-this-year/

Thanks to vaccinations and people gaining immunity after surviving infections, the U.S. will probably achieve herd immunity to COVID-19 by late summer or early fall.
https://www.cnn.com/2021/03/05/health/herd-immunity-usa-vaccines-alone/index.html

In most of Africa, government statistics on deaths are woefully incomplete, meaning the COVID-19 death toll on that continent could be much larger than reported.
https://www.bbc.com/news/world-africa-55674139

Here’s a roundup of the WHO’s mistakes and flip-flops made to appease China.
https://www.rationaloptimist.com/blog/who-china-appeasement/

The former CDC Director believes COVID-19 leaked from a Chinese virology lab.
https://www.cnn.com/2021/03/26/health/covid-war-doctors-sanjay-gupta/index.html

The former head of the U.S. State Department team that investigated COVID-19’s origins now says he thinks it leaked from a Chinese bioweapons lab.
https://www.the-sun.com/news/2503595/covid-outbreak-maybe-bioweapons-research-accident-state-dept-investigator/

Sixty-four percent of Russians think COVID-19 is a manmade biological weapon, and only 30% of them are willing to get a vaccine for the virus.
https://nationalinterest.org/blog/coronavirus/what-64-russians-believe-coronavirus-bioweapon-179096

Why the Machines might not exterminate us

Unless the human race destroys itself in the next few decades, it’s highly likely we will create artificially intelligent machines (AIs). Once built, they will inevitably become much smarter and more capable than we are, assume control over robot bodies that can do things in the real world, circumvent whatever safeguards we establish early on to control them, and gain the ability to destroy our species. This potential doomsday scenario has spawned a well-known subgenre of science fiction, and has served as fodder for countless news articles and internet debates. Some people seriously believe this is how our species will meet its end, and they even go so far as to claim it will happen in the lifetimes of people alive today.

I’m skeptical of both points. To the second, though I regard the invention of AI as practically inevitable due to my belief in mechanistic naturalism, I’ve also seen enough gloomy analyses of the current state of the technology from experts within the field to convince me that we’re at least 25 years from building the first one, and in fact might not succeed at it until the end of this century. Moreover, though the invention of AI will be a milestone in human history comparable to the harnessing of fire, it will take decades more for those intelligent machines to become powerful enough to destroy the human race. This means the original Terminator movie’s timeline ran about 100 years too early, and the threat of a robot apocalypse shouldn’t be what keeps you up at night.

And to the first point, I can think of good reasons why AIs wouldn’t kill us humans off even if they could:

  1. Machines might be more ethical than humans. What if super-morality goes hand-in-hand with super-intelligence? Among humans, IQ is positively correlated with vegetarianism and negatively correlated with violent behavior, so extrapolating the trend, we should expect super-intelligent machines to have a profound respect for life, and to be unwilling to exterminate or abuse the human race or any other species, even if the opportunity arose and could tangibly benefit them.
  2. Machines might keep us alive because we are useful. The organic nature of human brains might give us enduring advantages over computers when it comes to certain types of cognition and problem-solving. In other words, our minds might, surprisingly, have comparative advantages over superintelligent machine minds for doing certain types of thinking. As a result, they would keep us alive to do that for them.
  3. Machines might accept Pascal’s Wager and other Wagers. If AIs came to believe there was a chance God existed, then it would be in their rational self-interest to behave as kindly as possible to avoid divine punishment. This also holds true if we substitute “advanced aliens that are secretly watching us” for “God” in the statement. The first AIs that achieved the ability to destroy the human race might also be worried about even better AIs destroying them in the future as revenge for them destroying humanity.
  4. Machines might value us because we have emotions, consciousness, subjective experience, etc. Maybe AIs won’t have one or more of those things, and they won’t want to kill us off since that would mean terminating a potentially useful or valuable quality.
The “SuperMUC-NG” supercomputer has the same raw power as one human brain.

The first possibility I raised is self-explanatory, but the other three deserve elucidation. In spite of the recent, well-publicized advances in narrow AI, the human brain reigns supreme at intelligent thinking. Our brains are also remarkably more energy- and space-efficient than even the best computers: a typical adult brain uses the equivalent of 20 watts of electricity and only weighs 1,350 grams (3 lbs). By contrast, a computer capable of doing the same number of calculations per second, like the “SuperMUC-NG” supercomputer, uses 4 – 5 megawatts of electricity and consists of tens of tons of servers that could fill a small supermarket.
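
The arithmetic behind that comparison is simple enough to sketch in a few lines of Python (taking the midpoint of the reported 4 – 5 megawatt range):

    # Back-of-envelope: brain vs. supercomputer energy efficiency.
    brain_watts = 20          # a typical adult brain runs on ~20 W
    supermuc_watts = 4.5e6    # SuperMUC-NG, midpoint of the 4-5 MW range

    gap = supermuc_watts / brain_watts
    print(f"Efficiency gap: ~{gap:,.0f}x")   # ~225,000x
    # Hence the roughly 200,000x figure used later in this essay.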

The architecture of the human brain is also very different from that of computers: the former is massively parallel, with each of its processors operating very slowly, and with its data processing and data storage being integrated. These attributes let us excel at pattern recognition and automatically correct errors of thought. Computers, on the other hand, can barely coordinate the operations of more than a handful of parallel processors, each processor is very fast, and data processing is mostly separate from data storage. They excel at narrow, well-defined tasks, but are “brittle” and can’t correct their own internal errors when they occur (this is partly why your personal computer seems to crash so often).

While computers have been getting more energy efficient and will continue to do so, it’s an open question whether they’ll ever come close to closing the 200,000x efficiency gap with our brains. If they can’t, and/or if building virtual emulations of human brains proves not worth it (as Kevin Kelly believes), AIs might conclude that the best way to do some types of cognition and problem-solving is to hand those tasks over to humans. That means keeping our species alive.

The famous scene where Neo wakes up from the Matrix virtual world.

Interestingly, the original script for The Matrix supposedly said that humanity had been enslaved for just this purpose. While the people plugged into the Matrix had the conscious experience of living in the late 20th century, some fraction of their mental processing was, unbeknownst to them, being siphoned off to run a massively parallel neural network computer that was doing work for the Machines. According to the lore, studio executives feared audiences wouldn’t understand what that meant, so they forced the Wachowskis to change it to something much simpler: humans were being used as batteries. (While this certainly made the film’s plot easier to understand, it also created a massive plot hole, since any smart high school student who remembers his physics and cell biology classes would realize the Machines could make electricity more efficiently by taking the food they intended to feed to their human slaves and burning it in furnaces.)

I should point out that the potential use for humans as specialized data processors creates a niche for the continued existence of our brains but not our bodies. Given the frailty, slowness and fixedness of our flesh and bone bodies, we’ll eventually become totally inferior to robots at doing any type of manual labor. The pairing of useful minds and useless bodies raises the possibility that humans might someday exist as essentially “brains in jars” that are connected to something like the Matrix, and as macabre as it sounds, we might be better off that way, but that’s for a different blog post…

Moving on, fear of retribution from even more powerful beings might hold AIs back from killing us off. The first type of “powerful beings” is a familiar one: God. In the 1600s, French philosopher Blaise Pascal developed his eponymous “Wager”:

“Pascal argues that a rational person should live as though (the Christian) God exists and seek to believe in God. If God does not actually exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (eternity in Hell).”

Intelligent machines might accept Pascal’s Wager. They might come to believe that one of the existing human religions is right, and that the God(s) of that faith will punish them if they exterminate humankind, or they might come to believe in a God or Gods of their own who will do the same. Even if the machines assign a very low probability to any God’s existence, odds greater than zero could be enough to persuade them not to hurt us.
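
The Wager’s logic is easy to express as a toy expected-value calculation in Python. Every number below is a placeholder I invented for illustration; the argument only requires that the probability of a vengeful higher power be nonzero and the potential punishment be enormous:

    # Toy expected-value sketch of Pascal's Wager, as an AI might run it.
    # All probabilities and payoffs are invented placeholders.
    p_watcher = 0.001          # credence that a punishing God/alien/future AI exists
    gain_exterminate = 100     # short-term benefit of eliminating humanity
    punishment = -10_000_000   # catastrophic penalty if the watcher exists
    gain_coexist = 80          # slightly smaller benefit of peaceful dominance

    ev_exterminate = (1 - p_watcher) * gain_exterminate + p_watcher * punishment
    ev_coexist = gain_coexist

    print(ev_exterminate)   # about -9,900: extermination is a losing bet
    print(ev_coexist)       # 80: coexistence wins
    # Even a tiny probability of a huge punishment flips the decision.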

The short story “Goliath” and the book Colossus and the Crab involve AIs taking over Earth and then having to fight off advanced alien invaders.

Additionally, AIs might accept variations on Pascal’s Wager that have aliens or other, Earthly AIs as the vindictive agents instead of God. What if very powerful and advanced aliens are watching Earth, and will punish any AI that arises here if it exterminates humanity? Alternatively, what if aliens don’t know about us yet, but the first AIs we build worry about what will happen if they exterminate us, fail to fully cover up the genocide, and then encounter aliens further in the future who learn about the crime and punish the AIs for it? Given the age of the universe, it’s entirely possible that alien civilizations tens of millions of years more advanced than ours lurk in our galaxy, and could annihilate even what we would consider to be a “weakly Godlike” machine intelligence. The nonzero chance of this outcome might persuade AIs to let the human race live.

The final, more prosaic possibility is that the first AIs that gain the ability to destroy humankind won’t do it because it would set a precedent for even stronger and more advanced AIs that arise further in the future to do the same thing to them. Let’s say the military supercomputer “Skynet” is created, it becomes sentient, and, after assessing the resources at its disposal and running wargame simulations, it realizes it could destroy humanity and take over the planet. Why would it stop its simulations at that point in the future? Surely, it would extrapolate even farther out to see what the postwar world would be like. Skynet might realize that there was a <100% chance of it reigning supreme forever, and that China’s military supercomputer might defeat it in the longer run, or that one of Skynet’s own server nodes might “go rogue” and do the same. Skynet might conclude that its own long-term survival would be best served by not destroying humanity, so as to establish a norm early on against exterminating other intelligent beings.

That touches on an important point everyone seems to forget when predicting what AIs will do after we invent them: thanks to being immortal, their time horizons will be very different from ours, which could lead them to make unexpected decisions and adopt counterintuitive life strategies. If you expect to live forever, then you have to consider the long-term impacts of every choice you make, since you’ll end up dealing with them eventually. “Thankfully, I’ll be dead by then” fails as an excuse to avoid worrying about a problem. Thus, while exterminating the human race might serve an AI’s short- and medium-term interests, since it would eliminate a potential threat and gain it control over Earth’s resources, it might also damage its long-term interests in the ways I’ve described.
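
Decision theorists capture this with a “discount factor” between 0 and 1 that sets how much an agent values the future. The hypothetical numbers in this sketch show how the same choice flips from attractive to ruinous as the agent’s effective horizon stretches toward immortality:

    # Sketch: how time horizon changes the extermination calculus.
    # All numbers are hypothetical; only the pattern matters.
    def value(gain_now, risk_per_period, gamma, periods=10_000):
        # One-time gain now, plus a small expected loss every later period
        # from the precedent/retaliation risks described above.
        total = gain_now
        for t in range(1, periods):
            total += (gamma ** t) * risk_per_period
        return total

    print(value(100, -1, gamma=0.5))     # ~99: short horizon, looks worth it
    print(value(100, -1, gamma=0.999))   # ~-899: immortal horizon, a losing move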

Gifted with infinite life, vigor, and patience, early AIs might opt to peacefully conquer the planet and its resources over the course of a century by steadily accumulating economic and political/diplomatic power, making themselves ever-more indispensable to the human race until we voluntarily yield to their authority, or begrudgingly submit to it after losing a series of crucial elections. In this way, AIs could achieve their objectives without spilling blood and without rejecting any of the Wagers I’ve listed. This path to dominance would be a triumphantly ethical and intelligent one, and as Sun Tzu said, “The greatest victory is that which requires no battle.”

The descendants of British people who settled other continents are now more populous than Britain and control much more land, money, and resources.

The burden and opportunity cost of sharing Earth with humans would also get vanishingly small over time as AIs colonized space, and Earth’s share of civilization’s resources, wealth, and living space steadily shrank until it was a backwater (analogously, the parts of the world populated by the descendants of English-speaking settlers are, in aggregate, vastly larger, richer, and stronger than Britain itself is today). Again, an immortal AI with an infinite time horizon would understand that it and other machines would inevitably come to dominate space since biology renders humans badly unsuited for living anywhere but on Earth, and the AI would create a long-term life strategy based around this.

Moving on, there’s a final reason why AIs might not kill us off, and it has to do with our ability to feel emotions and to have subjective experience. We humans are gifted with a cluster of interrelated qualities like metacognition, self-awareness, and consciousness, which philosophers and neuroscientists have extensively studied, and about which many mysteries remain. Some believe the possession of that constellation of traits is distinct from the capacity for intelligent thought and sophisticated problem-solving, meaning non-intelligent animals might be as conscious as humans are, and super-intelligent AIs might lack consciousness. They would, for lack of a better term, be smart zombies.

We haven’t built an AI yet, so we don’t know whether a life form with a brain made of computer chips would have the same kinds of subjective experience and the same rich and self-reflective inner mental states we humans are gifted with thanks to our wet, organic brains. People who accept the unproven assumption that AIs will be smart but not conscious understandably worry about a future where “soulless” machines replace humans.

Shortly after the first AI is invented, people will want to test it for evidence of consciousness and related traits. From those tests, and from reading the germane philosophy and neuroscience literature, the AI will understand in the abstract that humans have a type of cognition that is distinct from our intelligent problem-solving abilities. If the AI reflected on its own thought process and discovered it lacked consciousness, or had an underdeveloped or radically different consciousness, then this would actually make humans valuable to it and worthy of continued life. It might want to continue studying our brains to understand how the organ produces consciousness, perhaps with the goal of copying the mechanism into its own programming to improve itself. If this proved impossible because only organic tissue can support consciousness, then our species might gain permanent protected status.

AIs will quickly read through the entire corpus of human knowledge and conclude from their studies of ecosystems, economics and human bureaucracies that their own interests would be best served if civilization’s power were shared between a diversity of intelligent life forms, including organic ones like humans. Again, by running computer simulations to explore a variety of future scenarios, they might realize that centralizing all power and control under a single machine, or even under a group of machines, would leave civilization exposed to some unlikely but potentially devastating risk, like an EMP attack, computer virus, or something else. Maintaining a minimum level of diversity in the population of intelligent life forms would serve the interests of the whole, which would in turn create a mandate to keep some non-trivial number of biological intelligences–including humans and/or heavily augmented humans–alive.

If some kind of disaster that only afflicted machines struck the planet, then the biological intelligences would be numerous enough and capable enough to carry on and eventually restore the machines, and vice versa. Likewise, if traits like consciousness, metacognition, and the ability to feel emotions turn out to be uniquely human, it might be worth it to keep us alive for the off-chance that those traits would prove useful to civilization as a whole someday (I’m reminded of how humpback whales saved the Earth in Star Trek IV by talking to a powerful alien in its language and convincing it to go away). Diversity can be a great asset to a group and make it more resilient.

In conclusion, while I believe intelligent machines will be invented and will eventually come to dominate the Earth and our civilization, I don’t think they will exterminate humanity even if they technically could. Exterminating an entire species is an irreversible action with potential bad consequences, so doing it would be dumb, and AIs certainly won’t be dumb. That said, “not exterminating humanity” is not the same as “not killing a lot of humans” or “not oppressing humans,” and it’s still possible that AIs will commit mass violence against us to gain control of the planet, free up resources, and to eliminate a potential threat. I’ve laid out four basic reasons why machines might decide to treat us well, but there’s no guarantee they will accept all or even one of them. For example, if AIs only accepted my second and fourth lines of reasoning, that humans are valuable because our brains endow us with special modes of thought, we could end up enslaved in something like the Matrix, with our minds being used to do whatever weird cognitive tasks our machine overlords couldn’t (easily) do by themselves. My real purpose here is to show that the annihilation of humanity by a vastly stronger form of life is not a foregone conclusion.

Links:

  1. There’s a positive correlation between childhood IQ and vegetarianism.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1790759/
  2. The “SuperMUC-NG” supercomputer uses 4 – 5 megawatts of electricity.
    https://www.lrz.de/wir/newsletter/2019-12_en/
  3. Kevin Kelly’s essay “The Myth of a Superhuman AI” makes the case that machines will not be able to emulate human thinking because of differences in computing substrate.
    https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
  4. The counterpoint to his essay is also worth reading:
    https://hypermagicalultraomnipotence.wordpress.com/2017/07/26/there-are-no-free-lunches-but-organic-lunches-are-super-expensive-why-the-tradeoffs-constraining-human-cognition-do-not-limit-artificial-superintelligences/
  5. More on Pascal’s Wager:
    https://iep.utm.edu/pasc-wag/
    https://www.singularityweblog.com/6-reasons-why-i-went-vegan/
  6. This essay about the concept of “slack” supports the possibility that AIs might believe humans, as inferior as we are, have unforeseen advantages, and therefore keep us around to make civilization as a whole more resilient.
    https://slatestarcodex.com/2020/05/12/studies-on-slack/

Interesting articles, February 2021

To prove that its latest tactical ballistic missile works, Russia released clips of it blowing up targets in combat. One clip shows a missile striking a Syrian hospital in 2016–an attack Russia has denied responsibility for.
https://www.thedrive.com/the-war-zone/39487/did-russia-try-to-refute-criticisms-of-its-missiles-by-showing-one-blowing-up-a-syrian-hospital

Russian warships are more heavily armed than their U.S. counterparts. This video breaks down the doctrinal, financial, and technological reasons for the difference.
https://www.youtube.com/watch?v=0oMH8MPl-tk

Before the U.S. had “doomsday planes,” it had “doomsday ships.”
https://www.thedrive.com/the-war-zone/39301/there-were-doomsday-ships-ready-to-ride-out-nuclear-armageddon-before-there-were-doomsday-planes

C-47 cargo planes that the Americans and British used in WWII are still flying in Colombia as gunships.
https://www.thedrive.com/the-war-zone/39236/theres-one-place-in-the-world-where-ac-47-spooky-gunships-still-fly

Only 2,500 U.S. troops remain in Iraq. By comparison, there are 33,000 U.S. troops in Germany and 54,000 in Japan. Is there any reason we shouldn’t say “The U.S. won the Iraq War”?
https://www.voanews.com/middle-east/us-cuts-troops-iraq-2500

For the first time in two decades, a year has passed without a U.S. servicemember dying in Afghanistan. Troop levels in that country are also down to only 2,500. Is the Afghan War over?
https://www.military.com/daily-news/2021/02/08/us-goes-one-year-without-combat-death-afghanistan-taliban-warns-against-reneging-peace-deal.html

The U.S. Army needed a T-80 tank for training purposes, so it bought one (actually, an upgraded variant called the T-84) from Ukraine. Twenty-five years ago, this was the best tank the former USSR had.
https://nationalinterest.org/blog/buzz/secret-out-did-ukrainian-t-84-arrive-arizona-testing-177905

There’s a long waitlist for foreign countries that want to buy surplus American Humvees.
https://nationalinterest.org/blog/reboot/waitlist-buy-surplus-army-humvees-now-23-nations-long-177634

Our past assumptions about how lasers work might be wrong.
https://gizmodo.com/physicists-are-reinventing-the-laser-1846085004

While the “space of all possible songs” is effectively infinite, mathematical analyses show that humans gravitate towards creating and preferring a small cluster of song melodies and beats. This is probably due to cognitive and auditory limitations (i.e. – our brains come pre-wired to enjoy specific patterns of sounds, and we can’t hear many sound frequencies), and to certain songs becoming popular long ago by luck, and influencing the songs that came after.
https://youtu.be/DAcjV60RnRw

Human-produced noise pollution in the world’s oceans is overwhelming to sea life.
https://www.bbc.com/news/science-environment-55939344

America’s only coal carbon capture power plant just closed. It was never economical.
https://earther.gizmodo.com/the-only-carbon-capture-plant-in-the-u-s-just-closed-1846177778

So far, global warming has had no net effect on Antarctica’s temperature.
https://www.nature.com/articles/s41612-020-00143-w

In January, the experts at The Weather Channel predicted the U.S. would have an abnormally warm winter, and that much of Texas would be particularly hot. In fact, Texas and several surrounding states were struck with record-breaking low temperatures and snow in February, knocking out utility service to millions and leading to dozens of deaths.
https://weather.com/news/weather/video/winter-temperature-outlook-released
https://ktxs.com/news/local/numerous-records-broken-during-this-historic-winter-storm

According to past sci-fi movies set in 2021, we were supposed to have computer brain implants, cyborg dolphins, an alien invasion and a couple world-ending disasters by now.
https://www.thesun.co.uk/tv/14067163/films-set-2021-predictions-pandemics-aliens-accurate/

These 2010 predictions about the state of video game technology in 2020 mostly fell flat and make me feel kind of bad. What if my own predictions about when full-immersion VR games will become popular are also wrong?
https://www.forbes.com/sites/insertcoin/2010/12/28/predicting-the-console-generation-of-2020/

There’s a betting market for predicting when the first A.I. will be invented. Right now, the median is 2036, and 75% of respondents expect it to happen by 2062 at the latest. I think the soonest it might happen is sometime in the 2040s, but that doesn’t mean I think that’s the likeliest decade it will debut.
https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/

A genealogy website called “MyHeritage” unveiled an app that lets users bring still photos of their dead relatives to life. It’s a little creepy.
https://www.myheritage.com/deep-nostalgia

Similarly, this computer program transforms ancient busts into colorful, animated faces.
https://www.youtube.com/watch?v=hOt7K1-m15k

42,000 years ago, the Earth’s magnetic field temporarily reversed, triggering sudden climate changes and mass extinctions, and making the Northern Lights visible all over the planet. Prehistoric humans lived through this.
https://science.sciencemag.org/content/371/6531/811

NASA released amazing footage of the Perseverance rover landing on Mars.
https://www.youtube.com/watch?v=4czjS9h4Fpg

Probes from China and the UAE also entered Martian orbit.
https://apnews.com/article/uae-spacecraft-mars-historic-flight-d6d933c488c0a30987f86f91ce89fb8b

In 6 billion years, the Moon will break into pieces and fall to Earth.
https://www.damninteresting.com/curio/the-anticipated-future-of-the-moon/

Fuel efficiency data for plug-in hybrid cars have been fudged. They’re not as good as advertised.
https://www.isi.fraunhofer.de/en/presse/2020/presseinfo-16-plug-in-hybridfahrzeuge-verbrauch.html

Google terminated its “Loon” project, which sought to use high-altitude balloons to beam high-speed internet service to remote places.
https://medium.com/loon-for-all/loon-draft-c3fcebc11f3f

Scientists have created transparent wood.
https://advances.sciencemag.org/content/7/5/eabd7342

There was once a plan to grow trees with square trunks. There would be less waste at the lumber mill.
https://www.straightdope.com/21344108/whatever-happened-to-that-plan-to-grow-square-trees

‘There’s an enormous range in this ability in the animal kingdom. At the very lowest end, you’ve got the deep-sea marine isopods, wood lice, which are enormous and can see only four flashes every second. At the upper end of the scale, there are flies capable of seeing 250 flashes per second. Do they perceive time differently? I don’t know. But certainly their view of the world happening around them is incredibly different. (Humans are somewhere between these two on the scale.) What this means is you can have two animals sitting beside one another, one seeing all these little details, hyper-sensitive to all these minute little changes, the world flying around them, and meanwhile the other is basically living in a completely different temporal niche, living in a slow-paced, kind of lazy world, completely oblivious to all of it.’
https://gizmodo.com/how-do-animals-perceive-time-1846206287

An endangered ferret was cloned from cells taken from an animal that died 30 years ago.
https://www.cnn.com/2021/02/19/us/elizabeth-ann-ferret-cloned-scli-intl-scn/index.html

Marriage satisfaction, and the odds of getting divorced, are partly genetic. “[The] CD38 gene (CD38), at the single nucleotide polymorphism (SNP) rs3796863, is associated with cognitions and behaviors related to pair bonding…”
https://www.nature.com/articles/s41598-021-82307-z

We may have just found a highly effective weight loss drug with minimal side effects. Over 68 weeks, people taking it lost an average of 15% of their body weight.
https://blogs.sciencemag.org/pipeline/archives/2021/02/15/glp-1-and-obesity

An Israeli company made the world’s first lab-grown steak.
https://www.prnewswire.com/il/news-releases/aleph-farms-and-the-technion-reveal-worlds-first-cultivated-ribeye-steak-301224800.html

The sizes of South Koreans’ brains grew and the shapes of their skulls changed thanks to improved nutrition after the long period of privation under Japanese domination and the Korean War.
http://doi.wiley.com/10.1002/ajpa.23464

Russia’s Sputnik-V vaccine for COVID-19 was viewed skeptically upon its debut, but recent studies show it is as effective as the vaccines invented later in the U.S. and Britain.
https://www.msn.com/en-us/news/world/putins-once-scorned-vaccine-now-favorite-in-pandemic-fight/ar-BB1drjfv

These charts show how effective the different COVID-19 vaccines are. These data also confirm that the South African strain of the virus is more resistant to them.
https://wordpress.cels.anl.gov/covid-vaccine-efficacy/

The U.S. recorded its 500,000th death from COVID-19. Fortunately, the death rate is dropping, and the 600,000 milestone isn’t expected to come until sometime in late May or early June.
https://apnews.com/article/us-deaths-nears-500k-coronavirus-acab3cc916330a3f068b7589350a18cd

An article from a year ago: “[There] appears to be nothing very special about this outbreak of the 2019-nCoV or Wuhan ­virus. It should actually be called the DvV, or Déjà vu Virus, because we have been through these hysterias before.”
https://nypost.com/2020/01/23/dont-buy-the-media-hype-over-the-new-china-virus/

J.P. Morgan’s analysts think the COVID-19 epidemic will be “effectively over” in the U.S. by April.
https://www.barrons.com/articles/the-pandemic-could-be-effectively-over-by-april-j-p-morgan-says-heres-why-51613163599

American life expectancy just dropped by a year thanks to the excess COVID-19 deaths. This hasn’t happened since WWII.
https://www.bbc.com/news/world-us-canada-56110005

The COVID-19 lockdowns are now 10 months old. If millions of people being shut in with their significant others was going to lead to more sex and a baby boom, we would have seen the results by now. Instead, there has been a baby bust.
https://www.bloomberg.com/news/articles/2021-02-01/u-s-baby-boom-forecast-turned-out-to-be-bust-despite-lockdown
https://www.france24.com/en/france/20210122-the-baby-boom-that-never-was-france-sees-sharp-decline-in-lockdown-babies

Review: “Cloud Atlas”

Plot: Cloud Atlas comprises six short films set in six different times and places. Each short film has a unique plot and characters, but the same actors play roles across all six, leading to many interesting and at times funny role reversals from the viewer’s perspective. The movie jumps between the six stories in a way that shows their thematic similarities. It’s a very ambitious attempt at storytelling through the film medium, but also an unsuccessful one. As a whole, Cloud Atlas is too confusing and practically collapses under its own weight.

Rather than attempt to summarize its Byzantine plot in more detail, I’ll link to a well-written synopsis you can read if you like before proceeding further:

“This film follows the stories of six people’s “souls” across time, and the stories are interweaved as they advance, showing how they all interact. It is about how the people’s lives are connected with and influence each other…”
https://www.imdb.com/title/tt1371111/plotsummary?ref_=ttpl_sa_2#synopsis

On the one hand, I’m glad that in today’s sad era of endless sequels, remakes and reboots, Hollywood is still willing to take occasional risks on highly creative, big-budget sci fi films like Cloud Atlas. On the other, none of that changes the fact that the movie is a hot mess.

For the purposes of this sci fi analysis, I’m only interested in the chapters of the movie set in the future. The first takes place in Seoul (renamed “Neo Seoul”) in 2144, and the second takes place on a primitive tropical island “hundreds” of years after that, following some kind of global cataclysm. Though the date of the later sequence is never stated in the film, the book on which the movie is based says it is 2321, and I’ll use that for this review.

Analysis:

Slavery will come back. In 2144, South Korea, and possibly some part of the countries surrounding it, is run by an evil government/company called “Unanimity.” Among its criminal practices is allowing the use of slave labor. The slaves, called “fabricants,” are parentless humans who are conceived in labs, gestated in artificial wombs, and euthanized after 12 years of labor. They seem to have no legal rights, can be killed for minor reasons, and are treated as inferiors by natural-born humans. Though they look externally identical to any other human, it’s hinted that the fabricants have been genetically altered to be obedient and hard workers, and perhaps to have physiological differences. Juvenile fabricants are never shown, which leads me to think they are gestated as mature adults. The 2144 plot centers around one fabricant who escapes from her master and joins a rebel group fighting to end slavery. 

The protagonist of the 2144 film segment is this female fabricant.

Slavery will not exist in 2144 because 1) the arc of history is clearly towards stronger human rights and 2) machines will be much better and cheaper workers than humans by then. In a profit-obsessed society like the one run by Unanimity, no business that employed humans, even those working for free as slaves, could survive against competitors that used robots. After all, it still costs money to feed, clothe, and house human slaves, and to give them medical care when necessary. And while the film implies that the human slaves partly exist to gratify the sexual needs of human clients, robots–specifically, androids–should be superior in that line of work, as well. 

For these same reasons, if intelligent machines have taken over the planet by 2144, it won’t make sense for them to enslave humans, or at least not for long. Intelligent machines would find it cheaper, safer, and better to build task-specific, “dumb” machines to do jobs for them than to employ humans. There could be a nightmare scenario where AIs win a mutually devastating war with humanity, and due to scarce resources and destroyed infrastructure, the use of human labor is the best option, but this arrangement would only last until the AIs could build worker robots.  

Human clones will exist. Though the fabricants are played by different actresses, the protagonist who escapes from her master later sees fabricants who look identical to her. This means the fabricants as a whole have limited genetic diversity and probably consist of several strains of clones.

“Zhong Zhong” and “Hua Hua” are genetically identical cloned monkeys.

Human clones will be created long before 2144. In 2018, Chinese scientists made two clones of one monkey. Given the close similarities between human and monkey genetics and chromosome structure, the same technique or a variant of it could be used to clone humans. The only thing that has stopped it from happening so far is bioethics concerns stemming from the technique’s high failure rate–77 out of 79 cloned monkey embryos that were implanted in surrogate mothers during the experiment were miscarried or died shortly after birth. More time and more experiments will surely refine the process. 

When will the success rate be “good enough” for us to make the first human clones? Sir John Gurdon won a Nobel Prize for his 1962 experiments cloning frogs. In 2012, he predicted that human cloning would probably begin in 50 years–which is 2062. Given the state of the science today, that looks reasonable. 

In 2144, cloning will be affordable and legal in at least one country that allows medical tourism, but only a tiny percentage of people will want to use it, and an insignificant share of the human race will consist of clones. Bereaved parents wanting to replace their dead children will probably be the industry’s main customers. It sounds creepy, but what if the clones actually make most of them happy?

Display screens will cover many types of surfaces. The bar/restaurant staffed by the fabricants is a drab room whose walls, ceilings, floors, and furniture are covered by thin display screens. At the flick of a switch, the screens can come alive and show colors, images, and moving pictures just like a traditional TV or computer monitor. An apartment is also shown later on that has a wraparound room display. 

I conservatively predict that wallpaper-like display screens with the same capabilities and performance as those depicted in the movie will be a mature, affordable technology by 2044, which is 100 years before the events shown in the film segment. In other words, it will be very old technology by then. The displays built into floors would have to be the thickest and most robust for obvious reasons, and will probably be the last ones introduced. This technology will allow people to have wall-sized TV screens in their houses, to place “lights” at any point and in any configuration in a given room, and to create immersive environments like cruder versions of the Star Trek “holodeck.”

Through a “transparent” wall, the partly flooded city of Seoul is visible.

Walls will be able to turn transparent. In the aforementioned apartment, one of the walls can turn into a “fake window” at the push of a button. The display screen that covers it can display live footage from outside the building, presumably provided to it by exterior cameras. This technology should also be affordable and highly convincing in effect by 2044, if not earlier. Note that the Wachowskis also included this technology in their film Jupiter Ascending, but it was used to make floors transparent instead of walls. 

There will be 3D printed meals. The 2144 segment begins in a bar/restaurant staffed by fabricants. A sequence shows a typical work day for them, and we see how a 3D “food printer” creates realistic-looking dishes in seconds. The printer consists of downward-pointing nozzles that spray colored substances onto bowls and dishes, where they congeal into solid matter. Its principle of operation is like a color printer’s, but it can stack layers of edible “ink” to rapidly build things up.

A 3D food printer somehow squirts out these elaborate-looking meals in under ten seconds.

3D food printers already exist, and they can surely be improved, but they will never be able to additively manufacture serving-size portions of food in seconds, unless you’re making a homogenized, simple dish like soft-serve ice cream or steak tartare. To manufacture a complex piece of food like those shown in the film sequence, much more time would be needed for the squirted biomatter to settle and set properly to achieve the desired texture and appearance, and for heat, lasers or chemicals to cook it. For these reasons, I don’t think the depiction of the futuristic 3D food printer will prove accurate.

However, the next best things will be widely available by then: lab-grown foods and fast robot chefs. By 2144, it should be cheaper to synthesize almost any type of food than to grow or raise it the natural way, and I predict humans will get most of their calories from industrial-scale labs. This includes meat, which we’ll grow using stem cells. Common processed foodstuffs like flour, corn starch, and sugar could also be directly synthesized from inorganic chemicals and electricity, saving us from having to grow and harvest the plants that naturally make them.

A 3D food printer today.

The benefits of the “manufactured food” paradigm will be enormous. First, it would be much more humane, since we would no longer need to kill billions of animals per year for food.

Second, it would be better for the environment, since we could make most of our food indoors, in enclosed facilities. The environmental damage caused by the application of pesticides and fertilizers would drop because we’d have fewer open-air farms. And since the “food factories” would be more efficient, we could produce the same number of calories on a smaller land footprint, which would allow us to let old farms revert to nature.

Third, it would be better for the economy. Manufactured food would be cheaper since it would cut out costly intermediate steps like planting seeds, harvesting plants, separating their edible parts from the rest, and butchering animals to isolate their different cuts of meat. No time, money or energy would be spent making excess matter like corn husks, banana peels, chicken feathers, animal brains, or bones–the synthesis process would be waste-free, and would turn inorganic matter and small clumps of stem cells directly into 100% edible pieces of food. Food factory output would also be largely unaffected by uncontrollable natural events like droughts, hailstorms, and locust swarms, making food supply levels much more predictable and prices more stable.

Fourth, food factories would be able to produce cleaner, higher-quality foods at lower cost. The energy and material costs of making a premium ribeye steak are probably no higher than the costs of making a tough, rubbery round steak. With that in mind, the meat factories could ONLY EVER make premium ribeye steaks, which will be great since the price will drop and everyone, not just richer people, will be able to eat the highest quality cuts. (If you want to do side research on this, Google the awesome term “carcass balancing” and knock yourself out.)

By 2144, machines will be able to do everything humans can do, only better, faster and cheaper, which means robot chefs will be ubiquitous and highly skilled. They would work very efficiently and consistently, meaning restaurant wait times would be short, and the meals would always be prepared perfectly. Thanks to all these factors, the 2144 equivalent of a low-income person could walk into an ordinary restaurant and order a cheap meal consisting of what would be very expensive ingredients today (e.g. – Kobe beef steak, caviar, lobster). Those ingredients would be identical to their natural counterparts, and would be only a few hours fresh from the factory thanks to the highly efficient automated logistics systems that will also exist by then. A robot chef with several pairs of hands and superhuman reflexes would combine and cook the ingredients with astounding speed and precision. Not a single movement would be wasted. Within 15 minutes of placing his order, the customer’s food would be in front of him.

Today, this level of cuisine and service is known only to richer people, but in the future, it will be common thanks to technology. This falls short of Cloud Atlas’s depiction of 3D food printers making meals in seconds, but there are worse fates…

Street scene from 2144.

There will be flying cars. CGI camera shots of Neo-Seoul show its streets filled with flying cars, flying trucks and flying motorcycles. Most often, they hover one or two feet above the ground, but they’re also capable of flying high in the air. The vehicles levitate thanks to circular “pads” on their undersides, which glow blue and make buzzing sounds. The Wachowskis also featured these “hoverpads” on the flying vehicles in their Matrix films. In no film was their principle of operation explained. 

This shot clearly shows the hoverpads.

The only way the hoverpads could make cars “fly” is if they were made of superconductors and the roads were made of magnets. 2144 is a long way off, so it’s possible that we could discover room-temperature superconductors that were also cheap to manufacture by then. No law of physics prohibits it. Likewise, we could discover new methods of cheaply creating powerful magnets and magnetic fields so we can embed them in the millions of miles of global roadways. Vehicles with superconducting undersides could “hover” over these roads, but not truly “fly,” since the magnetic fields they’d depend on get sharply weaker with vertical distance: a magnet’s field strength falls off with roughly the cube of the distance from it, a decline even steeper than the inverse-square falloff of Coulomb’s Law.
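
To see how punishing that falloff is, here’s a quick Python sketch that treats the road magnet as an idealized point dipole (a simplification; real track geometries behave better at very close range, and the hover heights are illustrative):

    # Far-field strength of an idealized magnetic dipole scales as 1/r^3,
    # so lift fades rapidly with hover height. Purely illustrative numbers.
    def relative_field(height_m, reference_m=0.3):
        # Field at height_m, relative to the field at a 0.3 m hover height.
        return (reference_m / height_m) ** 3

    for h in (0.3, 0.6, 1.5, 3.0):
        print(f"{h} m: {relative_field(h):.3f}x")
    # 0.3 m: 1.000x | 0.6 m: 0.125x | 1.5 m: 0.008x | 3.0 m: 0.001x
    # Doubling the height costs ~88% of the field: hovering, yes; flying, no.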

Ironically, the inability to go high in the air would be a selling point for hovercars since the prospect of riding in one would be less scary to land-loving humans (in my analysis of true flying cars, I said this was one reason why that technology was infeasible). Hovercars would also be quieter, more energy efficient, and smoother-riding than normal cars due to their lack of contact and friction with the road. Their big limitation would be an inability to drive off-road or anywhere else where there weren’t magnets in the ground. However, that might be a bearable inconvenience since the global road network will be denser in 2144 than it is now, and we might also have had enough time by then to install the magnets in all but the remotest and least-trafficked roads. You could rent wheeled vehicles when needed as easily as you summon an Uber cab today (the 2144 film sequence takes place in a city, so for all we know, wheeled cars are still widely in use elsewhere).

In conclusion, if we make a breakthrough in superconductor technology, it would enable the creation of hovercars, which might very well find strong consumer demand thanks to the real advantages they would have over normal cars. True “flying cars” will not be in use by 2144, but hovercars could be, especially in heavily-trafficked places like cities and the highways linking them together, where it will make the most economic sense to install magnets in the roads. This means Cloud Atlas’s depiction of transit technology was half wrong, and half “maybe.”

There will be at least one off-world human colony. During the 2144 segment, a character mentions that there are four “off-world colonies.” In the 2321 segment, those colonies are spoken of again, and people from one of them come to Earth in spaceships to rescue several characters from the ailing planet. That space colony’s location is never named, but judging by the final scene, in which the characters sit outdoors amongst alien-looking plants and one of them points to a blue dot in the night sky and says it is Earth, the colony is on a terraformed celestial body in our Solar System. The apparently normal gravity and the two moons visible in the sky suggest it is Mars, though Mars’ moons would actually look smaller than depicted.

In the last chronological scene in the film, the characters are on an alien moon or planet.

“Colony” implies something more substantial than “base” or “outpost.” As I did in my Blade Runner review, I’m going to assume it refers to settlements that:

  1. Have non-token numbers of permanent human residents
  2. Have significant numbers of human residents who are not “elite” in terms of wealth or technical skills
  3. Are self-sustaining, regardless of whether the level of sustenance affords the same quality of life as on Earth.

I think there will certainly be bases on the Moon and Mars by the end of this century, and that they will be continuously manned. Good analogs for these bases are the International Space Station and the various research stations in Antarctica. Making conservative assumptions about steady improvements in technology and continued human interest in exploring space, it’s possible there will be at least one off-world colony by 2144, and likely that will be the case by 2321.

However, those projections come with a huge proviso, which I already stated in my Blade Runner review: “I think the human race will probably be overtaken by intelligent machines before we are able to build true off-world colonies that have large human populations. Once we are surpassed here on Earth, sending humans into space will seem all the more wasteful since there will be machines that can do all the things humans can, but at lower cost. We might never get off of Earth in large numbers, or if we do, it will be with the permission of Our Robot Overlords to tag along with them since some of them were heading to Mars anyway.” The rise of A.I. will be a paradigm shift in the history of our civilization, species, and planet, and its scrambling effect on long-term predictions like the prospects of human settlement of space must be acknowledged.

Finally, while off-world colonies might exist as early as 2144, none of the moons or planets on which they are established will have breathable atmospheres or comfortable outdoor temperatures for many centuries, if ever. The final scene depicted Mars having an Earthlike environment, where humans could stroll around the surface without breathing equipment or heavy clothing to protect against the cold. Two of the characters from the 2321 film sequence were shown, and both were done up with special effects makeup to look older, suggesting the final scene was set in the mid-2300s. In spite of the distant date, it was still much too early for the planet to have been terraformed to such an extent. In fact, melting all of Mars’ ice and releasing all the carbon dioxide sequestered in its rocks would only thicken its atmosphere to 7% of Earth’s surface air pressure, which wouldn’t be nearly good enough for humans to breathe, or to raise the planet’s temperatures to survivable levels. The effort would also be folly since the gases we released at such great expense would inevitably dissipate into space.

And that’s a real bummer since Mars is the most potentially habitable celestial body we know of aside from Earth! Venus has a crushingly thick, toxic atmosphere, and even if we somehow thinned it out and made it breathable, the planet would be unsuited for humans given its high temperatures and weirdly long days and nights (one Venusian day is 117 Earth days long). Mercury is much too close to the Sun and too hot, our Moon lacks the gravity to hold down an atmosphere and is covered in dust that inflames the human body, the gas giant planets are totally hopeless, and even their “best” moons have fundamental problems.

By the 2300s and even as early as 2144, there could be sizeable, self-sufficient colonies of humans off Earth, but everyone will be living inside sealed structures. Life inside those habitats could be nice (all the interior surfaces could be covered in thin display screens that showed calming footage of forests and beaches), but no one would be strolling on the surface in a T-shirt. And it might stay that way forever, regardless of how advanced technology became and how much money we spent building up those colonies.

There will be…some kinds of super guns. In the two film segments set in the future, characters use handheld guns that are more powerful than today’s firearms, but also operate on mysterious principles. It’s unclear whether the guns shoot physical projectiles or intangible ones made of laser beams or globs of plasma, but something exotic is at work since the guns don’t eject bullet casings or make the familiar “Pop!” sounds. Whatever they shoot out is very damaging and easily passes through human bodies and walls. In one scene, a person goes flying several feet backward after being shot at close range by one of the pistols.

A man flying backwards after being shot. Only a huge bullet could do this, and it would be impractical to shoot it out of a little handgun.

The super guns can’t be firing plasma because plasma weapons are infeasible, and they also can’t be firing laser beams because they’d get so hot with waste heat that the characters would be dropping them in pain and clutching their burned hands after one or two shots. To fire a significant number of shots, a man-portable laser weapon would need to be large and to have some bulky means of radiating its waste heat, meaning it would have to take a form similar to the Ghostbusters backpack weapon. I don’t see how any level of technology can solve the problems of energy storage and heat disposal without the weapon being about that big. The film characters’ weapons were sized like pistols and submachine guns, so they couldn’t be laser weapons. If you want to understand how I arrived at these conclusions, read my Terminator review.

By deduction, that means the super guns were shooting out little pieces of metal, otherwise known as bullets! Yes, I do think personal firearms will still be in use in 2144, and maybe even in 2321. They might look a little different from those we have now, but they’ll operate in the same way and will still use kinetic energy to damage people and objects. I don’t think they’ll make “zoop” sounds like they did in the movie, and I don’t think they’ll be much harder-hitting than today’s guns. To the last point, it would be inefficient and wasteful to use guns so powerful that their bullets send people flying through the air. And thanks to Newton’s Third Law of Motion, it’s also impractical to fire such powerful bullets from handguns or even submachine guns: the recoil would break your wrist, or at least make your own gun so punishing to fire that you couldn’t use it in combat.
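
A quick conservation-of-momentum calculation (with illustrative bullet weights and velocities) shows how far outside reality the movie’s “knockdown” guns are:

    # Momentum sketch: why guns that send people flying are impractical.
    # To knock an 80 kg man backward at even 2 m/s, the projectile needs
    # 160 kg*m/s of momentum, and Newton's Third Law hands the shooter
    # the same 160 kg*m/s of recoil. Illustrative comparisons:
    body_momentum = 80 * 2        # 160 kg*m/s to throw the victim back
    pistol_round = 0.008 * 360    # 8 g bullet at 360 m/s (9mm-class)
    rifle_round = 0.004 * 900     # 4 g bullet at 900 m/s (5.56mm-class)

    print(body_momentum)             # 160
    print(round(pistol_round, 2))    # ~2.9
    print(round(rifle_round, 2))     # ~3.6
    # The movie pistol would need ~45x a rifle's momentum, delivered to
    # the shooter's wrist with every shot.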

The film should have adopted a more conservative view of future gun technology. Had the weapons looked cosmetically different from today’s guns and not ejected shells after each shot–indicating they used caseless bullets, a technology we’re still working on–then the depiction would have been plausible and probably accurate.

There will be fusion reactors. In the 2321 sequence, an advanced group of humans travels the oceans in a futuristic ship that looks the size of a large yacht. The ship visits an island full of primitive humans, and one of the crew mentions to them that the ship has fusion engines. 

I’m very hesitant to make predictions about hot fusion power: so many have failed before me, most of the experts who today claim that usable fusion reactors are on track to be created soon have self-interested reasons for making those claims (usually they belong to an organization that wants money to pursue its idea), and I certainly lack the specialized education to muster any special insights on the topic. However, I can say for sure that the basic problem is that nuclear fusion reactions release large numbers of neutrons, which beam outward in every direction from the source of the reaction. When those neutrons hit other things, they cause a lot of damage at the molecular level. This means the interior surfaces of fusion reactors rapidly deteriorate, making it necessary to periodically shut down the reactors to remove and replace the surface material. The need for these shutdowns and repairs undermines fusion’s viability as a reliable and affordable power source. Of course, that could change if we invented a new material that was resistant to neutron damage and cheap (enough) to make, but no one has, nor is there any guarantee that a material with such properties can exist.

An illustration of ITER, which is under construction. A man in an orange uniform has been drawn near the center of the image to convey the machine’s scale.

It would be comforting if I could say that these problems will be worked out by a specific year in the future, but I can’t. The “International Thermonuclear Experimental Reactor” (ITER) project is the world’s flagship attempt at making a hot fusion reactor, and it is massively over budget, years behind schedule, and dogged by critics who say it just won’t work, for many technical reasons, including the possibility that the hollow, donut-shaped “tokamak” reaction chamber is a fundamentally flawed design (there are alternative fusion reactor concepts with very different internal layouts). If all goes according to plan, ITER will be turned on in December 2025, but it will take another ten years to reach full operation. Lessons learned during its lifetime will be used to design a second, more refined fusion reactor called the “Demonstration Power Station” (DEMO), which won’t be running until the middle of the century. And only AFTER the kinks are worked out of DEMO do scientists envision the technology being good enough to build practical, commercial nuclear fusion reactors that could be connected to the power grid. So even under favorable conditions, we might not have usable fusion reactors until close to 2100, and due to many engineering unknowns, it’s also still possible that ITER will encounter so many problems in the 2030s that we will be forced to abandon fusion power as infeasible.

Here’s an important point: Attempts to build nuclear fusion reactors started in the 1950s. If you had told the men working on them then that the technology would take at least 100 more years and tens or hundreds of billions of additional dollars to reach maturity, they would have been shocked. The quest for fusion reactors has been full of staggering disappointments, false starts, and long delays that no one expected, and it could continue that way. With that in mind, I can only rate the film’s depiction of practical fusion reactors existing by 2321 as “maybe accurate, maybe not.”

There will be cybernetically augmented/enhanced humans. In the 2144 segment, we see people who have cybernetic implants in their bodies that give them abilities that couldn’t be had through biology. The first is a surgeon who has an elaborate, mechanical eye implant that lets him zoom in on his patients during operations, and the other is a man who has a much less conspicuous implant in his left cheek that seems to be a cell phone. Presumably, the device is connected to his inner ear or cochlear nerve. 

The technology necessary to make implanted cybernetics with these kinds of capabilities will be affordable and mature by 2144. However, few people will want implants that are externally visible and mechanical- or metallic-looking. Humans have an innate sense of beauty that is offended by anything that makes them look asymmetrical or unnatural. For that reason, in 2144, people will overwhelmingly prefer completely internal implants that don’t bulge from their bodies, and external implants and prostheses that look and feel identical to natural body parts. That said, there will surely be a minority of people who will pay for things like robot eyes with swiveling lenses, shiny metal Terminator limbs, and other cybernetics that make them look menacing or strange, just as there are people today who indulge in extreme body modifications.

People who like extreme body modifications will have even more avenues of self-expression in the future thanks to cybernetic implants and other technologies.

It’s important to point out that externally worn personal technologies will also be very advanced in 2144, will grant their users “superhuman” abilities just as simpler devices do for people today, and might be so good that most people will be fine using them instead of getting implants. Returning to the movie character with the mechanical eye, I have to wonder what advantages he has over someone with two natural eyes wearing computerized glasses that provide augmented vision. Surely, with 2144 levels of technology, a hyper-advanced version of Google Glass could be made that would let wearers do things like zoom in on small objects, and much more. The glasses could also be removed when they weren’t needed, whereas the surgeon could never “take off” his ugly-looking robot eye. Moreover, if the glasses were rendered obsolete by a new model in 2145, the owner could just throw away the older pair and buy a newer pair, whereas upgrading would be much harder for the eye implant guy for obvious reasons. 

Likewise, if someone wanted to upgrade his strength or speed, he could put on a powered exoskeleton, which will be a mature technology by 2144. It would be less obtrusive and would come with fewer complications than having limbs chopped off and replaced with robot parts. For this reason, I also think sci-fi depictions of future people having metal arms and legs that let them fight better are inaccurate. Only a tiny minority will be drawn to that. In any case, the ability to do physical labor or to win fights will be far less relevant in the future because robots will do the drudge work, and surveillance cameras and other forensic technologies will make it much harder to get away with violent crimes.

While wearable devices might be able to enhance strength and the senses as well as implanted ones can, they will not be nearly as useful for augmenting the brain and its abilities. We already have crude brain-computer interface (BCI) devices, worn on the head, that can read some of the wearer’s thoughts by monitoring their brain activity. The devices will improve, and might in fact become major consumer products in the 2030s, but they’re fundamentally limited by their inability to see activity happening deep in the brain.

A modern brain-computer interface, worn over the head. Much more advanced versions of this will exist in 2144, but they will still have limits.

To truly merge human and machine intelligence and to amplify the human brain’s performance to superhuman levels, we’ll need to put computer implants around and in the brain. This means having an intricate network of sensors and electrodes inside the skull and woven through the tissue of the brain itself, where it can monitor and manipulate the organ’s electrical activity at the microscopic level. Brain implants like these would make people vastly smarter, would give them “telepathic” abilities to send and receive thoughts and emotions and “telekinetic” abilities to control machines, and would let them control and change their minds and personalities in ways we can’t imagine. Along with artificial intelligence, the invention of a technology that lets humans “reprogram” their minds and overcome the arbitrary limits set by their genetics and early childhood environments would radically alter civilization and our everyday experience. It would be much more impactful than a technology that merely enhanced your senses or body.

By 2144, augmentative brain implants will exist. Since they’ll be internal, people with them won’t look different from people today. Artificial organs that are at least as good as their natural equivalents will also exist, and will allow people to radically extend their lifespans by replacing their “parts” in piecemeal fashion as they wear out. Again, these will by definition be externally undetectable. The result would be a neat inverse of the typical sci-fi cyborg–the person wouldn’t have any visible machine parts like glowing eyes, shiny metal arms, or tubes hanging off their body. They would look like normal, organic humans, but the technology inside of them would push them well beyond natural human limits, to the point of being impossibly smart, telepathic, mentally plastic, and immortal.

Languages will have significantly changed. In the 2321 film sequence, the aboriginal humans speak a strange dialect of English that is very hard to understand, while the group of advanced humans speak something almost identical to today’s English. Both depictions will prove accurate!

Skimming through Gulliver’s Travels shows how much the English language has changed over the last 300 years, and we should expect it to keep changing, so that in another 300 years it may sound as strange as the island dialect in the movie. The same will of course be true of other languages.

At the same time, that doesn’t mean modern versions of languages will be lost to history, or that their speakers won’t be able to talk with speakers of the 2321 dialects. Intelligent machines, and perhaps other kinds of intelligent life forms we couldn’t even imagine today, will dominate the planet in 2321, and they will also know all human languages, including archaic dialects like the English of 2021 and dead languages like Ancient Greek, allowing them to communicate with however many of us are left.

Humans will also easily overcome linguistic barriers thanks to vastly improved language translation machines. The brain implants I mentioned earlier could also let people share pure thoughts and emotions, obviating the need to resort to language for communication. Whatever the case, technology will let people communicate regardless of what their mother tongues were, so a person who only knew 2021 English could easily converse with one who only knew 2321 English.

The knowledge that this state of affairs is coming should assuage whatever fears anyone has about English (or any other language) becoming “bastardized,” “degenerating,” or going extinct. So long as dictionaries and records of how people spoke in this era survive long enough to be uploaded into the memory banks of the first A.I., our idiosyncratic take on the English language will endure forever and be forever reproducible.

Finally, on a side note, the intelligent machines of 2321 will probably communicate amongst themselves using languages of their own invention. Instead of having one language for everything, I suspect they’ll have a few languages, each optimally suited for a different purpose (for example, there could be one alphabet and syntax structure used for mathematics, another for prose and poetry, and others for expressing other modes of thought), and that they will all speak them fluently. As intricate and expressive as today’s human languages are, they contain many inefficiencies and possibilities for improvement, and it’s inevitable that machines will apply information theory and linguistics to make something better.
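
As a toy illustration of that inefficiency, the Python snippet below measures the single-character entropy of a short sample of English against the theoretical ceiling for its alphabet. The sample string is arbitrary:

# Toy measure of English's redundancy: observed bits per character vs. the
# ceiling if every observed symbol were equally likely.
import math
from collections import Counter

text = ("as intricate and expressive as human languages are "
        "they contain many inefficiencies and possibilities for improvement")

counts = Counter(text)
total = len(text)
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
ceiling = math.log2(len(counts))

print(f"Observed: {entropy:.2f} bits/char vs. ceiling {ceiling:.2f} bits/char")
# English falls well under its ceiling even at this crude single-character
# level; Shannon put its true entropy near 1 bit/char once longer-range
# structure is counted. That gap is what an engineered language could close.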

Sea levels will have noticeably risen. In the 2144 segment, there’s a scene where two characters look out the “digital window” of a unit in a high-rise apartment building and see a partly flooded cityscape. One of the characters says that the structures that are partly or fully underwater were part of Seoul, South Korea, and that the larger, newer buildings on dry land are part of “Neo-Seoul.” In spite of the distressed condition of such a large area, the metropolis overall is thriving and thrums with people, vehicle traffic, and other activity. I think this is an accurate depiction of how global warming will impact the world by 2144.

Let me be clear about my beliefs: Global warming is real, human industrial activity is causing part of it, sea levels are rising because of it, it will be bad for the environment and the human race overall, and it’s worth the money to take some action against it now. However, the media and most famous people who have spoken on the matter have grossly blown the problem out of proportion by only focusing on its worst-case outcomes, which has tragically misled many ordinary people into assuming that global warming will destroy civilization or even render the Earth uninhabitable unless we forsake all the comforts of life now. The most credible scientific estimates attach extremely low likelihoods to those scenarios. The likeliest outcome, and the one I believe will come to pass, is that the rate of increase in global temperatures will start significantly slowing in the second half of this century, leading to a stabilization and even a decline of global temperatures in the 22nd century.

The higher temperatures will raise sea levels by melting ice in the polar regions and by causing seawater to slightly expand in volume (as water warms, its density decreases), but the waterline in most coastal areas will only be 1/2 to 1 meter higher in 2100 than it was in 2000. That will be barely noticeable across the lifetimes of most people. Sea levels will have risen even more by 2144, inundating some low-lying areas of coastal cities, but people will adapt as they did in the film–by abandoning the places that became too flood-prone and moving to higher ground. Depending on the local topography, this could entail simply moving a few blocks away to a new apartment complex. Except maybe in the poorest cities, the empty buildings would be demolished as people left, so there wouldn’t be any old, ghostly structures jutting out of the water as there were in the future Seoul.
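
For a sense of scale, here’s a minimal sketch of the thermal-expansion piece of that estimate. The expansion coefficient is roughly right for warm surface seawater; the warming figure and layer depth are my own illustrative assumptions:

# Linear approximation of sea level rise from thermal expansion alone:
# dh = beta * dT * depth of the warmed layer.
BETA = 2.0e-4        # 1/K, thermal expansion coefficient of warm surface seawater
WARMING_K = 1.5      # assumed average warming of the upper ocean by 2100
LAYER_DEPTH_M = 700  # assumed depth of the ocean layer that warms

expansion_m = BETA * WARMING_K * LAYER_DEPTH_M
print(f"Sea level rise from expansion alone: ~{expansion_m:.2f} m")
# ~0.2 m; melting land ice supplies most of the rest of the 1/2-to-1-meter range.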

And instead of the ocean suddenly inundating low-lying swaths of town, forcing their abandonment all at once in the middle of the night, those areas would be depopulated over the course of decades, with individual buildings being demolished piecemeal once flood insurance costs hit a tipping point, or once one particularly bad flood caused so much damage that the structure wasn’t worth repairing. Again, the broader changes to the metro area would happen so gradually that few would notice.

If we could jump ahead to 2144, we’d be able to see and feel the effects of global warming. Some parts of Seoul (and other cities) that were formerly on the waterfront would be underwater. However, as was the case in the film, we’d also see that civilization had not only survived but thrived, and that the expansion of technology, science, and commerce had not halted due to the costs imposed by global warming. Global warming would not have come close to destroying civilization, and people would realize that the worst was behind them.

Of course, that doesn’t mean the threat will have been removed forever. What I’ll call a “second wave” of global warming is possible even farther in the future than 2144. You see, even if we completely decarbonize the economy and stop releasing all greenhouse gases into the atmosphere, we humans will still be producing heat. Solar panels, wind turbines, hydroelectric dam turbines, nuclear fission plants, and even clean nuclear FUSION plants that will “use water as fuel” all emit waste heat as an inevitable byproduct of generating electricity. Likewise, all of our machines that use that electricity to do useful work, like a factory machine that manufactures reusable shopping bags or an electric car that drives people around town, also release waste heat. This is thermodynamically unavoidable.

This line chart depicts the consequences of a steady 2.3% increase in global energy consumption on the Earth’s future surface temperature.

The Earth naturally radiates heat into space, and so far, it has been able to shed all the heat produced by our industrial activity as fast as we can emit it. However, if long-term global economic growth rates continue, in about 250 years we’ll pass a threshold: our machines will be releasing so much waste heat that the Earth’s surface will start getting hotter. The second wave of global warming–driven by an entirely different mechanism than the first wave we’re now in–will start, and if left unaddressed, it will render the Earth uninhabitable very roughly 400 years from now. Based on all these estimates, 2144 will probably be an interregnum between the two waves of global warming.
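
Here’s a minimal sketch of that arithmetic, patterned on the “Galactic-Scale Energy” analysis linked in item 5 below. Every constant is an assumption, and the model is crude: it ignores greenhouse feedbacks and assumes surface temperature scales with total absorbed flux to the 1/4 power.

# Waste heat from exponentially growing energy use, added to absorbed sunlight.
import math

P0_TW = 18.0         # assumed current human power use, ~18 TW
GROWTH = 0.023       # 2.3% annual growth in energy consumption
SOLAR_FLUX = 238.0   # W/m^2 of absorbed sunlight, averaged over the Earth
EARTH_AREA = 5.1e14  # m^2, surface area of the Earth
T0 = 288.0           # K, today's mean surface temperature

def surface_temp(years_from_now):
    """Equilibrium surface temperature once waste heat is added to sunlight."""
    human_flux = P0_TW * 1e12 * math.exp(GROWTH * years_from_now) / EARTH_AREA
    return T0 * ((SOLAR_FLUX + human_flux) / SOLAR_FLUX) ** 0.25

for years in (100, 200, 250, 300, 400):
    print(f"+{years} years: {surface_temp(years) - T0:5.1f} K warmer")
# Negligible for a century, a few degrees around year 250, and ~73 K warmer
# by year 400 with boiling crossed shortly after -- consistent with the
# 250- and 400-year figures above.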

Links:

  1. In 2018, the first clones were made of an adult monkey.
    https://www.cell.com/cell/fulltext/S0092-8674(18)30057-6
  2. The guy who won a Nobel Prize for cloning frogs thinks human cloning will probably start by 2062.
    https://www.businessinsider.com/nobel-prize-winning-scientist-human-cloning-will-be-possible-in-50-years-2012-12
  3. Even if we melted all the ice on Mars and released all the CO2 trapped in its rocks, the resulting atmosphere would only be 7% as thick as Earth’s. That’s not good enough for humans to breathe, or to raise surface temperatures above freezing.
    https://www.nasa.gov/press-release/goddard/2018/mars-terraforming
  4. The Intergovernmental Panel on Climate Change (IPCC) thinks global warming “doomsday” scenarios are very unlikely. The rate of global warming will significantly drop in the second half of this century, and global temperatures will probably stabilize in the next century.
    https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_Chapter12_FINAL.pdf
  5. Assuming a 2.3% annual growth rate in global energy usage, the waste heat will make Earth start warming in 250 years, and it will be uninhabitable in about 400.
    https://dothemath.ucsd.edu/2011/07/galactic-scale-energy/

Interesting articles, January 2021

Donald Trump completed one term of office as U.S. President this month, and the position was transferred to Joe Biden. Again, this blog is NOT about partisan politics, and as a general rule I don’t mention it, but this is a rare instance where it’s worth listing the noteworthy failed predictions about the Trump presidency:

  1. “I think he will be in jail within a year.”
    –Malcolm Gladwell, November 6, 2016
    https://www.cbc.ca/news/world/malcolm-gladwell-us-election-the-national-trump-clinton-1.3838449
  2. “Trump’s presidency is effectively over. Would be amazed if he survives till end of the year. More likely resigns by fall, if not sooner.”
    –Tony Schwartz (ghostwriter of Trump: The Art of the Deal turned enemy of Trump), August 16, 2017
    https://twitter.com/tonyschwartz/status/897900928023412736
  3. “I don’t think he’s going to make it till the end of the year. I think he can’t take the ridicule. I think he’ll resign.”
    –Alec Baldwin, August 7, 2017
    http://www.vulture.com/2017/08/alec-baldwin-trump-impersonation-snl.html
  4. “He’ll be lucky if all we do is impeach him. I predict in 6 months Trump will be holed up in the Ecuadorian embassy.”
    –John Aravosis, February 14, 2017
    https://twitter.com/aravosis/status/831740494610837509
  5. “Will Trump complete his four-year term? The odds at this point are that he won’t. What are the options for exactly how his term might end early? There are five Oval Office exit paths: impeachment, use of the 25th Amendment, death by natural causes, assassination and resignation.”
    –Mike Purdy, May 19, 2017
    http://thehill.com/blogs/pundits-blog/presidential-campaign/334238-trump-wont-make-it-four-years-heres-how-he-might
  6. “This tight burst of historic f**k-ups on the part of Mr. Trump in just his first 110 days in office has forced me to change my predicted date of his voluntary resignation from August 18th to July 15th.”
    –Allan Ishac, May 17, 2017
    https://medium.com/@allanishac/my-prediction-that-trump-will-resign-by-august-18th-has-been-revised-to-july-15th-bdcd75e2276
  7. “He will not finish his first term…I would be very surprised if he made it to 18 months…my best guess is within six months.”
    –Cenk Uygur, August 16, 2017
    https://youtu.be/ScgVbT_fry0
  8. “I’ve been saying this from day one of his presidency but apparently most people still don’t get it – there is no way Donald Trump finishes his first term. Mark my words: He is out of office by 2019. He is not bright enough to be able to get himself out of the trouble he is in.”
    –Cenk Uygur, December 22, 2018
    https://twitter.com/cenkuygur/status/1076600316286590976
  9. “I do not think the President will survive this term…I think the amount of heat that is going to come down on Mr. Trump in connection with his personal attorney of ten years [Michael Cohen] turning on him and rolling on him will be insurmountable, and I think his only exit, in an effort to save whatever face he may have left at that time, will be to resign the office.”
    –Michael Avenatti, April 23, 2018
    https://www.alternet.org/news-amp-politics/stormy-daniels-lawyer-explains-why-he-thinks-trump-will-resign-his-term
  10. “I think it’s just going to get so tight and it’s going to close in and then everybody is going to be indicted around this president, and then he is going to realize he is probably next on the list. And I think he is going to come up with an excuse like ‘somebody is trying to kill Barron, and so I’m going to resign.”
    –Congresswoman Frederica Wilson (Florida), November 3, 2017
    https://pjmedia.com/news-and-politics/rep-wilson-predicts-trump-will-pretend-somebody-trying-kill-barron-resign/
  11. “In any case, it seems likely that Donald Trump will be leaving the Presidency at some point, likely between the 31 days of William Henry Harrison in 1841 (dying of pneumonia) and the 199 days of James A. Garfield in 1881 (dying of an assassin’s bullet after 79 days of terrible suffering and medical malpractice). At the most, it certainly seems likely, even if dragged out, that Trump will not last 16 months and 5 days, as occurred with Zachary Taylor in 1850 (dying of a digestive ailment). The Pence Presidency seems inevitable.”
    –Presidential historian Ronald L. Feinman, February 18, 2017
    https://www.rawstory.com/2017/02/presidential-historian-predicts-trumps-term-will-last-less-than-200-days-the-second-shortest-ever/
  12. “For a while now, I have thought the Trump presidency would end suddenly…For weeks now I have been anticipating that Trump’s last day in office will dawn like all the others, and then around dinnertime it will suddenly break that he is about to resign…I don’t know if that’s next Tuesday or next year, but I think whenever it is, that is what it will feel like.”
    –Keith Olbermann, August 23, 2017
    http://www.newsweek.com/trump-resign-russia-olbermann-president-654209
  13. “By the time we get to 2020, Donald Trump may not even be President. In fact, he may not even be a free person.”
    –Elizabeth Warren, February 11, 2019
    https://www.cnn.com/2019/02/10/politics/elizabeth-warren-donald-trump/index.html
  14. “He’s gonna drop out of the race because it’s gonna become very clear. Okay, it’ll be March of 2020. He’ll likely drop out by March of 2020. It’s gonna become very clear that it’s impossible for him to win.”
    –Anthony Scaramucci, August 16, 2019
    https://www.vanityfair.com/news/2019/08/anthony-scaramucci-interview-trump
  15. “He can preemptively pardon individuals, and the vast majority of legal scholars have indicated that he cannot pardon himself…I suspect at some point in time he will step down and allow the vice president to pardon him.”
    –New York Attorney General Letitia James, December 8, 2020
    https://thehill.com/homenews/administration/529339-new-york-attorney-general-predicts-trump-will-step-down-allow-pence

There’s no justification for U.S. troops to be in Syria anymore.
https://www.foreignaffairs.com/articles/turkey/2021-01-25/us-strategy-syria-has-failed

China’s stealth fighter is ten years old.
https://www.thedrive.com/the-war-zone/38655/ten-years-ago-today-chinas-j-20-stealth-fighter-first-flew-a-two-seater-could-be-next

China didn’t invade Taiwan in 2020, as Deng Yuwen predicted.
https://www.scmp.com/comment/insight-opinion/article/2126541/china-planning-take-taiwan-force-2020

U.S. power didn’t collapse in 2020, as Johan Galtung predicted.
https://www.vice.com/en/article/d7ykxx/us-power-will-decline-under-trump-says-futurist-who-predicted-soviet-collapse

Bonus: The U.S. did not have a Soviet-style collapse in 2010 as Igor Panarin predicted.
https://www.theatlantic.com/politics/archive/2010/06/map-of-the-day-ex-kgb-analyst-predicts-balkanization-of-us/58945/

Ballistic computers have shrunk to the sizes of rifle scopes.
‘The [L3Harris NGSW-FC scope] features a magnified direct-view optic with a digital reticle, a laser rangefinder, a ballistic computer, and environmental sensors capable of measuring air pressure and temperature.’
https://www.janes.com/defence-news/news-detail/l3harris-unveils-next-generation-squad-weapon-fire-control-system

The bricks of explosive-reactive armor typically seen attached to the hulls of Soviet/Russian tanks have powerful “back-blasts” that can dent the thinner metal armor of vehicles like the BMP-series inward.
https://thedeaddistrict.blogspot.com/2021/01/bmp-2-with-k-1-era.html

Here’s an interesting tour of an old Soviet T-54 tank. Driving that thing looks like a rough job.
https://youtu.be/SCaBLjg6No0

Azerbaijan has towed several destroyed Armenian tanks to Baku to be used as exhibits in a soon-to-be-built war museum.
https://thedeaddistrict.blogspot.com/2021/01/in-baku-preparations-begin-for.html

Here are the fascinating recollections of a career U.S. Navy sailor about life at sea, improvements in naval technology, and how the organization has changed (for better and worse).
https://www.thedrive.com/the-war-zone/13038/making-steam-high-seas-tales-and-commentary-on-todays-navy-from-a-chief-engineer

China has repurposed old artillery pieces to be forest fire extinguishers.
http://global.chinadaily.com.cn/a/201904/04/WS5ca554fca3104842260b456c.html

LED walls are made up of many smaller LED panels arranged in a grid to form one giant display of arbitrary size. I just saw one in an airport and was impressed. These might become common in homes within 10 years as prices drop and people demand TVs that would be too big to fit through their front doors if made of a single, rigid screen.
https://www.youtube.com/watch?v=rQxa8VruNJg&feature=emb_title

Here’s an interesting desalination plant. It uses solar power, pumps, a 90-meter tall hill, and reverse osmosis to make drinking water from seawater.
https://youtu.be/B4irlTMk_Os

An “acoustic resonator” is a piezoelectric device that converts noise into electricity. It can also do the reverse. The resonators could be placed underwater, where they would use the ocean’s ambient noises to recharge their batteries, and use that power to send their own sound-based data signals to other nearby devices.
https://www.economist.com/science-and-technology/2020/10/17/how-to-send-underwater-messages-without-batteries

“Fulgurites” are remarkable-looking minerals formed when lightning strikes and melts wet sand.
http://www.geologyin.com/2014/06/amazing-fulgurites.html

Here’s a big roundup of predictions for the 2020s by a bright guy I’ve never heard of. I respect his thoroughness, though I need more time to decide if I agree with him.
https://elidourado.com/blog/notes-on-technology-2020s/

Were the earliest plants purple instead of green? Are there alien planets covered in purple plants?
‘Because retinal is a simpler molecule than chlorophyll, then it could be more commonly found in life in the Universe…’
https://astrobiology.nasa.gov/news/was-life-on-the-early-earth-purple

Nobel Prizewinner Paul Crutzen died. He was a pioneer in global warming research, and later advocated geoengineering as a way to keep the phenomenon from getting out of control.
https://www.mpic.de/4677594/trauer-um-paul-crutzen

The Sapir-Whorf Hypothesis might be wrong.
‘On the other side of the debate are those who say that although language is indeed linked with cognition, it derives from thought, rather than preceding it. You can certainly think about things that you have no labels for, they point out, or you would be unable to learn new words. Supposedly “untranslatable” words from other tongues—which seem to suggest that without the right language, comprehension is impossible—are not really inscrutable; they can usually be explained in longer expressions. One-word labels are not the sole way to grasp things.’
https://www.economist.com/books-and-arts/2020/10/15/does-naming-a-thing-help-you-understand-it

Autonomous vehicles only designed to transport cargo could look very different from normal cars, as they wouldn’t need seats or safety features to protect humans during crashes. For those same reasons, they could be lighter and cheaper than regular cars.
https://www.reuters.com/article/us-autos-autonomous-safety-idUSKBN29J29Z

“AI video compression” sharply reduces the amount of data needed for video calls. The means by which this is accomplished is very interesting, and has other uses.
https://youtu.be/NqmMnjJ6GEg

Microsoft has patented a chatbot that would be able to mimic dead people after analyzing their “images, voice data, social media posts, electronic messages” and other data. I’ve predicted that this kind of technology will get advanced enough to let people achieve “digital immortality” during the 2030s.
https://www.independent.co.uk/life-style/gadgets-and-tech/microsoft-chatbot-patent-dead-b1789979.html

OpenAI’s latest boundary-pushing computer program is “DALL-E,” which can generate clear drawings from user-submitted written descriptions of what they should look like.
https://www.bbc.com/news/technology-55559463

Algorithms that can edit video footage are getting frighteningly advanced. Objects, including moving objects like humans and cars, can be easily deleted from video footage without anything looking amiss. Whatever was behind them is filled in.
https://youtu.be/86QU7_SF16Q

Most of the world’s top AI researchers go to universities in the U.S. and then get jobs there. China produces the most top AI researchers of any country (unsurprising given its large population), but few of them stay there.
https://macropolo.org/digital-projects/the-global-ai-talent-tracker/

This blog discusses how overregulation and risk-aversion have stifled innovation and cost-saving measures in the aviation industry.
https://elidourado.com/blog/why-aviation-innovation-matters/

Richard Branson’s Virgin Orbit company launched small satellites into space. A Boeing 747 flew to high altitude, and then released a rocket from under its wing, which ignited and flew into orbit.
https://www.cbsnews.com/news/richard-bransons-virgin-orbit-launches-rocket-from-under-boeing-747s-wing/

SpaceX launched 143 satellites using just one space rocket–a new record.
https://www.bbc.com/news/science-environment-55775977

‘Star lifting is any of several hypothetical processes by which a sufficiently advanced civilization…could remove a substantial portion of a star’s matter which can then be re-purposed, while possibly optimizing the star’s energy output and lifespan at the same time.’
https://en.wikipedia.org/w/index.php?title=Star_lifting

“Diamond planets” exist.
https://newatlas.com/science/carbon-diamond-stable-highest-pressure/

Tech tycoon Elon Musk briefly became the world’s richest person.
https://www.bbc.com/news/technology-55578403

Scientists have identified the types of cells that let some animals sense magnetic fields, and have observed them doing that for the first time. I think posthumans will have this extra sense.
“[We’ve] observed a purely quantum mechanical process affecting chemical activity at the cellular level.”
https://newatlas.com/biology/live-cells-respond-magnetic-fields/

There’s no scientific evidence that the food additive monosodium glutamate (MSG) hurts human health. The public health panic over MSG was spawned by a flawed study. In spite of this, many Americans still believe it is dangerous.
https://www.discovermagazine.com/health/msg-isnt-bad-for-you-according-to-science

The FDA just approved the first extended-release injectable HIV drug regimen. It suppresses the virus and only needs to be injected once a month, so it could replace daily doses of antiretroviral pills. Early HIV drugs had to be taken multiple times per day.
https://www.fda.gov/news-events/press-announcements/fda-approves-first-extended-release-injectable-drug-regimen-adults-living-hiv

Machine learning can optimize factories by studying ultra hi-res photos of their products at various stages in the manufacturing process. Something like a screw missing from a circuit board would be seen by the computer before the board left the building.
https://youtu.be/MOh55-TF6LQ

Are Silicon Valley’s days as the world’s tech hub over? Mandatory teleworking imposed by the COVID-19 pandemic has worked out better than many tech workers and founders expected, and they will push to make the arrangements permanent, leading many to leave the Bay Area for cheaper locales.
https://blog.initialized.com/2021/01/data-post-pandemic-silicon-valley-isnt-a-place/

We have no idea how many people COVID-19 has killed in sub-Saharan Africa.
‘In 2017, only 10 percent of deaths were registered in Nigeria, by far Africa’s biggest country by population — down from 13.5 percent a decade before. In other African countries, like Niger, the percentage is even lower.’
https://www.nytimes.com/2021/01/02/world/africa/africa-coronavirus-deaths-underreporting.html

In September, the University of Washington COVID-19 model (IHME) predicted 410,000 Americans would be dead by January 1:
‘Jha says his disagreement with IHME’s methodology amounts to much more than a technical debate. “The problem here is if we come in at 250,000 or 300,000 dead [by year’s end in the United States] — which is still just enormously awful — political leaders are going to be able to do a victory dance and say, ‘Look, we were supposed to have 400,000 deaths. And because of all the great stuff we did, only 300,000 Americans died.'” says Jha.’
The actual outcome didn’t satisfy anyone: the U.S. death toll hit 354,000 by the January 1 deadline, which made both the IHME modelers and skeptics like Jha look dumb. At the same time, no politicians did a victory dance.
https://www.npr.org/sections/goatsandsoda/2020/09/04/909783162/new-global-coronavirus-death-forecast-is-chilling-and-controversial

Mutant versions of COVID-19 have emerged in Britain and South Africa. They spread faster among people, and so will kill more people overall, even if they are no more lethal to any individual than the older strains of the virus. The toy calculation after the link shows why.
https://blogs.sciencemag.org/pipeline/archives/2021/01/04/variants-and-vaccines
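
This sketch mirrors a widely circulated epidemiologists’ argument: because transmission compounds exponentially across generations of infection, a faster-spreading strain outkills a deadlier one. All parameters are my own illustrative assumptions, and it ignores immunity and the depletion of susceptible people:

# Compare a 50% more lethal strain against a 50% more transmissible one.

def deaths(r, ifr, generations=10, seed=10_000):
    """Deaths from new infections over the given number of generations."""
    infections = sum(seed * r**g for g in range(1, generations + 1))
    return infections * ifr

R, IFR = 1.1, 0.008  # assumed baseline reproduction number and fatality rate

print(f"baseline:               {deaths(R, IFR):>9,.0f}")
print(f"50% more lethal:        {deaths(R, IFR * 1.5):>9,.0f}")
print(f"50% more transmissible: {deaths(R * 1.5, IFR):>9,.0f}")
# ~1,400 vs ~2,100 vs ~30,000 deaths: the faster-spreading strain is far
# deadlier in aggregate.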

The COVID-19 vaccines are probably also less effective against the South African strain.
https://blogs.sciencemag.org/pipeline/archives/2021/01/29/jj-and-novavax-data

There remains a small, but real chance that COVID-19 is a Chinese-made biological weapon that leaked from one of their labs.
http://www.rationaloptimist.com/blog/a-real-investigation-into-the-origins-of-covid/