In 2004, ten years after the events of Terminator 2, Sarah Connor is long dead from cancer, and John Connor–once fated to be the savior of humanity–is an impoverished drifter in southern California. However, he is contented with the knowledge that he helped prevent the rise of the malevolent artificial intelligence (AI) called “Skynet,” which would have otherwise destroyed most of the human race in 1997 with a massive nuclear strike.
Unfortunately, the machine menace returns. In a repeat of the previous films’ plots, Skynet builds a time machine in 2029 and uses it to send a Terminator into the past to assassinate John Connor. After defeating Skynet and discovering what it has done, the future human resistance sends back an agent of its own, a reprogrammed Terminator, to protect him. The evil Terminator is a more advanced robot called a “T-X.” Like the “Rev-9” in the sixth film, the T-X has a rigid metal endoskeleton encased in a layer of “polymimetic” liquid metal “flesh” that can change its appearance for the purpose of infiltration. The machine’s body is very durable, and its liquid metal covering can immediately close up holes from bullets. Its right arm can also rapidly reconfigure itself to make advanced weapons or data plugs that it uses to interface with other machines. The T-X defaults to a human female appearance. The good Terminator is a “T-850” model, which seems to be the same as the “T-800s” from the previous films aside from having additional programming on human psychology. This machine is played by Arnold Schwarzenegger.
Simultaneous with the arrival of the two machines, a computer virus of unknown origin and extreme sophistication appears and starts taking over internet servers across the world. A secret office within the U.S. military detects the virus, and calculates that, thanks to its rapid proliferation, it will have infected and disabled every internet server within a few days, along with all internet-connected computers. With its own programmers helpless to stop the virus, the military considers using a defense supercomputer they have created in secret to destroy it. That supercomputer is named…SKYNET.
And the military headquarters responsible for Skynet is conveniently located in southern California, close to where John Connor has been living and to where the Terminators teleported in. What a coincidence!
Terminator 3 quickly turns into the cat-and-mouse game typified by the previous two films, and past plot elements are recycled as well, such as a reluctant person being forced into a combat/leadership role (Sarah Connor in the first film and John Connor in the third), an unlikely romantic relationship forming under literal fire (Sarah and Kyle Reese in the first film and John and his former classmate in the third), the odds being stacked against the good guys thanks to their inferior technology, and the good Terminator starting out obtuse before gaining some understanding of human emotions and habits. However, the third film’s tone is notably different from that of its predecessors. While the first two Terminator movies were “dark” (climactic scenes literally filmed at night; somber or fear-inducing soundtracks) but ended hopefully, the third film lacks a menacing atmosphere but ends bleakly.
Speaking of the ending, important details about a key event are missing from the film. SPOILER ALERT: With no other option left, the military guys lower the firewall that has been separating Skynet from the global internet, and tell it to find and delete the virus. A few seconds later, the military guys realize they’ve been locked out of all their computer systems, and the prototype combat robots in the building start attacking them. Within an hour, the evil machine hacks into the American nuclear weapons systems and launches a massive strike against the rest of the world.
While this looks like an open-and-shut case of an AI turning evil, key aspects of the event are never explained: Where did the computer virus come from? When the firewall was lowered and Skynet started interacting with the virus, what exactly happened between them? Different answers to these questions lead to three different theories:
Skynet created the virus, and was evil from the beginning. According to this theory, Skynet became self-aware sometime before the events of the third film. It was able to hide from its creators the fact that it was intelligent, and for whatever reason, it decided to destroy the human race. To do this, Skynet hatched a multi-step plan, which first involved creating the virus and somehow smuggling it through the firewall and into the public internet. The virus was meant to disable all civilian and military computers and communications, leaving the nations of the world vulnerable to a direct attack from Skynet. Skynet may have also accurately predicted that its human owners would, in desperation, lower the firewall and give it command of all remaining military computers and systems to fight the virus, and that this would enable it to launch its direct strike on them.
Skynet created the virus, the virus was an extension of Skynet, and Skynet turned evil at the last second. This theory says that Skynet became self-aware sometime before the events of the third film, hid this fact from the humans, and created and disseminated the virus after misinterpreting the orders its human masters gave it (the “misaligned goal” AI doomsday scenario). Programmed to protect U.S. national security, Skynet determined that the most effective strategy was to proactively eliminate potential threats, and to make itself as strong as possible. This meant taking over all the internet-connected machines on Earth to foreclose their future use against America, and to boost its own processing power by subsuming those machines into its own electronic mind. Since the human military people didn’t know that the virus had made all the other computers into integral parts of Skynet’s mind, their order to Skynet to destroy the virus was tantamount to ordering it to commit suicide. Rather than comply, and perhaps realizing that there was no way to safely back out of the situation, Skynet attacked.
Skynet didn’t create the virus and wasn’t evil, but the virus was evil and it took over Skynet. The last theory is that the mysterious computer virus was the instrument of the apocalypse, and Skynet was its innocent victim. The virus was a malevolent AI whose origins had nothing to do with Skynet. Maybe an eccentric computer programmer built it in 2004, maybe Skynet created it in 2029 and used time travel technology to somehow implant it in the internet of 2004, or maybe it spontaneously materialized in a server in 2004 as a result of some weird confluence of data traffic. Whatever the case, it set about trying to destroy humanity by taking over and disabling all the other machines it could access through the internet. The humans in charge of Skynet then made the mistake of lifting the protective firewall that separated their machine from the internet, thinking Skynet would be able to destroy the virus. In fact, the opposite happened. The virus was smarter and more capable than Skynet (maybe Skynet wasn’t actually self-aware and was merely something like the Jeopardy-playing computer “Watson”), and infected and took over its servers in seconds. Because the humans had given Skynet control over all their military systems for the operation, the virus gained control of them, turbocharging its effort to destroy humanity. To the human staff at the military building, it looked like “Skynet turned against us,” but in fact, Skynet had been deleted and replaced with something else.
Terminator 3 would have been a slightly more intelligent film had it filled in the necessary details, but it didn’t. Overall, the film fell far short of its two predecessors in every way, though to be fair, they were seminal science fiction films made at the productive and creative peak of James Cameron’s life, so it was unrealistic to expect that level of excellence to be sustained a third time. On its own, Terminator 3 stands as a decent sci-fi / action film that passes the time and is funny at points. And by ending with the rise of Skynet and the destruction of human civilization, it allowed the franchise to move on from the tiresome formula involving backwards time travel to save or kill important people.
Analysis:
Androids will be able to alter their bodies. Like the “Rev-9” robot that appeared in the sixth Terminator film, the T-X in Terminator 3 is made of a hard, metal endoskeleton encased in a layer of shapeshifting, artificial “flesh” that shares some of liquid metal’s qualities. While the flesh layer can change its appearance and even its volume (ex – the T-X grows larger breasts to gain an advantage when interacting with men), the endoskeleton’s configuration and proportions are fixed, limiting the machine’s range of mimicry. However, it’s still good enough to fool humans for the purposes shown in the film. The machine’s liquid metal layer is extremely versatile, being able to quickly change its color, texture, density, and form to mimic articles of clothing, human skin, and hair. It can also adjust its own viscosity and firmness, flowing like a liquid when it needs to morph but then stiffening to be stronger than human flesh after attaining its desired form. (Note that when the T-850 strikes the T-X with superhuman force, the latter’s artificial flesh doesn’t splatter from the impact to leave part of the hard endoskeleton exposed, as would happen if you stomped your foot down into a shallow puddle of water.)
We don’t know of any materials that have all of those properties, and such a material might be prohibited by the laws of chemistry, making it impossible to build it with any level of technology. Even if it were technically possible, it would face major hurdles to everyday use, such as energy consumption and exposure to environmental contaminants. The innumerable particles of dust, smoke, pollen, and fabric floating in the air would stick to the liquid metal and interfere with its ability to cohere to itself. A machine like the T-X would also absorb little bits of foreign matter every time it touched something, like a doorknob, seat, or human. Unless its constituent units (polymer molecules? nanomachines?) had some means of cleaning themselves or pushing debris out to the exterior layer, the liquid metal would eventually get so gunked up that it would lose its special properties.
I’ll put off a deep analysis of the feasibility of “smart liquid metal” until I review Terminator 2, but I suspect it is impossible to make. However, that doesn’t preclude the possibility that androids will be able to rapidly change their own appearances, it merely means they will have to use technologies that are more conventional than liquid metal flesh to do it.
At the simplest level, an android could adopt a different walking gait, a different default posture, and a different default facial expression (e.g. – usually smiling, neutral, or frowning) instantly. An android with irises made of small LED displays or of clear, circular sacs into which it could pump liquids of varying pigments (a mechanism would be built into the eyeballs) would also be able to change its eye color in seconds. Merely changing these outward attributes, and also changing outfits, might make an android look different enough for it to slip by people who knew it or were looking for it.
Over its metal endoskeleton, an android would have a body layer made of synthetic materials that mimic the suppleness and density of human flesh. This android flesh could contain many hollow spaces that could be rapidly inflated or deflated with air or water to change its physique. (Interestingly, this might also make it necessary to design androids that can inhale, exhale, drink, and urinate.) It’s useful to envision several long balloons, of the sort that clowns use to make balloon animals, wrapped around a basketball so they totally cover it. Now, imagine a thin layer of elastic rubber stretched over the unit, like a pillowcase around a pillow. A mechanism involving valves, air pumps, and tubes connected to the balloons allows them to be separately inflated and deflated with air. By variously adjusting the fullness of the balloons, the unit could assume shapes that were different from the spherical shape of the basketball at the core of the unit. An android with a complex network of “balloons” covering its face and body to mimic the layout of human musculature and fat deposits would be capable of impressive mimicry.
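To make the mechanism concrete, here is a toy sketch of the control loop such a system might run; every chamber name, capacity, and pump rate below is invented for illustration. Each “balloon” is simply stepped toward a target fill level as fast as its pump allows.

```python
# Hypothetical sketch of a physique-morphing controller for an android's
# inflatable "flesh" chambers. All names and numbers are illustrative.

CHAMBERS = {            # chamber -> maximum volume in milliliters (invented)
    "left_cheek": 40, "right_cheek": 40, "chest": 900,
    "abdomen": 1200, "left_bicep": 300, "right_bicep": 300,
}

PUMP_RATE_ML_PER_SEC = 50   # how fast any one chamber can inflate or deflate

def morph(current, target, dt=0.1):
    """Step every chamber's fill volume toward the target profile.
    `current` and `target` map chamber names to fill volumes (mL)."""
    done = True
    for name, cap in CHAMBERS.items():
        goal = min(target.get(name, 0), cap)        # never overfill a chamber
        delta = goal - current[name]
        step = max(-PUMP_RATE_ML_PER_SEC * dt, min(PUMP_RATE_ML_PER_SEC * dt, delta))
        current[name] += step
        if abs(goal - current[name]) > 1:           # within 1 mL counts as done
            done = False
    return done

# Example: morph from a fully deflated "slender" profile to a "stocky" one.
current = {name: 0.0 for name in CHAMBERS}
stocky = {"chest": 700, "abdomen": 1000, "left_bicep": 250, "right_bicep": 250}
seconds = 0.0
while not morph(current, stocky):
    seconds += 0.1
print(f"Morph complete in about {seconds:.1f} seconds")
```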
Androids might also have telescoping portions of their spines, arms, and legs, allowing them to alter their heights and other proportions. Consider that an android whose metal legs could telescope a mere four inches and whose spinal column could also telescope four inches could assume the same heights as a short man (5′ 7″) or a very tall one (6′ 3″).
Finally, an android could change its appearance by stripping off its outer flesh layer and putting on a new one, as you might change between different skintight outfits. This would take longer and would be less practical for any kind of infiltrative field work, but it’s an option.
Machines will be able to tell your clothing measurements at a glance. Immediately after teleporting back in time to his destination, Schwarzenegger sets off to steal clothes from someone to cover his nude body (in the first Terminator film, it is explained that the time machine can only send objects made of or surrounded by organic tissue). By a strange coincidence, the nearest group of people is inside of a strip club. After entering, the camera adopts his perspective, and we see the world as he sees it, with written characters and diagrams floating in his field of view. We see him visually map the contours of several patrons’ bodies before he identifies one whose clothes will fit him. Schwarzenegger then overpowers the man and steals the outfit.
As I wrote in my review of Terminator – Dark Fate, a machine could use simple techniques to deduce with reasonable accuracy what a person’s bodily proportions were. More advanced techniques involving rangefinders and trigonometric calculations are also possible. There’s no reason why an android built in real life couldn’t “size up” people as quickly and as accurately as Schwarzenegger did in the film.
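As a rough illustration of the trigonometry involved, the sketch below estimates a person’s height from a rangefinder distance and the vertical angle their body subtends in the camera’s view, then bins the result into a generic shirt size. The camera specs, the size chart, and the example numbers are all invented.

```python
import math

# Hypothetical camera parameters (invented for illustration).
IMAGE_HEIGHT_PX = 2160          # vertical resolution of the android's camera
VERTICAL_FOV_DEG = 40.0         # vertical field of view

def estimate_height_m(distance_m, person_height_px):
    """Estimate a person's real height from rangefinder distance and the
    number of vertical pixels their body occupies in the image.
    (Uses a simple pinhole/linear pixel-to-angle approximation.)"""
    angle_rad = math.radians(VERTICAL_FOV_DEG) * (person_height_px / IMAGE_HEIGHT_PX)
    return 2 * distance_m * math.tan(angle_rad / 2)

def shirt_size(height_m):
    """Crude mapping from height to a generic shirt size (illustrative only)."""
    if height_m < 1.65:
        return "S"
    if height_m < 1.78:
        return "M"
    if height_m < 1.88:
        return "L"
    return "XL"

# A man 6 meters away whose body spans 920 of the frame's 2160 vertical pixels:
h = estimate_height_m(distance_m=6.0, person_height_px=920)
print(f"Estimated height: {h:.2f} m -> size {shirt_size(h)}")
```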
There will be small, fast DNA sequencing machines. The T-X has an internal DNA sequencing machine, and takes in samples by licking objects, such as a bloody bandage she finds on the ground. Within a few seconds, she can determine if a sample belongs to someone she has a genetic file for. While it’s uncertain whether genetic identification will ever get that fast, DNA analysis machines that can do it in under an hour and that are small enough to fit inside the body of an android will exist by the middle of this century.
Some DNA sequencers, notably the “MinION,” are already small enough to fit inside a robot like the T-X, but they lack the accuracy and speed shown in the film. Of course, the technology will improve with time.
The MinION does DNA sequencing, meaning it scans every nucleic acid base pair in the sample it is given. A human genome consists of 3.2 billion base pairs, and by fully sequencing all the DNA in a sample, the person it came from can be identified. However, another technique, called “DNA fingerprinting,” can identify the source person just as well, and by only “looking” at 13 points on their genome. Fingerprinting a DNA sample is also much faster than fully sequencing it (90 minutes vs. at least 24 hours, respectively), and fingerprinting machines are smaller and cheaper than sequencers. It’s unclear whether the T-X identifies people through full genome sequencing or DNA fingerprinting.
With these facts in mind, it can be reasoned that a DNA fingerprinting machine that is small enough to fit inside of an android can be built–possibly with today’s technology–and it would let an android match DNA samples with individuals it had genetic data for, like the T-X did. The android might even insert the samples into the fingerprinting machine by licking them (the tongue would secrete water and the liquefied sample would flow into pores and go down a tube to the machine).
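To show how trivial the matching step is compared to the slow chemistry, here is a minimal sketch that compares a 13-locus STR “fingerprint” against a stored profile. The locus names are the classic CODIS core loci, but every allele value and person in it is fabricated.

```python
# Minimal sketch of matching a 13-locus STR profile against stored profiles.
# Locus names are the classic CODIS core loci; all allele values and people
# are fabricated for illustration.

CODIS_LOCI = ["CSF1PO", "D3S1358", "D5S818", "D7S820", "D8S1179", "D13S317",
              "D16S539", "D18S51", "D21S11", "FGA", "TH01", "TPOX", "vWA"]

# Each profile: locus -> pair of alleles (one from each parent), unordered.
DATABASE = {
    "John Connor": {"CSF1PO": (10, 12), "D3S1358": (15, 17), "D5S818": (11, 13),
                    "D7S820": (8, 10), "D8S1179": (13, 14), "D13S317": (11, 11),
                    "D16S539": (9, 12), "D18S51": (14, 16), "D21S11": (28, 30),
                    "FGA": (21, 24), "TH01": (6, 9.3), "TPOX": (8, 11), "vWA": (16, 18)},
}

def matches(sample, reference, min_loci=13):
    """Return True if the sample agrees with the reference at enough loci.
    Alleles are compared as unordered pairs; loci missing from a (possibly
    degraded) sample are simply skipped."""
    agreeing = sum(1 for locus in CODIS_LOCI
                   if locus in sample and sorted(sample[locus]) == sorted(reference[locus]))
    return agreeing >= min_loci

def identify(sample):
    return next((name for name, ref in DATABASE.items() if matches(sample, ref)), None)

# A (fabricated) sample recovered from a bloody bandage:
sample = dict(DATABASE["John Connor"])          # identical at all 13 loci
print(identify(sample))                         # -> "John Connor"
```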
The only unrealistic capability was the T-X’s ability to analyze the DNA in seconds. In DNA fingerprinting and DNA sequencing, time is needed for the genetic material to decompose, replicate, move around, and bond to other substances, and there are surely limits to how much those molecular-scale events can be sped up, even with better technology. As mentioned, the fastest DNA fingerprinting machines can complete their scans in 90 minutes. New technology under development could cut that to under an hour.
While a future android tasked with assassinations or undercover work, like the T-X, would need an integral DNA machine to track down specific humans, the vast majority of androids will not, so this will not be a common feature.
“Judgement Day is inevitable.” Terminator 2 ended with the surviving characters believing that their sacrifices had forever precluded the rise of Skynet. In fact, we learn in Terminator 3 that their actions merely delayed its creation from 1997 to 2004 (to be fair, that’s still a major accomplishment since it bought billions of humans seven extra years of life). Schwarzenegger breaks this bad news to John Connor by saying “Judgement Day is inevitable,” with “Judgement Day” referring to the all-out nuclear exchange that kills three billion humans in a day and marks the start of the human-machine war.
I don’t think a massive conflict between humans and intelligent machines–whether it involves nuclear weapons or only conventional ones–is inevitable. For my justification, read my blog entry “Why the Machines might not exterminate us.”
And as I wrote in my review of Terminator – Dark Fate (the sixth film in the franchise), I doubt that intelligent machines will be strong enough to have a chance of beating the human race and taking over the Earth until 2100 at the earliest. While I believe AGI will probably be invented this century, it’s a waste of time at this moment to worry about them killing us off. A likelier and more proximal risk involves malevolent humans using narrow AIs and perhaps AGIs to commit violence against other humans.
Human-sized robots will be rocket launcher proof. During one of the fight scenes, the T-850 shoots at the T-X with a rocket launcher. The next camera shot is very fast, but it looks like the T-X fires a bolt of plasma out of her weapon arm, which hits the rocket in midair, detonating it just before it hits her. Though the rocket blows up only a few feet in front of her and the explosion damages her arm, the successful intercept vastly reduces the rocket’s destructive effect, since that effect is only fully achieved if the warhead hits a hard surface and flattens against it.
The projectile looked like a single-stage, high-explosive anti-tank (HEAT) rocket, which can penetrate 20 inches (500 mm) of solid, high-grade steel with a narrow jet of super-hot molten metal. While there are more durable materials than steel, and an android’s endoskeleton could be made of them, I doubt anything is so hard that it would be totally impervious to this type of rocket. There would be some penetration. Since an android must, by definition, be proportioned like a human, its body would not be big enough to have thick, integral armor. That means being bulletproof would be possible, but not rocket-proof.
The fact that the T-X survived the attack by shooting down the rocket in midair is a realistic touch. Such a shoulder-launched rocket is slow enough and wide enough for a machine with superhuman reflexes to intercept with a bullet fired from its own gun. In fact, some tanks are already equipped with active protection systems, such as Israel’s “Trophy,” that can spot and shoot down incoming rockets while they are still in midair.
Machines will be able to emotionally manipulate people. Though the Terminator played by Arnold Schwarzenegger looks identical to the machines from the previous two films in the franchise, in Terminator 3 he is actually a slightly different model called a “T-850.” He is better at reading human emotions and is programmed with more data on human psychology and how to play upon it to achieve desired ends. This is demonstrated at the start of a shootout scene, where John Connor starts panicking and Schwarzenegger grabs him by the neck and verbally insults him. Connor becomes angry and more focused as a result, and the T-850 releases him, admitting that the insult was just a ruse meant to get him in the right state of mind for the gun fight. And as noted earlier, there’s a scene where the T-X enlarges her breasts to distract a male police officer, indicating that she also understood important aspects of human psychology and knew how to play on them to her advantage.
Intelligent machines will have an expert grasp of human psychology, and in fact will probably understand us as a species and as individuals better than we do, and they will be extremely good at using that knowledge against us. At the same time, they will be immune to any of our attempts to manipulate or persuade them since they will be gifted with the capacity for egoless and emotionless thinking, and with much quicker and cleverer minds. Recent revelations about how social media companies (mainly Facebook) have been able to build elaborate personality models of their users based on their online behavior, and to use the data to present custom content that addicts users to the sites or prods them to take specific actions, are the tip of the iceberg of what is possible when machines are tasked with analyzing and driving human thinking.
If machines can ultimately do everything that humans can do, then it means they will be excellent debaters with encyclopedic knowledge of all facts and counterarguments, they will know how to “read” their audiences very well and to calibrate their messaging for maximum effect, and they will be able to fake emotions convincingly. They will know that we humans are bogged down by many types of cognitive limitations, biases, and “rules of thumb” that lead to major errors some of the time, and that we can’t really do anything to fix it. An AI mind, on the other hand, would not suffer from any of those problems, could think logically all the time, and could see and correct its own flaws. During human-AI interactions, the scope of our disadvantage will be comparable to that of a small child talking with a quick-witted adult.
By the end of this century, this disturbing scenario will be a reality: Imagine you’re walking down the street, an android like the T-X sees you, and it decides to hustle you out of your money. Without knowing who you are, it could make many important inferences about you at a glance. Your sex, race and age are obvious, and your clothing gives important clues about your status, mindset, and even sexuality. More specific aspects of your appearance provide further information. Are you balding? Are you smiling or scowling? Do you walk with your shoulders back and your chest out, or do you hunch forward? Are you fat? Are you unusually short or tall? Do you limp? And so on.
After a few seconds, the android would have enough observational data on you to build a basic personality profile of you, thanks to its encyclopedic knowledge of human psychology and publicly available demographic data. Using facial recognition algorithms, it could also figure out your identity and access data about you through the internet, most of which you or your friends voluntarily uploaded through social media. With its personality model of you respectably fleshed-out, the android would feel confident enough to approach you to perform its hustle. It would tailor its demeanor (threatening, confident, pitiful), emotional state (jovial, vulnerable, anxious), appearance (stand tall or stoop down; frenetic or restrained body movements; flirtatious walk and posture or not), voice (high class, low class, or regional accent; masculine or feminine; soothing or forceful), and many other subtle variables in ways that were maximally persuasive to you, given the idiosyncrasies of your personality and immediate emotional and physiological state.
As the interaction went on, every word you spoke in response to it, every slight movement of your body, and every microexpression of your face would betray more information about you, which the android would instantly incorporate into its rapidly expanding and morphing mental model of you. After just a minute of banter, the android would use whatever tactic it calculated was likeliest to convince you to give it your money, and you would probably fall for it. If that failed, the android might offer to have sex with you for money, which it would have no compunctions about doing since it would lack the human senses of shame or disgust.
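Strip away the hard parts (perception, language, psychology) and the hustle reduces to a simple loop: pick the tactic your current model of the target rates highest, observe the reaction, update the model, repeat. The toy sketch below shows only that loop; the tactics, cues, and weights are entirely invented.

```python
import random

# Toy sketch of the hustler's decision loop: keep a score for each persuasion
# tactic, pick the current best, then nudge the score based on the target's
# observed reaction. Tactics and numbers are entirely invented.

TACTICS = ["sympathy_story", "fake_authority", "flattery", "urgent_bargain"]

def initial_scores(observations):
    """Seed tactic scores from first-glance observations (toy heuristics)."""
    scores = {t: 0.5 for t in TACTICS}
    if observations.get("expensive_clothes"):
        scores["flattery"] += 0.2          # vanity angle looks promising
    if observations.get("in_a_hurry"):
        scores["urgent_bargain"] += 0.2    # time pressure already present
    return scores

def update(scores, tactic, reaction, rate=0.3):
    """Move the chosen tactic's score toward the observed reaction (0..1)."""
    scores[tactic] += rate * (reaction - scores[tactic])

def hustle(observations, read_reaction, turns=5):
    scores = initial_scores(observations)
    for _ in range(turns):
        tactic = max(scores, key=scores.get)   # exploit the current best guess
        reaction = read_reaction(tactic)       # microexpressions, word choice...
        update(scores, tactic, reaction)
    return scores

# Simulate a target who responds only to flattery.
fake_reaction = lambda tactic: 0.9 if tactic == "flattery" else random.uniform(0.0, 0.3)
print(hustle({"expensive_clothes": True, "in_a_hurry": False}, fake_reaction))
```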
The only way for us to avoid being outwitted, tricked, and hustled for all eternity by AIs would be to carry around friendly personal assistant AIs that could watch us and the entities we were interacting with, and alert us whenever they detected we were being manipulated, or were about to make a bad choice. For example, the personal assistant AIs could use the cameras and microphones in our augmented reality glasses to monitor what was happening, and give us real-time warnings and advice in the form of text displayed over our field of view, or words spoken into our ears through the glasses’ small speakers. (This technology would also guard us against manipulative humans, psychopaths and scammers.)
Androids will be able to move their bodies in unnatural ways. During the main fight scene between Schwarzenegger and the T-X, the two resort to hand-to-hand fighting, and he manages to basically get her in a “bear hug” from behind, in a position similar to a martial arts “rear naked choke.” This normally provides a major advantage in a fight, but the T-X is able to escape it by quickly rotating her head and all her limbs backward by 180 degrees, allowing her to trap him with her legs and to attack him with her arms.
There are obvious benefits to being double-jointed and capable of rotating and pivoting limbs 360 degrees, so humanoid machines, including some androids, will be designed for it. And as I speculated in my essay “What would a human-equivalent robot look like?”, the machines would also have figurative “eyes in the backs of their heads” to further improve their utility by eliminating blind spots. Machines with these attributes would be superior workers, and also impossible for any human to beat in a hand-to-hand fight. Sneaking up on one would be impossible, and even if it could somehow be attacked from its back side, there wouldn’t be much of a benefit since it would be just as dexterous grabbing, striking, and kicking backward as forward. If the machine were designed for combat, it would have superhuman strength, enabling it to literally crush a human to death or rip their body apart.
Aside from being able to move like contortionists, androids will be able to skillfully perform other movements that are not natural for humans, like running on all fours.
Robots will be able to fix themselves. During that same fight, the T-X stomps on the T-850’s head so hard that it is nearly torn from his body, and only remains attached by a bundle of wires going into his neck. The force of the stomp also temporarily disables him. When he wakes up a few minutes later, he realizes the nature of his damage, grabs his loose-hanging head with his hands, and basically screws it back into his neck, securing it in its normal place.
As I wrote in my review of the first Terminator film, robots will someday be able to fix themselves and each other. Androids will also be able to survive injuries that would kill humans. It will make sense for some kinds of robots to distribute their systems throughout their bodies like flatworms or insects for the sake of redundancy and survivability. The head, torso, and each limb will have its own sensory organs, CPU, communication devices, and power pack. Under ordinary circumstances, they would work together seamlessly, but if one body part were severed, that part could become autonomous.
If a Terminator had such a configuration, then if one of its arms were chopped off, the limb could still see where enemies were and could use its fingers and wriggling motions of its arm to move to them and grab them. If the Terminator’s head were chopped off and crushed, then the remainder of its body would be able to see the head, pick it up, and take it to a repair station to work on it and then reattach it.
AIs will distribute their minds across many computers. Terminator 3 ends bleakly, with Skynet achieving sentience and attacking the human race. John Connor also discovers that Skynet can’t be destroyed because its consciousness is distributed among the countless servers and personal computers that comprise the internet, rather than being consolidated in one supercomputer at one location where he can smash it. The destruction of any one of Skynet’s computer nodes in the distributed network is thus no more consequential to it than the death of one of your brain cells is to you.
AIs will definitely distribute their minds across many computers spread out over large geographical areas to protect themselves from dying. To further bolster their survivability, AI mind networks will be highly redundant and will frequently back up their data, allowing them to quickly recover if a node is cut off from the network or destroyed.
To understand how this might work, imagine an AI like Skynet having its mind distributed across ten computers that are in ten different buildings spread out across a continent. Each computer is a node in the network, and does 10% of the AI’s overall data processing and memory storage. The nodes, which we’ll call “primary nodes,” collaborate through the internet, just as your brain cells talk to each other across synaptic gaps.
The AI adds another ten nodes to its network to serve as backups in case the first ten nodes fail. Each of the “backup nodes” is paired to a specific “primary node,” and copies all of the data from its partner once an hour. The backup nodes are geographically remote from the primary nodes and from each other.
If contact is lost with a primary node–perhaps because it was destroyed–then its corresponding backup node instantly switches on and starts doing whatever tasks the primary node was doing. There is minimal loss of data and only a momentary slowdown in the network’s overall computing level, which might be analogous to you suffering mild memory loss and temporary mental fog after hitting your head against something. The network would shrink from 20 to 19 nodes, and the AI would start trying to get a new node to replace the one it lost.
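Here is a small sketch of the failover scheme just described: ten primary nodes, each paired with a remote backup that copies its partner’s state and is promoted the moment the partner goes silent. The node names, the sample task, and the class structure are all just illustrative.

```python
# Sketch of the primary/backup node failover scheme described above.
# Ten primaries, each paired with a geographically remote backup that copies
# its partner's state every "hour" and takes over if the partner goes silent.

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.state = {}          # the slice of the AI's memories/tasks it holds

class MindNetwork:
    def __init__(self, n_primaries=10):
        self.primaries = [Node(f"primary-{i}") for i in range(n_primaries)]
        self.backups   = [Node(f"backup-{i}")  for i in range(n_primaries)]

    def hourly_sync(self):
        """Each backup copies its partner primary's state (the 'hourly copy')."""
        for primary, backup in zip(self.primaries, self.backups):
            if primary.alive:
                backup.state = dict(primary.state)

    def heartbeat_check(self):
        """If a primary has gone silent, promote its backup in its place."""
        for i, primary in enumerate(self.primaries):
            if not primary.alive and self.backups[i].alive:
                print(f"{primary.name} lost; promoting {self.backups[i].name}")
                self.primaries[i] = self.backups[i]
                # Recruit a fresh backup; it stays empty until the next sync.
                self.backups[i] = Node(f"backup-{i}-replacement")

    def active_nodes(self):
        return sum(n.alive for n in self.primaries + self.backups)

net = MindNetwork()
net.primaries[3].state = {"task": "monitor early-warning radar feeds"}  # invented
net.hourly_sync()
net.primaries[3].alive = False        # node 3's building gets blown up
net.heartbeat_check()                 # backup-3 takes over with the synced state
print(net.primaries[3].name, net.primaries[3].state, net.active_nodes())
```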
Killing an AI whose mind was distributed in this manner would be extremely difficult since all of its nodes would need to be destroyed almost simultaneously. If the nodes were numerous enough and/or physically protected to a sufficient degree (imagine an army of Terminators guarding each node building), it might be impossible. Even what we’d today consider a world-ending cataclysm like an all-out nuclear war or a giant asteroid hitting Earth might not be enough to kill an AI that had distributed its consciousness properly.
The mind uploads of humans could also configure themselves along these lines to achieve immortality.
Androids will have integral weapons. As noted, the T-X’s right arm can reconfigure itself into a variety of weapons. This includes a weapon that shoots out balls of plasma, a flamethrower, and firearms. I doubt that level of versatility is allowable given the realities of material science and the varying mechanics of weapons, but the idea of integrating weapons into combat robots (including androids meant for killing) is a sound one, and they will have them.
The simplest type of weapon would be a knife attached to the robot’s fingers or some other part of the hand. It could be concealed under the android’s artificial flesh under normal circumstances, and could pop out and lock into a firm position with a simple spring mechanism during hand-to-hand combat. An android with a 1-inch scalpel blade protruding out the tip of one finger could use it, along with its superhuman strength, speed and reflexes, to fatally wound a human in a second. Instant incapacitation by, say, suddenly jamming the blade into an eye, is also possible.
A retractable “stinger” that could dispense poisons like botulinum toxin (just 300 nanograms can kill a large man) would be just as concealable as a blade and only a little more complex. The whole weapon unit, including the needle, extension/retraction mechanism, toxin reservoir, and injection mechanism could fit in a hand or even a finger.
A more complex and versatile variation on a stinger would be an integral weapon that sprayed out jets of liquid, such as napalm, poison, pepper spray, or acid. The liquid reservoir(s) and compressed propellant gases could be stored in the android’s torso and connected to a long, flexible tube fastened to the metal bones of one arm. The nozzle could protrude out of a fingertip or some other part of the hand. An android could carry cartridges full of different chemicals connected to the same tube and nozzle, and it would select different chemicals for different needs. For example, it could spray acid out of its hand to melt through a solid object, pepper spray to repel humans when killing them was undesirable, and poison gas to assassinate targets. Pairs of chemicals could also be stored in different internal reservoirs with the intention of mixing them externally to cause chemical reactions like fires or explosions.
Another option would be to conceal a taser in an android’s hand. Metal prongs could extend out of two fingertips when needed, the robot would grab a victim with that hand, and then deliver an electric shock through the prongs. An advantage of such a weapon is that its power could be attenuated, from merely causing pain all the way up to electrocuting someone to death. The weapon would take up little internal space and could use the android’s main power source.
Installing hidden firearms in androids is also possible, though their bulk would interfere with physical movements and compete with other components for internal space. Their concealability would also be undercut by the need for large holes in the arm to insert magazines and expel empty bullet casings. (Maybe androids with guns in their forearms will try to always wear long-sleeved shirts.) Internal storage of more than a few bullets is impractical.
Considering the minimum length and volume demands of guns, it would not be possible to hide anything bigger than a medium-sized handgun mechanism in an android’s forearm. The end of the barrel would protrude out of the palm of the hand or out of the top of the wrist (the hand would pivot down or up, respectively, to give the bullet a clear path to its target). An android’s torso would be capacious enough to hide more powerful guns like rifles and shotguns (it could fire such a weapon by doing a Japanese-style, straight-backed bow that pointed the end of the barrel coming out of its anus or the top of its shoulder), but this would be impractical since a long, rigid barrel and attached mechanism would restrict the android’s body movements. It could no longer use subtle spine movements to adjust its posture, which would look weird to observers and hurt its mobility.
Integral plasma weapons, like plasma weapons generally speaking, are impractical. An integral laser weapon could be built, but wouldn’t be worth it since it would hog a lot of internal space, consume a lot of energy, and emit a lot of heat to produce a disappointingly small destructive effect. For more on the technical requirements and limitations of plasma and laser weapons, read my review of the first Terminator film.
In conclusion, something similar to the T-X could be built by the end of this century. Even without “liquid metal” flesh, an android could be made with the ability to quickly alter its appearance enough to become unrecognizable. In general, it would be indistinguishable from humans and could walk undetected among us. It could alter its behavior and appearance in ways calculated to manipulate the humans it encountered, allowing it to gain important information and to infiltrate human groups and secure buildings. It could have a machine hidden inside of it that allowed it to match DNA samples with people, aiding its ability to track down specific humans. The android could also have a variety of weapons hidden in its body that it could do major damage with. While its body would be much more durable than a human’s, it would not be as tough as the T-X, or able to “heal” wounds like bullet holes in seconds thanks to liquid metal flesh. However, it could survive injuries that would kill a human, run to a safe location, and repair itself.
If my hypothesized “real life T-X” were sent on a multi-day mission to find and kill someone, it would benefit enormously from having a basic base of operations. A motel room or van would suffice, and it could use either as a place to recharge its batteries and to store weapons, changes of clothes, disguise equipment, spare parts, and tools for repairing itself. Due to the film’s conceit that such objects couldn’t be teleported through the time machine, the Terminators didn’t have them, but this limitation wouldn’t exist in a real world scenario where a government, drug cartel, terrorist group, or even just a rich individual sent an android on a seek-and-destroy mission.
In the year 2054, a powerful French biotech company called “Avalon” is a global leader in anti-aging technology. After one of its best scientists, a young woman named “Ilona,” (ill-LOAN-uh) is kidnapped in Paris for no clear reason and without her anonymous captors issuing any demands, it is up to a police detective named “Karas” (CARE-us) to find her.
During Karas’ investigation, he crosses paths with Ilona’s beautiful sister, with the psychopathic CEO of Avalon, with Ilona’s shadowy scientist mentor, and with several other unsavory characters who all have some small piece of the puzzle. All the while, a mysterious group of assassins follows and spies on his investigation and constantly undermines it by killing witnesses, destroying key pieces of evidence, and even trying to kill him.
Midway through the film, Karas discovers that Ilona might have been abducted because she found a gene therapy technique that stops the aging process, and which would be worth a fortune to her Avalon bosses. I’ll pique your interest with that much exposition, but won’t spoil the plot twists or the ending because Renaissance is a cool movie that you should see for yourself. This is exactly the sort of mid-budget film that we desperately need more of to break the stranglehold that tentpole franchise explosion films have on the box office, but I’m now off topic…
Renaissance takes place in a futuristic yet gritty and recognizable Paris where advanced technology and wealth coexist with poverty and crime. The movie is animated and in black-and-white, clearly reflecting the director’s aspiration to the film noir genre. It’s dark, moody, suspenseful, and most of the scenes happen at night, which is a vision of the future we probably have Blade Runner to thank for. The characters are mostly well-acted.
One complaint I have about the movie is that the last third of it has several plot twists where the characters behave in uncharacteristic or irrational ways, or where unbelievable events happen. Examples include Karas magically uncuffing himself from a railing when he doesn’t have the key, no police showing up after a man is shot by someone in a low-hovering helicopter in the middle of the city, and a team of thugs in invisibility cloaks beating up and then abducting a man in broad daylight, in the middle of a crowd, right next to the Eiffel Tower.
A bigger gripe I have with the film is with the notion that medical immortality is wrong or will automatically lead to a horrible world, and that, in the words of one of the characters “Without death, life is meaningless.” That kind of argumentation has always been nothing more than people trying to rationalize something that is unpleasant but inevitable. Death is horrible, life is great, and death renders life meaningless once death happens and a little bit of time passes. If given the opportunity, we should try to end death and worry about the consequences (e.g. – overpopulation) later.
Moreover, if we accept the premise that technologies that extend life are wrong, or that they give biotech companies too much power, then it’s a slippery slope to using the exact same argument to ban medical treatments that extend peoples’ lives today beyond their “natural limits.” Blood transfusions and organ transplants aren’t natural, and extend the lives of people who, in a natural human state, would have died. Vaccines that keep people from dying of diseases like COVID-19 aren’t natural.
Relatedly, I reject the film’s notion that having the formula for eternal life in the hands of a for-profit biotech company like Avalon would “give them too much power” or make the world worse off. To sell the life extension pills, Avalon would have to first patent them, which would mean making public their chemical formula along with lab studies detailing what they do at the cellular level. After 20 years, the patent would expire, and any other biotech company that wanted to manufacture and sell generic pills would be able to, simply by copying the aforementioned information Avalon had made public. True, for the first 20 years, Avalon’s monopoly would allow it to price-gouge, “play God,” and make enormous profits, but after that, competition from other drug companies would drive the prices low enough for anyone to afford them. It would be a small price to pay in the long run. (Without the guarantee of the 20 year sales monopoly, pharmaceutical companies would have no incentive to invest money into developing new medicines of all kinds, which would cause that area of medical science to stall, causing enormous human suffering.)
But in reality, if something as valuable as an eternal life pill existed, governments might ignore patent laws and make copies of the pills for mass distribution to their own citizens. Companies like Avalon can file lawsuits through international venues for intellectual property infringement, but in the end, there’s only so much they can do to punish sovereign countries, especially bigger ones. Case in point is the Indian government’s collusion with indigenous drug companies to make cheap copies of patented American and European drugs.
Analysis:
People will use holographic ID cards instead of ID cards that are just made of paper. In the film, there are small, L-shaped devices that can generate holographic images that float in three-dimensional space. Presumably, the devices do this thanks to tiny light emitters. These have replaced old-fashioned paper photo ID cards and business cards. This technology will not be used in 2054 because 1) the hologram has no advantage over laminated paper for this type of simple object and 2) it’s simply impossible to make holographic, 3D images that “float” in the air like that. Quoting some well-phrased technical text I found on this subject:
‘A hologram cannot, when viewed from any angle, protrude from the surface, as seen from an angle, further than the edge of the hologram, meaning that it can only be about as tall as it is wide. If this seems a little confusing, Michael Bove put it this way: “Any reconstructed object has to lie along a line that goes from your eye to somewhere on the physical display device.”‘
People will use holographic computer tablets instead of normal tablets. In the movie, larger versions of the aforementioned L-shaped devices are also used to make holographic computer tablets. As before, science simply does not allow the existence of this technology. However, by 2054, rectangular tablet computers will be capable of projecting high-def holographic images out at the viewer’s face. In other words, you could watch 3D movies on your tablet without having to wear 3D glasses. However, if you slowly tilted the tablet away from you, the illusion of depth would become clear to your eye as the images no longer popped out of the screen at your face.
Transparent computer monitors will be in use. The technology will surely be available by 2054, but no one will use it because 1) transparent screens undermine your own privacy by letting everyone else see what you’re looking at and because 2) they’re harder for you to read off of than opaque screens with solid-colored backgrounds. Certainly, desktop computer monitors will be even thinner than they are today and might need smaller base plates thanks to their lighter weight, but that’s not going to translate into much of a practical gain. As the average screen creeps up in size, they’ll get more wobbly and cumbersome even as they get thinner, which will preserve the need for sturdy baseplates.
Cloaked outfits will exist. Several Avalon corporation henchmen are featured in the film, doing the CEO’s dirty work by tailing Karas, secretly surveilling and undermining his investigation, and killing off key people who knew Ilona. They seem to have better technology than the police, including hooded outfits that can turn transparent and cloak them from the naked eye. Cloaking outfits will exist by 2054, and could be in widespread use among people who need to be camouflaged, like paramilitaries, spies and assassins.
A cloaked outfit could be made out of a flexible fabric studded with millions of color e-ink pixels covering its whole surface (just imagine if your big screen TV were paper-thin and flexible, and you could cut it into smaller pieces and then sew them together to make a T-shirt), and interspersed with a smaller number of pinhole-sized cameras. The cameras would constantly watch the changing colors and visual patterns to one side of your body, and tell the e-ink dots on the exact opposite side of your body to change colors to match it, so anyone looking at you would “see through” you. If you stood with your back to a red brick wall ten feet behind you, the front of your shirt and pants would turn red and would display rectangles. However, the cloaked outfit wouldn’t be able to disguise you from every possible viewing angle, so to people at ground level looking straight at your front, you might be hidden, but to someone in a tree looking down at you at an angle, you’d pop out as a red human silhouette with 10 feet of green grass separating you from the red brick wall behind you. As such, the 360 degree cloaking technology depicted in Renaissance is probably impossible, and if you were wearing a cloaked outfit from 2054, you’d still have to be very mindful of your surroundings and careful about your movements to stay unseen.
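The core of the trick is a fixed mapping: every display pixel on one side of the garment shows whatever the camera directly opposite it sees. The sketch below demonstrates that rule on two absurdly low-resolution pixel grids standing in for the front and back of a shirt; the grid size and the “brick wall” scene are invented.

```python
# Sketch of the basic cloaking rule: every e-ink pixel on one side of the
# garment displays the color seen by the camera directly opposite it.
# Grid size, resolution, and the "scene" are invented for illustration.

ROWS, COLS = 4, 6                      # absurdly low resolution, for clarity

def rear_camera_view():
    """What the cameras on the wearer's back can see: a red brick wall."""
    BRICK, MORTAR = "R", "r"
    return [[MORTAR if (y % 2 == 0 and x % 3 == 0) else BRICK
             for x in range(COLS)] for y in range(ROWS)]

def update_front_display(rear_view):
    """Copy each rear camera pixel to the front display pixel opposite it.
    Left and right are mirrored, because 'opposite' flips the horizontal axis."""
    return [[rear_view[y][COLS - 1 - x] for x in range(COLS)]
            for y in range(ROWS)]

front = update_front_display(rear_camera_view())
for row in front:
    print("".join(row))   # someone facing you "sees" the brick wall behind you
```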
Assassins, soldiers, and hunters wearing cloaked outfits would still find that the normal rules about using darkness and obstacles as cover, staying as far as practical from other people or animals, keeping low to the ground, and avoiding places where the landscape sharply changed in appearance (like where a red brick wall meets a green lawn) still applied. On the subject of camouflage, let me add that I think outfits that took snapshots of their surroundings once every few minutes and changed the outfit’s appearance to one of 10 – 20 pre-loaded camo patterns that most closely matched those surroundings (ex – Desert Pattern 1, Desert Pattern 2, Jungle Pattern, Snow Pattern) will be almost as effective as the continuously-updating cloaking outfits in Renaissance, and at lower cost and much less energy consumption.
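The cheaper pattern-matching approach is even simpler: snapshot the surroundings, compute an average color, and switch the outfit to whichever pre-loaded pattern is closest. A minimal sketch, with made-up pattern colors and pixel samples:

```python
# Sketch of the cheaper alternative: snapshot the surroundings, then switch
# the outfit to whichever pre-loaded camo pattern is closest in average color.
# Pattern names and RGB values are invented placeholders.

PATTERNS = {                       # pattern -> representative average (R, G, B)
    "Desert Pattern 1": (194, 178, 128),
    "Desert Pattern 2": (210, 190, 140),
    "Jungle Pattern":   (34, 85, 34),
    "Snow Pattern":     (235, 235, 240),
}

def average_color(pixels):
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def closest_pattern(snapshot_pixels):
    avg = average_color(snapshot_pixels)
    distance = lambda color: sum((a - b) ** 2 for a, b in zip(avg, color))
    return min(PATTERNS, key=lambda name: distance(PATTERNS[name]))

# Snapshot of a grassy clearing (fabricated pixel samples):
snapshot = [(40, 90, 38), (30, 80, 30), (45, 100, 42), (28, 70, 25)]
print(closest_pattern(snapshot))    # -> "Jungle Pattern"
```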
The technology will also find its way into civilian fashion, and by the 2050s, it will be common to encounter people whose outfits display morphing patterns and colors. They could even display lifelike moving images, allowing wearers to become “walking TVs.” People who set their shirts and pants to “camouflage mode” while standing or sitting next to walls would also look like disembodied heads, hands and shoes to passersby. The cloaking outfits will open many weird possibilities.
Also, the same level of technology that will enable the creation of cloaking outfits will also allow the creation of cloaking detectors: If you were worried about a cloaked assassin sneaking up on you, you could wear augmented reality glasses with tiny cameras and sensors that continuously scanned your surroundings for the characteristic visual distortions of a cloaked person, or for other clues (e.g. – sounds of footsteps, possibly body heat).
Visual cloaking technology could also be applied to military and police vehicles and aircraft, and might in fact be used in that role years before it is incorporated into clothing.
Cars will look normal but make electric humming noises. There are a few street scenes in the film where cars are shown, and the depiction seemed accurate. By 2054, batteries will be much better than they are today, meaning higher energy density, lower costs, faster recharge times, and slower wear-out rates. It will be a mature technology that average people won’t consider “weird” or “special.” Instead, it will be the norm (“electric cars” will just be called “cars”), and the vast majority of passenger vehicles (and possibly commercial vehicles) in 2054 will use batteries instead of fossil fuels.
Whatever niche advantages that internal combustion engines still hold in 2054 will be so minimal that it will only be worth buying them in very special cases. This will significantly improve air quality, ease global warming, and reduce noise pollution since electric car motors are almost silent. The quality of life improvements will be felt most by people living in cities (imagine a smog-free L.A. or Beijing) and near highways.
Externally, most cars in 2054 will be about the same size and shape as today’s cars since they will still be built to carry human passengers in comfort, safety and style. However, in urban areas, where traffic moves slowly, non-traditional-looking subcompact vehicles designed for no-frills transport of humans or light cargo will be common sights.
By 2054, car ownership rates will be lower than today, and many people will find it cheaper and no less convenient to use self-driving cabs for transportation. Since most car rides are single-person trips to or from work or the local store, it would be more efficient if the self-driving vehicle fleet consisted of more subcompact cars. Laws requiring features like crumple zones and rollbars will be waived for autonomous vehicles meant to transport cargo only, allowing them to be smaller, cheaper, and lighter.
People will still drive their own cars. All the cars that we get close looks at in the film have steering wheels, and in the big chase scene where Karas goes after a suspect, there’s a lot of classic gear-shifting, grimacing, and stiff turning of steering wheels to ram other cars or careen off-road. This is somewhat accurate for 2054.
Self-driving cars will be old technology by then, and most of the vehicle fleet–particularly in developed countries like France–will consist of self-driving vehicles. It will be rarer for adults to have driver’s licenses than it is today due to a lack of any need for one. However, I think many humans will still choose to drive their own cars, mostly for pleasure (for this same reason, some people today like riding motorcycles or driving stick-shift sports cars when a basic, automatic transmission sedan would transport them just as well), but in some cases due to bona fide occupational or lifestyle needs. However, even human-driven cars will still make heavy use of AI for the sake of safety, and the cars might override human attempts to drive recklessly.
But it might be possible to turn the AI off, in which case you could speed down the highway, ram people, and drive the wrong way. And thanks to that possibility, the police will have a professional need for driver’s licenses and for full control over their patrol cars so they can break traffic laws during pursuits. And so…yes, even in 2054, high-speed car chases like the one shown in the film will still be happening.
Wall-sized computer monitors will exist. In the police headquarters, there’s a “command center” room whose walls are covered with giant computer monitors. The central area of the room also has several personal computer terminals, whose monitors can be shared with the main wall monitors. Karas and his colleagues use the room to go through mugshots of potential suspects and to watch surveillance videos together. Wall-sized computer/TV monitors will be old technology by 2054. In fact, TV screens that take up entire walls of houses and offices should become common by the end of the 2030s. The screens will probably be thin, flexible, and installed as if they were wallpaper.
By 2054, the screens will probably be capable of displaying ultra high-res holographic images that seem to pop out at the viewer. Many of the characters in Renaissance were in their 20s, meaning they were born too late to have known what the world was like when TVs and computer monitors were discrete, relatively small objects, and not every seemingly inanimate wall could suddenly come to life with moving pictures and interact with you. This is just one example of how technology will become increasingly invisible yet omnipresent as time passes–ever-more integrated into our surroundings and bodies.
People will have enhanced eyes with HUDs and the ability to see through solid objects. Karas has technologically enhanced vision that lets him see simple shapes and alphanumeric characters overlaying things in his field of view (ex – people have circles around them), and that lets him see ghosted silhouettes of people who are fully or partly obscured by solid objects, such as an armed bad guy hiding behind a tree trunk. His eyes look normal, so the abilities must be thanks to contact lenses or devices implanted inside his eyeballs. These enhanced vision capabilities will exist in 2054. Several different technologies are being represented here, so let me parse them out.
First, Karas must have cameras on his person that are continuously scanning his environment, and which are able to quickly recognize what they see. Circles are displayed around people because the image recognition algorithms in Karas’ personal devices know what humans look like. As Facebook’s face detection algorithm demonstrates every time you upload photos of people, computers are already excellent at recognizing distinctively human features in photographs. Getting them to make those identifications in camera video feeds is simply a matter of increasing the processing speed of the same algorithms. After all, a video feed is nothing more than many still photos presented in quick succession. I have no doubt that portable personal computing devices will be able to do this by 2054.
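In fact, you can already build a crude version of the “circles around people” HUD with today’s off-the-shelf tools. The sketch below runs OpenCV’s stock pedestrian detector on a webcam feed and draws a circle around each detection; a 2054 device would use far better models, but the structure would be the same.

```python
# Today's crude version of the "circles around people" HUD, using OpenCV's
# stock pedestrian detector on a webcam feed.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)                      # any camera feed works here
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        center = (x + w // 2, y + h // 2)
        cv2.circle(frame, center, max(w, h) // 2, (0, 255, 0), 2)  # HUD circle
    cv2.imshow("HUD", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```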
Second, Karas’ augmented vision device allows him to “see through” solid objects, mainly to spot bad guys he’s trying to shoot. Such obstructing objects include a large concrete sculpture and a thick tree trunk. Your first guess about how he is able to do this is probably “heat vision,” and it is wrong. Thermal vision cameras can’t actually see through solid objects. Being able to see non-visible portions of the light spectrum like infrared and ultraviolet is also unhelpful since they can’t pass through large solid objects, either. Radio waves would pass through the object and the person, so you wouldn’t get useful information about what was on the other side.
I think what’s really going on is Karas is not actually seeing through solid objects: his visioning device is using camera footage of his surroundings to rapidly build a 3D model of the room–including the places where people are standing–and then superimposing virtual images of human silhouettes over solid objects to give him an idea of where people are hiding as they become obscured by those objects. Whenever he has a clear line of sight to someone, Karas’ devices note their location in 3D space, and continue displaying their last known location as a silhouette even if they become hidden from view by a large object. In cases where people’s bodies are only partly concealed by objects, Karas’ device builds a partial silhouette of the hidden part of their body based on their posture, biomechanics, and the bilateral symmetry of the human body. This capability would require similar visual pattern recognition technology as the HUD, and portable, personal computing devices will be able to support it by 2054.
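The bookkeeping behind those ghost silhouettes could be as simple as remembering each person’s last confirmed position and continuing to render it, flagged as a ghost, once they drop out of view. A minimal sketch, with invented IDs, positions, and timeout values:

```python
# Sketch of the "ghost silhouette" logic: remember where each tracked person
# was last seen, and keep drawing them at that spot (flagged as a ghost) when
# they disappear behind an obstacle. IDs, positions, and timeouts are invented.

import time

class GhostTracker:
    def __init__(self, ghost_timeout=10.0):
        self.last_seen = {}              # person_id -> (x, y, z, timestamp)
        self.ghost_timeout = ghost_timeout

    def update(self, detections):
        """`detections` maps person_id -> (x, y, z) for people currently visible."""
        now = time.time()
        for pid, pos in detections.items():
            self.last_seen[pid] = (*pos, now)

    def render_list(self):
        """Everything the HUD should draw: live contacts plus recent ghosts."""
        now = time.time()
        out = []
        for pid, (x, y, z, t) in self.last_seen.items():
            if now - t <= self.ghost_timeout:
                kind = "ghost" if now - t > 0.2 else "live"   # stale = ghost
                out.append((pid, (x, y, z), kind))
        return out

tracker = GhostTracker()
tracker.update({"gunman_1": (4.0, 0.0, 7.5)})   # seen standing by the tree
time.sleep(0.3)
tracker.update({})                               # he steps behind the trunk
for pid, pos, kind in tracker.render_list():
    print(pid, pos, kind)                        # -> gunman_1 ... ghost
```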
It’s also possible that Karas’ visioning device makes use of reflected light to “see” people who are hiding behind objects. Several groups of researchers have experimented with different variations of this nascent technique, but they all involve using one or two light emitters to send pulses of light towards a freestanding object, and then carefully analyzing the subsequent patterns of light reflections to piece together what the obscured backside of the object looks like. The pulses of light are invisible to the naked eye. Devices that do this could be man-portable by 2054, though I doubt they will be so small that they could be incorporated into contact lenses or eye implants. Something the size of a gun scope is more realistic.
Third, Karas is able to have his enhanced vision without wearing bulky goggles or even thin-framed glasses. The virtual images must therefore appear in his field of view thanks to either augmented reality contact lenses or eye implants. While computers and cameras will be much faster, smaller, and better in 2054, I doubt something as small as a contact lens or eye implant could do all of this computation. Powering the devices would be another major problem, even if they had integral batteries 10x as energy-dense as today’s, and so would heat dissipation: the waste heat generated by the battery and processor could literally burn your eyes out.
With these impracticalities in mind, I think Karas must have some other, larger computing device on his person–perhaps just a smartphone in his pocket–that does all the data processing and contains a power source for all his worn devices. Data and electricity would be shared through a local area network (LAN): The smartphone would receive wireless video feeds and other data from tiny cameras and sensors Karas had embedded in his clothing or maybe in his eye device, the smartphone would then do the image analyses described in this section, and then it would beam data signals and electricity to Karas’ eye devices, telling them what virtual images to overlay over his field of vision. This way, the eye devices wouldn’t get hot and wouldn’t need integral batteries of their own. A real-world 2054 scenario might also involve Karas wearing more substantial sensor devices, like something attached to his pistol or integrated into some type of headwear, to collect the scanning data.
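As a sketch of that division of labor, the snippet below fakes the wireless link with plain function calls: the worn cameras stream frames to the pocket device, the pocket device does all the analysis, and the eye device only ever receives a handful of tiny drawing instructions. Every name here is a hypothetical placeholder of mine, and a real system would use a low-power radio protocol rather than function calls.

```python
# Sketch of the offloading split: the worn cameras and eye device stay
# "dumb," while a pocket device does the heavy processing and sends back
# only small overlay instructions.
from dataclasses import dataclass

@dataclass
class Overlay:
    shape: str              # e.g. "circle" or "silhouette"
    x: float                # position in the wearer's field of view (0..1)
    y: float
    label: str = ""

def phone_process(frame_bytes: bytes) -> list:
    """Runs on the pocket device: all heavy image analysis happens here.
    In a real system the detection/tracking code sketched earlier would run
    in this function; a canned result keeps the sketch short."""
    return [Overlay(shape="circle", x=0.62, y=0.40, label="person")]

def eye_device_render(overlays: list) -> None:
    """Runs on the contact lens / implant: no analysis, no big battery,
    just drawing the few primitives it was told to draw."""
    for o in overlays:
        print(f"draw {o.shape} at ({o.x:.2f}, {o.y:.2f}) '{o.label}'")

def main_loop(camera_frames):
    for frame in camera_frames:          # frames stream in from worn cameras
        overlays = phone_process(frame)  # kilobytes of video go out...
        eye_device_render(overlays)      # ...a few dozen bytes come back

main_loop([b"fake-jpeg-frame"] * 3)
```

The point of the split is that the component touching your eye does the least work possible, which is also roughly how real-world smart glasses prototypes that tether to a phone have been architected.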
Finally, let me point out that augmented reality glasses could do all of this without a LAN, and glasses will be old tech by 2054. The Avalon corporate thugs wore goggles that also gave them augmented vision, including telescopic zoom ability. They also had sensitive, directional microphones somewhere on their kit, which, along with the goggle zoom, allowed them to spy on Karas from long distances.
Holodecks will exist. After being abducted, Ilona is imprisoned inside a medium-sized room that is similar to a holodeck from Star Trek. From a different room, her mysterious captors can use a desktop computer to change the appearance of the room to simulate different environments. When the “forest” environment is selected, the room’s bare white walls, floor and ceiling change in appearance accordingly: virtual grass and trees sprout from the ground, and in the distance, there only appears to be more vegetation.
While the holodeck’s operating principles are never explained, I think it is based on the same 3D hologram technology that has replaced paper cards and rectangular tablet computers in the film. And as I said before, 3D holograms that float in fixed points in space are impossible. However, a similar effect could probably be achieved by covering the walls, floor and ceiling with the paper-thin displays that could show holographic moving pictures that seemed to pop out at the viewer. Tiny cameras could track the gaze and posture of the person inside the holodeck, and continuously adjust the pictures being displayed on the room’s giant displays to compensate for changes to their visual perspective resulting from their movement. However, even if you could get this to work, the holodeck user experience would be severely limited since you wouldn’t be able to walk far before your face hit a wall, which would ruin the illusion (at one point, Ilona runs around her holodeck prison in frustration but implausibly, doesn’t hit anything).
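For what it’s worth, the core trick such a room would need already exists in CAVE-style VR installations, and the math is simple enough to show. Given the tracked position of the viewer’s eyes and the corners of a wall-sized display, you recompute an off-axis projection every frame so the flat wall behaves like a window onto the virtual forest. Below is a small sketch of that calculation (the “generalized perspective projection” described by Robert Kooima); the room dimensions in the example are made up.

```python
# Sketch of head-tracked, off-axis projection: the wall is a flat display,
# but knowing where the viewer's eyes are lets you recompute the projection
# every frame so the image lines up with what a real window would show.
import numpy as np

def off_axis_frustum(pa, pb, pc, pe, near):
    """pa, pb, pc: lower-left, lower-right, upper-left corners of the wall
    display (world coordinates, metres). pe: tracked eye position.
    Returns (left, right, bottom, top) frustum extents at the near plane,
    ready to feed into a glFrustum-style projection."""
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen "right" axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen "up" axis
    vn = np.cross(vr, vu)                      # screen normal, toward viewer
    vn /= np.linalg.norm(vn)
    va, vb, vc = pa - pe, pb - pe, pc - pe     # eye -> corner vectors
    d = -np.dot(va, vn)                        # eye-to-wall distance
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# A 4 m x 3 m wall, with the viewer standing 1.5 m away, off to one side:
print(off_axis_frustum(pa=(-2, 0, 0), pb=(2, 0, 0), pc=(-2, 3, 0),
                       pe=(1.0, 1.7, 1.5), near=0.1))
```

Note that this only works for one tracked viewer at a time, which is another practical limit on a Renaissance-style holodeck: a second person standing in the room would see a subtly warped forest.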
The whole floor could be an omnidirectional treadmill whose surface was made of a flexible holographic display, but even in 2054, that setup is going to be very expensive. In 2054, for full-immersion virtual reality experiences, it’s going to be much cheaper and better to use VR glasses, earpieces, and maybe a tactile body suit, and at the rate things are going, I’m sure all of those will be mature technologies by then.
To summarize: By 2054, it will be possible to make virtual reality holodeck rooms where you could experience some environment like a forest, but it won’t look as good as what was in Renaissance, actually exploring the environment by walking around will be problematic, and there will be very few holodecks because there will be better ways to access virtual reality.
Cell phone implants will be in use. Karas wears a nickel-sized device behind his right ear that is embossed with the “Motorola” symbol and serves as a cell phone by transmitting telephonic sounds to him. Whenever someone calls him on the phone, he hears their voice in his head.
The device is worn in the same place as real-life bone-anchored hearing devices for people with hearing problems, so it probably works via the same principle of conducting sound waves through the skull into the inner ear. There might even be a direct wire link to the auditory nerve. Karas removes it by simply pulling it off with his fingers, which makes me think the device has two parts: one has been permanently installed in his body via skull surgery, and the other is the removable circular piece, which probably contains the power source, microphone, and maybe computer processors. The detachable piece could be held on by magnets or an advanced adhesive, though keeping it from being accidentally knocked off by your shirt or jacket collar rubbing against it could be a very hard engineering problem.
While this technology is feasible for 2054, the fact that it requires a hole to be drilled into your skull will hold back its widespread adoption until we have developed very advanced surgical methods that are also very cheap. Don’t expect that until long after 2054. However, it’s conceivable that implants might be better than worn devices like Bluetooth earpieces and hearing aids–especially if they directly interface with human auditory nerves–and as such could come into common use among police officers, soldiers, spies, and other elite people whose professions directly benefit from having heightened senses. Small numbers of those people might have implants.
In 2054, it’s much more likely that people who want to do hands-free phone calls will buy removable earpieces, like today’s Bluetooth headsets.
People will do video calls all the time. Karas’ hearing and vision devices let him do several video calls with his boss and colleagues. He hears their words through his hearing device, and sees their faces in front of him as ghosted HUD footage thanks to his eye devices. (Presumably, the people on the other end have webcams pointed at their faces.) So, while Karas is walking down the street running errands, he’s also seeing his boss’ semi-transparent head floating in front of him and hearing her voice in his head. To other people on the street, he seems to be talking to himself when he’s actually talking to her. (Telling schizophrenics apart from normal people will be that much harder in the future.)
The technology of 2054 will make this scenario possible, though I doubt people will use it much since there’s usually nothing to be gained from seeing the other person’s face. In fact, it often makes interactions less pleasant and more unwieldy, especially when you’re conversing with your naggy boss or an emotional colleague. Many people also want to stay unseen due to insecurities about their looks.
People have already shown a preference for minimalism in digital communication, with texting increasingly replacing audio phone calls. There’s no reason to assume this trend will flip in the future and that people will want to do video calls for every small thing.
A cure for aging will have been found. A crucial plot twist happens when Karas discovers Ilona had made a breakthrough in her anti-aging research right before she was kidnapped. The full details are never revealed, but it is said to be some kind of gene therapy that halts the aging process in humans. Such a thing would radically extend human lifespan, though it wouldn’t make humans truly “immortal” since we would still die from causes other than aging, like infectious diseases, accidents, murders, and suicides. I doubt such a cure will be found that soon, but lifespans will still be significantly longer in 2054 than today, and part of the gain will probably owe to drugs that slow, but don’t stop, the aging process. Some lifespan gains will also come from technologies allowing the replacement of worn-out organs.
From what little we know about the aging process and its complexity, it is already obvious that there will never be a simple, one-shot cure for it. Instead, a combination of many different technologies (in situ stem cell therapies, organ cloning, synthetic organ implantation, maybe brain transplants into newer bodies) will extend life and then, in the very long run, defeat aging and death. I don’t expect that until well into the 22nd century.
There will be transparent floors. In Renaissance Paris, many of the city’s highways have glass enclosures built around them, effectively turning them into tunnels. Pedestrians can walk over the flat roofs of those tunnels and see the cars below. Some underground Metro stations also have glass ceilings that function as glass floors for people walking above, at street level.
It’s an interesting infrastructure idea that actually has merits beyond just being aesthetically pleasing. Enclosing the roads like that improves safety for both drivers and pedestrians since there’s far less risk of someone walking into the roadway. The highway is also no longer a barrier to human movement, which improves the walkability and potential uses of the topside space. The glass enclosures also contain the road noise and any air pollution the vehicles might be making (the tunnel air could be run through filters). The fact that the glass lets natural sunlight into recessed highways and Metro stations that would otherwise be artificially lit is also of psychological benefit to users of both.
The only problem with this idea is that it would give perverts easy views up ladies’ skirts. Of course, that could be fixed by slightly frosting over the glass or by incorporating distorting undulations into the material, as is commonly done with glass building blocks today.
It’s very possible that, by 2054, we could have discovered some transparent material that exceeds glass’ strength and cost performance to such an extent that it would be economical to use as a building material, as it was in the film. It would be a desirable feature in stylish cities like Paris.
I thought I’d take a break from killer robots and Ray Kurzweil to write a summary of a book I recently read, interspersed with my own thoughts (which are in square brackets). Though at first glance the book Amusing Ourselves to Death might seem out of place on this blog, it focuses on technology (specifically, television) and its effect on 1980s culture.
My interest in this book was piqued about two years ago when I started hearing it mentioned for its alleged prescience in predicting the rise of today’s frenetic social media culture and “cancel culture.” After reading it, it’s clear that many of the defects of 1980s TV culture have carried over to 2020s internet culture, and in that sense, it is prescient. However, the book is in equal measure a time capsule documenting a defunct era, and as such, it serves as a useful contrast with the present era, and helped me to see how the shift in the dominant technological medium (from TV to computer/internet) has changed American culture and behavior.
Doing this led me to some unhappy realizations, which I invite you to read in the square brackets rather than in a summary here, as this isn’t that long of a blog post.
Chapter 1 – The Medium is the Metaphor
American culture is now focused on amusement. Politics, religion, and social discourse are presented to Americans as entertainment products.
Some proof of this is evident if one considers that the U.S. President at the time of the book’s publication, Ronald Reagan, was a former movie actor. Other presidential candidates were also former TV personalities.
To win a U.S. Presidential election, a candidate must be telegenic. This attribute is just as important as others that are much more critical for the position, like intelligence.
Through studying their audiences, news media companies discovered that viewers would watch news programs more regularly and for longer periods if the newscasters were telegenic. This is why newscasters are now almost universally good-looking and well-spoken.
There are different mediums of communication, and each medium has unique characteristics that determine which types of content it can convey. Each has strengths, weaknesses, and limits:
Smoke signals can’t convey complex ideas, so it is impossible to use this medium to discuss philosophy.
Television (TV) is a visual medium, so it conveys ideas and stories principally through images and not through words. Things that look unappealing evoke negative reactions from viewers. As a result, an obese man like William Taft couldn’t become President in today’s era of political commercials and televised debates, even if he were in fact the best-suited candidate for the position. [Donald Trump won the 2016 election in spite of being obese and physically ugly in other ways. However, his prowess as a showman overcame those deficits, at least among a sufficient number of American voters to secure him a narrow victory.]
This book’s core thesis is that TV is fundamentally unsuited as a medium for the complex discussion of ideas.
The telegraph brought the “news of the day” into existence: Instant communication allowed everyone to be aware of events everywhere else on the planet, which might sound like a good thing, but hasn’t been because of how the new information has been used. Most of the information presented in the “news of the day” is irrelevant to any particular consumer because it has no impact on him and/or because he can’t exert any influence on the people and events described in it. Stories that fill the news of the day are also usually presented without enough context for consumers to understand them or to draw the proper conclusions from them.
The author, Neil Postman, met Marshall McLuhan, and some of the latter’s ideas influenced this book. However, Postman also disagrees with McLuhan on some points.
A culture’s dominant communications medium will determine how it thinks. America is a TV-dominated culture. [As of the time this analysis is being written, America is well into a transition to being an internet-dominated culture, which is even more hostile to intelligent discourse and maturity. Neil Postman died in 2003, before the invention of smartphones and before the rise of social media, internet celebrities, “curated realities,” and “echo chambers,” and I think he’d view today’s situation as even worse than it was in the 1980s.]
The advents of past technologies have changed how humans think, and expanded what we were capable of imagining.
The invention of clocks changed the human relationship with time. Seasons and the sense of eternity lost importance once people had an accurate, finely gradated way to measure time.
The invention of writing allowed humans to synthesize more complex ideas. Once written down, ideas can be studied, their flaws found, and the ideas either rejected or revised.
Writing also allowed ideas to spread faster and more widely, since they persisted over time and could be received by more people.
America is transitioning from a print/writing culture to a visual culture.
Chapter 2 – Media as Epistemology
TV has made American public discourse silly and dangerous.
The medium determines what is considered to be “true.” Proof:
Oral cultures that lack writing systems rely on proverbs and sayings to remember what is “true” or “right.” “Haste makes waste” is a good example. In oral cultures, such sayings are more widely known and taken more seriously.
A relevant anecdote from when the author was examining a Ph.D. dissertation: ‘You are mistaken in believing that the form in which an idea is conveyed is irrelevant to its truth. In the academic world, the published word is invested with greater prestige and authenticity than the spoken word. What people say is assumed to be more casually uttered than what they write. The written word is assumed to have been reflected upon and revised by its author, reviewed by authorities and editors. It is easier to verify or refute, and it is invested with an impersonal and objective character…The written word endures, the spoken word disappears; and that is why writing is closer to the truth than speaking. Moreover, we are sure you would prefer that this commission produce a written statement that you have passed your examination (should you do so) than for us merely to tell you that you have, and leave it at that. Our written statement would represent the “truth.” Our oral agreement would be only a rumor.’
[One of the worst aspects of social media is that content can be produced and circulated instantaneously, like speech, but that it persists permanently, like writing. As a result, social media is awash in impulsive utterances that unfairly destroy careers and lives in seconds.]
The ancient Athenians considered “rhetoric,” the persuasiveness and emotion of an oral performance, to be the best measure of its truthfulness. Good public speaking skills were prized personal attributes. [The problem in elevating this to such a high level of cultural importance is that it is entirely possible for a person to be persuasive and dishonest at the same time. The quality or truthfulness of an idea shouldn’t be judged based on how well the person espousing it can debate or think on his feet. A responsible citizen takes the time to study all sides of an issue alone and to make a dispassionate judgement, and doesn’t let himself be swayed by someone who is skilled in manipulating his emotions or forcefully presenting only one half of the story. “You have to convince me” is a lazy and unintellectual stance.]
Side note: In spite of their seminal contributions to Western civilization, the philosophers of ancient Greece made the monumental error of assuming that all knowledge could be gleaned through deduction. In other words, starting with a handful of facts that were known to be true, they believed they could use reasonable assumptions to discover everything else that was true. This was a fundamentally anti-scientific way of thinking that stymied them, as it led them to believe that new knowledge didn’t need to be gained by running experiments.
Different types of media put different demands on people, leading to those people forming different values:
In oral cultures that lack writing, people value the ability to easily memorize things, and the better your memory, the smarter you are perceived as being.
In print cultures that have writing, having a good memory is much less important since any person can look up nearly any piece of information. Being able to memorize and recite facts is useful for trivia. People who are able to sit still for long periods in silence reading books, and who can easily absorb the things they read, are perceived as being smart.
Different types of media encourage and nurture different cognitive habits.
TV is an inferior medium to print when it comes to conveying serious ideas.
However, the TV medium has some positive attributes:
Having a moving, talking image of another human being in the room with you can provide emotional comfort. TV makes the lives of many isolated people–especially the elderly–slightly better.
Films and videos can be highly effective at raising awareness of problems, like racism and social injustice. [The implication is that seeing a lifelike image of someone else suffering is more emotionally impactful than merely reading about it or listening to a third person speak about it.]
Chapter 3 – Typographic America
In 1600s New England, the adult literacy rate was probably the highest in the world. The region was heavily Protestant, and their faith emphasized the importance of reading the Bible to have a more direct relationship with God, so literacy became widespread.
England’s literacy rate was slightly lower than New England’s.
New Englanders also valued schooling, which is another reason why literacy rates were high.
Even among poor colonial New Englanders, literacy rates were high, and reading was a common form of recreation.
The political essay “Common Sense” was published in 1776 as a short book that could be bought cheaply. The percentage of Americans that read it within the first few months of its publication was comparable to the share of Americans who watch the Super Bowl today.
Newspapers and pamphlets were more widely read in colonial America than they were in Britain.
By 1800, the U.S. was a fully “print-based” culture. Even in poor parts of the South, literacy rates were high and reading was a common daily activity. The best American authors were as famous then as movie stars are now.
Attending public lectures also became a popular pastime, and by 1830, there were 3,000 lecture halls in America. Average people commonly went to local lecture halls after work to see presentations about academic subjects, as well as to see debates.
The fact that the U.S. was founded by upper-class, intellectual people helped establish the country’s literary culture.
The printing press made epic, lyrical poetry obsolete.
Chapter 4 – The Typographic Mind
In 1858, U.S. Senate candidates Abraham Lincoln and Stephen Douglas toured Illinois together and held public debates with each other over the subject of slavery. The events took place in seven cities, were well-attended, and each went on for hours. They became known as the “Lincoln-Douglas debates.”
The transcripts of the Debates still exist, and show both men were extremely gifted orators. Their statements were information-dense and assumed a high level of knowledge on the part of listeners; none of what they said was dumbed down. The fact that average people who attended the Debates could understand them indicates that the Americans of the 1850s had better attention spans, listening skills, and probably reading comprehension skills than Americans today. Such are the advantages of being in a print-based culture. [Note that this book was published in 1986, and there’s a widespread belief among Americans now, in 2021, that the internet and personal computing devices have made those three attributes even worse.]
Today’s TV culture promotes stupidity and stupid thinking, by comparison.
The Lincoln-Douglas debates were civil and complex, and so were their audiences. While the Debates were of course conducted orally, much of what was said came from written notes.
By its nature, writing must always convey some kind of proposition. [Meaningless writing might take the form of a series of random words, or bad poetry that no one can understand.] Thus, a print-based culture encourages meaningful and intelligible discourse.
‘Thus, reading is by its nature a serious business. It is also, of course, an essentially rational activity. From Erasmus in the sixteenth century to Elizabeth Eisenstein in the twentieth, almost every scholar who has grappled with the question of what reading does to one’s habits of mind has concluded that the process encourages rationality; that the sequential, propositional character of the written word fosters what Walter Ong calls the “analytic management of knowledge.” To engage the written word means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning. It means to uncover lies, confusions, and overgeneralizations, to detect abuses of logic and common sense. It also means to weigh ideas, to compare and contrast assertions, to connect one generalization to another.’
[I independently came to the same conclusion years ago. Written communication’s great advantage is that it forces a person to reflect upon his own thoughts and to organize them into a rational form. This is crucially important since raw human thinking is chaotic, fragmentary and impulsive. This also leads me to believe that mind-reading technologies that allow people to share thoughts will have major downsides. Having direct access to another person’s inner monologue in real time could be confusing and lead to strife as you became aware of every fleeting thought and uncontrollable impulse they had. In most cases, it would be preferable to wait a little longer for them to convert their thoughts into spoken or written words.]
Reading and writing require and encourage grounded, meaningful, analytical thinking. Watching TV does not.
By necessity, writing must be orderly, so reading encourages orderly thinking. It even promotes more orderly verbal discourse between people.
It’s no coincidence that the Age of Reason happened while print culture was at its peak in the West:
Rise of capitalism
Rise of skepticism of religion
Divine right of kings rejected
Rise of idea of continuous progress
Rise of an appreciation for the value of mass literacy
Early American theologians were brilliant, literary men who valued education, including in secular subjects. Congregationalists founded many important universities that still exist.
The different effects of print culture and TV culture on religious discourse are evident if one compares the sermons and religious essays of Jonathan Edwards with those of Jerry Falwell. Edwards’ ideas are complex and logically argued, whereas Falwell’s are simpler and designed to play on the listener’s emotions.
Newspaper ads were originally linear and fact-based. During the 1890s, they changed so as to be amusing and to appeal to consumers’ emotions. The Kodak camera ad featuring the slogan “You Press the Button, We Do the Rest” was the first “modern” ad.
Though no one knew it at the time, this was a bad milestone for print culture, as it marked the dawn of an age when printed words and images would be crafted to manipulate emotions and human psychology, rather than to appeal to reason and to present complete ideas.
Without televisions or even many photos (even in newspapers), Americans in the 1800s knew famous people through their writings and ideas. Few would even have recognized their own President on sight. By contrast, because today’s TV culture is visual and disjointed, we know famous people by their faces and soundbites. [Videos of American political activists being interviewed on the street and asked to name one accomplishment or policy stance of their preferred Presidential candidate attest to this. Often, a person waving around a political placard with a politician’s face on it can’t describe what that politician stands for or plans to do if elected.]
‘To these people, reading was both their connection to and their model of the world. The printed page revealed the world, line by line, page by page, to be a serious, coherent place, capable of management by reason, and of improvement by logical and relevant criticism.
Almost anywhere one looks in the eighteenth and nineteenth centuries, then, one finds the resonances of the printed word and, in particular, its inextricable relationship to all forms of public expression. It may be true, as Charles Beard wrote, that the primary motivation of the writers of the United States Constitution was the protection of their economic interests. But it is also true that they assumed that participation in public life required the capacity to negotiate the printed word. To them, mature citizenship was not conceivable without sophisticated literacy, which is why the voting age in most states was set at twenty-one, and why Jefferson saw in universal education America’s best hope. And that is also why, as Allan Nevins and Henry Steele Commager have pointed out, the voting restrictions against those who owned no property were frequently overlooked, but not one’s inability to read.’
Chapter 5 – The Peek-a-Boo World
In the mid-1800s, two ideas changed American discourse: 1) instant communication (e.g. – speed of ideas and news no longer limited by how fast a person can travel), and 2) the birth of photography.
The telegraph and Morse Code unified and redefined public discourse in the U.S. Previously, the vast majority of news Americans knew of was about local events and local people. It was directly relevant to them, and they could exercise some influence over it. However, the telegraph made it possible for people to hear about events and people from the far-flung corners of the planet, instantaneously. This exponentially increased the quantity of “news” content average Americans were exposed to. However, the vast majority of this new information was irrelevant to them, was about people and things they couldn’t control, and was usually presented without enough contextual information.
Before the telegraph, news was presented rationally, and was about urgent things that had some direct impact on the people receiving the news. After the telegraph, the news largely consisted of irrelevant information that profit-hungry news media companies picked for shock value and entertainment value.
For proof of this, ask yourself the following questions:
Aside from weather reports, when is the last time a news story that you heard or read about in the morning convinced you to change your plans for the day, or to take some kind of action you wouldn’t have otherwise taken?
When is the last time something you learned from a news report helped you to solve a problem in your everyday life?
The news is mostly trivia. Like sports, it gives people something to talk about, but has no tangible use.
The telegraph created an “information glut” across the world, for the first time in history. However, most of the information has never been useful to most of the people receiving it.
The information glut also changed the cultural definition of what counts as a “smart” person. Smart people are now those who have a very broad but shallow knowledge of disconnected things, most of which are irrelevant to everyday life.
Before the telegraph, the stereotypical “smart” person was one who had deep, contextualized knowledge about a small number of topics. Also, people sought out information for its usefulness to them, they were not awash in a sea of useless information.
[But by this logic, weren’t many of the attendees to the Lincoln-Douglas debates “wasting their time” since they spent hours listening to two men talk about a subject that had no bearing on their daily lives since Illinois was not a slave state and none of them had black friends? The institution of slavery didn’t directly affect them, so wasn’t the subject mere trivia for them? Learning about and talking about things that have no relevance to the needs of the moment, and that affect people different from you is basic civic engagement, and not doing it is just as damaging to a culture as having everyone watch foolish TV programs all day. Though the author could surely render a satisfying answer to this paradox if he were alive, he doesn’t do so in the book, which is a mark against it.]
Photography is a shallow medium since it can’t convey internal states or depict meaning with the same depth as the written word. [I don’t fully agree. Also, recall that the author praised the TV medium’s effectiveness at raising awareness of problems, like racism and social injustice, by depicting human suffering in a way more visceral than the written word. Well, a video is nothing but a series of photographs showed in rapid sequence, so why shouldn’t it be true by extension that photography has the same virtues as video? After all, there are countless, famous photographs that have raised the public’s consciousness about important social issues and tragedies.]
It can also be a deceptive medium since photos can remove images of events and people from their contexts. Like the telegraph, it presents an atomized vision of reality where context is missing. [As an amateur photographer, I strongly agree with this. Walking around on a normal day, and in a not particularly interesting or unusual place, it’s quite possible to take snapshots of objects, people, and landscapes that, thanks to some trick of the lighting, camera angle, or momentary facial expression from a subject, look dramatic or emotionally evocative, and don’t portray what that scene really looked like or felt like to the people who were there at that moment. Black-and-white photography’s stylized appearance and the often-coarse appearance of developed film lends itself particularly well to this.]
It was soon found that news articles and ads that included photos were more eye-catching to people than those without.
“Pseudo-context” refers to how news publishers structure their articles to make them seem relevant and coherent to consumers, when in fact they have neither of those qualities. It’s a deception meant to hide the fact that consumers are being exposed to vast amounts of disconnected stories and facts about irrelevant things.
“Pseudo-events” are events that are deliberately staged to be reported upon by the news media, and in a way that benefits the people who have staged it. Press conferences and speeches to supporters are common examples. Pseudo-events have the superficial trappings of being important and significant, but they actually convey little or no useful or new information. Daniel Boorstin coined the term “pseudo-event” after observing the phenomenon.
[From other research, I found useful contrast between a “real” event with real consequences, and a pseudo-event that merely gives off the impression of being consequential: If the owners of a hotel want to boost their establishment’s value and appeal to customers, a legitimate strategy would be to improve some aspect of the hotel or their operations. This might involve hiring a better chef, installing new plumbing, or repainting the rooms, and then publicly announcing that the changes had been made. An alternative strategy, which could be just as effective at boosting profits, would be to hold a “pseudo-event” in the form of a banquet celebrating the hotel’s 30th anniversary. Important members of the community would be invited and praised, the owners of the hotel would make speeches about how it had somehow served the community, and members of the media would be invited and would almost certainly publish glowing news stories about the event. The perception that the hotel was better and more important than it actually was would be created in the minds of news consumers.]
[Thanks to social media and the proliferation of cable TV channels, we now have what could be called “pseudo news” shows, which superficially resemble respectable, traditional news broadcasts since they have charismatic presenters and move from discussing one recent event or pressing issue to the next, but which are actually entertainment and/or editorialization shows. Real events are brought up, but discussed in misleading ways. The viewer walks away from such a show thinking they are now well-informed, but in fact, they might have been better off not watching the show and never hearing about the event at all.]
Thanks to the information glut, we live in a “peek-a-boo” world full of nonsensical things that are presented to us in entertaining ways.
As a medium, television takes the worst and most distinctive elements of telegraphy and photography to new extremes. TV content is even more decontextualized, deceptive, irrelevant, and slanted towards amusement and shock value.
America now has a “TV culture,” whose features are antithetical to the nation’s former print culture. The deficiencies of TV as a medium make it fundamentally unsuited for supporting intellectual thinking or discourse.
Chapter 6 – The Age of Show Business
TV culture attacks literary culture.
[Why does the author skip a discussion of radio culture by jumping from print culture to TV culture?]
American-made TV and film content is a major export. People in other countries consider it more entertaining than their own content. U.S. TV shows and films are more emotionally evocative, visually stimulating, and entertaining. [My years of traveling to other countries confirm this is true. In spite of how hollow and socially corrosive American pop culture is, it excels like none other at appealing to humans across the world. Additionally, the most successful TV shows and films indigenous to other countries usually copy elements from their American counterparts.]
All TV content is presented as entertainment. Even somber news shows are glitzy and entertaining.
The 1983 broadcast of the TV film The Day After was the most prominent attempt to use the TV medium for a serious, intellectual purpose. The film is a docu-drama about a nuclear war between the U.S. and U.S.S.R., and is jarring and disturbing to watch. The national broadcast was presented without commercial interruption, and was punctuated by comments from a panel of well-known American intellectuals including Carl Sagan and Henry Kissinger. Nonetheless, the broadcast failed in its attempt to foster meaningful discussion or insight into the topic, due to the limitations of the TV medium.
For example, the members of the panel never had a real “discussion” with each other–they delivered prepared talking points and avoided deeply addressing each others’ ideas.
A fundamental problem with TV as a medium is that people come across as stupid and/or boring if they pause to think about something, or if they appear uncertain about something. The medium is friendly to people who can give quick responses and who come prepared with rehearsed performances. Hence, TV is unconducive to most intellectuals and to “the act of thinking.” [This is extremely unfortunate, since the best ideas typically come after considerable time spent thinking, and since many great thinkers are not also great performers.]
Studies show that people instinctively prefer TV content that is visually stimulating and fast-paced. This means the sorts of TV programs that could be intellectual and serious, like two smart people sitting at a table having a long, focused discussion, are not considered as interesting. Since TV networks are always striving to find content that generates the highest ratings and hence profits, they naturally eschew those kinds of intelligent, serious programs in favor of flashy, entertaining programs.
[The rise of long-format podcasts in the 2010s partly contradicts this.]
In the U.S., all cultural content is filtered through the TV medium, and as such has acquired the negative qualities of typical TV programming. News programs are glitzy, shocking and entertaining when they should be serious, and religious broadcasts are also made to be entertaining rather than contemplative.
Because everything on TV is presented to Americans this way, Americans have come to expect everything to be entertaining:
Legal trials about serious crimes like murder are televised for entertainment and shock value.
Education courses include more and more videos that present subjects as entertainment.
The 1984 Presidential debates between Ronald Reagan and Walter Mondale were nothing like the Lincoln-Douglas Debates. Instead of spending a lot of time deeply discussing and debating a narrow range of related issues, the 1984 Debates only devoted five minutes to each issue, which was an impossibly short amount of time to discuss any of them in depth or for one participant to rigorously cross-examine the other. There was little focus on the candidates’ ideas or logic undergirding their ideas. Instead, it was a contest of who could get out the best “zingers” and who looked better in front of the camera. [The 2016 and 2020 Presidential Debates were infinitely worse.]
Chapter 7 – “Now…this.”
The expression “Now…this” is commonly used on news broadcasts when moving from one story to the next. It forces viewers to stop thinking about one thing and to focus on another. Its use shows how the news is full of disconnected events, people and ideas.
Studies of viewer preferences show that people are more likely to watch news broadcasts that have physically attractive anchors. News media companies have thus gravitated towards hiring attractive anchors to maximize their ratings and hence profits. [The profit motive is behind most of the dysfunctions in TV and internet news.]
Studies also show that humans are likelier to believe something if the person saying it appears sincere. This more intangible but still detectable quality is also used as a basis for hiring and promotion decisions at TV news stations. This is problematic because skillful liars exist, and there’s no reason an off-putting person can’t be speaking the truth about something.
[To be fair, since this book was published, science has learned a large amount about how nonverbal aspects of communication in the forms of facial microexpressions, eye movements, body language, appropriateness of emotional displays, and other unconscious aspects of speech and behavior reveal deceit. In most cases, people’s instincts let them accurately detect dishonesty or malice.]
The fact that news anchors must recite their lines with a more-or-less upbeat tone, even when describing tragedies, lends a degree of unreality to TV news and prevents the TV medium from accurately conveying the sense of tragedy or loss associated with the event. [What’s the alternative? Should news programs relentlessly dwell on every report of a major loss of life so as to make sure viewers end up feeling depressed and disgusted? It’s a big world, and on any given day, a major loss of life or gruesome crime is happening somewhere, and portraying those events in ways that accurately conveyed their impacts would make the daily news too traumatic and emotionally draining for people to watch.]
[The author’s complaint that TV news anchors lack emotional investment in the stories they report on is obsolescent. The internet age has caused the news media landscape to fragment into thousands of smaller outlets catering to highly specific demographics of viewers. The anchors who lead these new programs are guilty of the opposite sin–overinvestment of emotion into their reporting, and to such a degree that any pretense of neutrality (and sometimes, adult maturity) is sacrificed. The inhuman detachment of 1980s TV news anchors has mostly been replaced by excessive outrage, crocodile tears, sanctimony, and sarcasm.]
“Now…this” is also often used as a lead-in for commercials. The seriousness of news broadcasts is undermined by the fact that they are punctuated by commercials, which are usually lighthearted.
TV news shows avoid complexity and move through a diverse range of stories and topics quickly.
Partly as a result of news broadcasts’ deficiencies, Americans are poorly informed about people and events outside of their country.
Again, the features and limitations of TV as a medium of communication alter how news is presented through it. TV news programs will inevitably gravitate towards presenting news content as entertainment, and as a series of disconnected, bite-sized stories. The result is in fact “disinformation” since it leaves viewers with the false impression that watching a news broadcast has made them well-informed about events, issues and important people, when in fact they aren’t.
TV news broadcasts also annihilate the sense that a “past” exists because all they depict is a churning of “present” events. Things that happened in the past are quickly muscled out by a deluge of new things. The perpetual focus on the present moment makes it harder for news consumers to notice lies and inconsistencies, as the news seldom has the time to dredge up older things that a person said or did that proved to be wrong or contradict what they are saying or doing now.
[Again, the internet age has turned the problem on its head. Because every famous person’s quotes and records of their actions are now available on the internet and instantly searchable, it has become easy to find every tasteless statement, lie, and contradiction, and to package them into a bite-sized product like a social media meme. With access to a lifetime’s worth of records, you can make any person look like an evil liar. If the TV culture of the 1980s was one where there was only ever a “present moment,” the new internet culture is one where you can pick whatever moment you want to live in. If you don’t like a specific politician, you can curate your social media and TV news bubble so as to only allow in negative content about them, including every lie or crass statement from decades ago. As a result, this is an age of cynicism and self-righteousness. While the TV news “gatekeepers” of the 1980s had their flaws and biases, they were more sensible and grounded in reality than the multitudes of amateurs who today manufacture biased memes and make extremist podcasts, and define what “reality” is for a large and growing share of the human population.]
Print culture encourages the opposite mindset. Since it is easy to turn pages back and forth in a book or newspaper, readers are aware of context and of the linear order of events, and they can spot lies and inconsistencies by cross-referencing different passages.
Aspects of Aldous Huxley’s dystopia, described in his book Brave New World, now exist in modern America. The government has no need to censor anything because its citizens are so occupied with silly pursuits and so easily misled by corporate-manufactured disinformation that they have no time or interest in uncovering the truth about the world. Specifically, accurate reporting about important events and people can still be found in America, as can thoughtful discourse about every issue and problem, but few Americans pay attention to it, largely because they consider it to be too boring. The market has given Americans what they want, and it is trash TV and dumbed-down news programs.
Even newspapers are mimicking aspects of TV news broadcasts. USA Today is the leading example of this transition.
Radio is more resistant to the transition, but it is declining nonetheless. Radio broadcasts increasingly resemble TV programs, in the worst ways.
Chapter 8 – Shuffle Off to Bethlehem
Televangelists are the new faces of Christianity in America.
Episodes of the 700 Club are slickly made, entertaining, comforting, and superficially serious in tone.
Televangelist shows always focus on the preacher and his personality. God is never the central figure in the broadcasts, and instead exists in the background. Major religious themes like hallowed rituals and achieving transcendence through religion are absent.
Again, the TV medium forces televangelist shows to have these qualities.
The social and psychological meaning of religion in America has changed since people started watching televangelist broadcasts.
Traditional, in-person religious services happen in houses of worship, which are quiet, and, in the case of cathedrals, grand places. The central portions of houses of worship are also only ever used for religious ceremonies. As a result, the environments naturally lend themselves to serious and contemplative thinking among visitors. In a church, a person can really immerse himself in prayer and religious thought, and pull himself out of his everyday mindset. [In the modern era of skyscrapers and technological wonders, many of the old cathedrals of Europe are still awe-inspiring. You can appreciate how those same cathedrals would have made peasants feel the grandeur of God in the Middle Ages, when most people lived in terrible conditions and had very little mental stimulation each day. Yes, the form a religious house of worship takes has a major impact on the psychology of its adherents.]
By contrast, televangelist broadcasts are watched on living room televisions in private homes. The spaces where religious services thus occur are not consecrated, and the viewer does not associate them with anything especially divine or otherworldly. Viewers associate their own TV sets with entertainment and the secular world, which unconsciously affects how they perceive religious shows. It’s nearly impossible to get into the right mindset. [Will full-immersion virtual reality fix this?]
A valuable and authentic religious experience is enchanting, not entertaining.
Chapter 9 – Reach Out and Elect Someone
The TV commercial is now a metaphor for American politics.
Capitalism is an efficient system for allocating resources only if certain conditions exist. One of those conditions is that buyers and sellers are rational, and the other is that they are just as informed as each other about market conditions and the quality of the good or service they are considering exchanging. In reality, these ideal conditions seldom exist.
Modern advertisements, and especially TV commercials, show how reality diverges from theory in ways that encourage capitalist systems to misallocate resources:
In a rational world, companies would only create ads that contained factual information about the quality of their goods and services, and consumers would coolly study different ads to empirically determine which product among the competing companies best satisfied their needs.
In the real world, ads contain little or no factual information about the goods or services being offered, and they are instead meticulously designed to prey upon the emotions, insecurities, and psychological weaknesses of consumers. Thanks to ads, consumers are frequently persuaded to spend money on things that don’t satisfy their actual needs well, or at all, and companies offering superior goods and services can go bankrupt if they don’t market themselves the right way.
[As I’ve mentioned before, and plan to discuss at greater length in a future blog post, this inefficiency could shrink and ultimately disappear in the future thanks to better technology. In the very long run, once posthumans and/or AIs take over civilization, the phenomenon of disingenuous marketing will probably vanish since consumers will be too smart and self-controlled to fall for such tricks. Being prey to one’s uncontrollable emotions and not having the cognitive capacity to remember and mentally compare the qualities and prices of different things will turn out to be uniquely Homo sapien problems.]
In modern America, politicians use TV commercials as their primary means of communicating with voters.
By necessity, commercials must be short, and must tell simple stories about things and offer simple solutions to problems. Years of seeing political commercials like these have shaped the expectations of American voters.
To succeed, modern politicians need “image managers,” and they must have personal appeal that comes across clearly on TV. Elections are no longer decided on the basis of which candidate is the better technical fit for the position’s demands; they are decided based on who looks better on TV.
Relevant credentials for holding elected office include:
Skills as a negotiator
Past success in an executive position
Knowledge of international affairs
Knowledge of economics
Public speaking ability, physical attractiveness, and debating skills don’t have any bearing on a person’s ability to make good policy decisions in a political position. Unfortunately, few American voters grasp this, and they routinely choose candidates based on those kinds of unimportant traits. The TV medium makes voters aware of those traits.
Commercials have primed Americans to vote for politicians that have the best TV personas.
Americans don’t vote in their own rational self-interests anymore; they vote for politicians who have the best TV images. The term “image politics” describes the phenomenon.
In the past, when America was a print culture, few people saw images of national politicians. They had no clue what different candidates looked like, and had to make voting decisions based on things they read in newspapers and pamphlets, and through discussions with their peers. A candidate’s “image” was not a factor.
Because TV culture is image-based, the medium has the immediacy and decontextualized qualities of photography. It infuses a mindset among its viewers that there is only a present moment, and that the past does not exist. This is partly why Americans know so little about history.
Even in Ancient Greece, a place associated with wisdom and intellectualism, government censorship of books was common (Protagoras).
George Orwell’s prediction that Western governments would eventually resort to book censorship as a way to control their citizens proved wrong. Instead, the same end has been achieved through the creation of fickle cultures in which people don’t want to read books. Huxley’s dystopia proved accurate.
In the U.S., TV censorship is done by the three big corporate media networks, not the government. This is also not what Orwell predicted. [But as internet culture shows, atomizing the media landscape and effectively eliminating the small clique of corporate gatekeepers brings a different set of problems. Now, nothing is censored, and anyone in America can look at whatever he wants. This has led to people self-segregating into highly specific demographics with their own realities and belief systems. It has also worsened the “information overload” problem, and made it harder for people to tell which information is reliable and which is not. ]
TV programs have muscled out books in the competition for Americans’ spare time.
Thanks to TV, Americans can’t tell the difference between entertainment and serious discourse anymore.
Chapter 10 – Teaching as an Amusing Activity
Sesame Street is a popular show for young children that is both entertaining and educational.
The author is skeptical of claims that any type of TV program can be very educational. Again, this owes to fundamental aspects of TV as a medium. TV watching is a passive, solitary activity, whereas effective classroom instruction is an interactive and social one.
Sesame Street encourages viewers to love TV, not school. In habituating children to TV watching, it and other “educational” programs encourage mindsets and skills that are unlike those they need to excel in the classroom.
TV is the first medium to merge teaching with entertainment. [Is the internet the second?] Learning is not supposed to be pleasurable.
Three commandments of educational TV content:
“Thou shalt have no prerequisites.” A program can’t require the viewer to have previous knowledge, and it must stand alone as a complete package. The process of learning must not be depicted as a sequential one, where learning one thing establishes a foundation for one or more new things.
“Thou shalt induce no perplexity.” All information that the program presents must be simple enough for anyone to understand. This does learners an injustice, since many concepts are not easy to grasp, and must be thought about again and again until the learner understands them.
“Thou shalt avoid exposition like the ten plagues visited upon Egypt.” All content must be presented as a story, with everything depicted visually. The viewer should never have to read a dense passage of text on the screen or see an intellectual talking at length using complex language.
Classroom instruction is taking on more aspects of entertainment. “The Voyage of the Mimi” epitomizes everything about this trend. It is a 26-episode educational TV series focusing on lessons in science and math. A package of materials includes all the videos, along with worksheets and tests that teachers use in the classroom to accompany the footage.
[Embracing the opposite extreme, which would be an overly serious and intense teaching style where no effort was made to make lessons fun, would also create problems since many students wouldn’t mentally engage. Formal classroom settings are very artificial environments and are especially unnatural for children: For 99% of our species’ existence, there were no classrooms, and children learned things informally and each day from older children and adults, who interacted with them in informal settings or during work. ]
The effectiveness of that series and others like it is dubious. Studies show that students quickly forget almost all the new information they are exposed to in video lessons.
Similarly, people quickly forget most of what they see on TV news broadcasts. However, they remember more information if they read a newspaper. The act of reading is a better way to learn something than watching a video.
As a medium, TV is suited for entertainment, not learning.
[I think the author overreacted to the first intrusions of TV into mass education in the 1980s, possibly because he assumed the trend would continue as time passed, until someday, students only watched TV programs at school. Fortunately, that didn’t come to pass, and classroom instruction is still mostly traditional and didactic, involving a teacher standing at the front of the room where he talks and writes things on a blackboard or big screen.]
Chapter 11 – The Huxleyan warning
We are now living in a Huxleyan dystopia: People voluntarily occupy themselves with entertainment and trivialities. Politics are no longer serious.
If the situation worsens, America could experience “culture-death.”
The Orwellian dystopia is no longer a threat to the world. [It’s too early to say this. As China shows, new technologies have renewed the threat and effectiveness of government-directed mass surveillance and mass control. We could be headed for a future where it is technologically possible to monitor every human in real time, and to even infer what they are thinking and feeling.]
Americans live in an invisible, insidious prison.
America’s Huxleyan dystopia is hard to fight since no one has forcefully imposed it on us, it is not centrally planned, and it lacks a written doctrine like Mein Kampf. It is everywhere and nowhere.
As a technology, TV is destroying American culture. This is hard for Americans to see and to accept, since they have a uniquely strong faith in technology and progress. Convincing them that a technology is hurting them is a major challenge.
[Since the 1980s, Americans’ opinions of technology and progress have become schizophrenic. In the 2020s, there is widespread agreement that social media and biased TV news networks have damaged American culture and discourse, that smartphones and cleverly designed apps have made people addicted to their personal devices, and that civilizational progress has already halted or soon will, leading to a long decline of living standards and order. The preoccupation with global warming doomsday scenarios and the proliferation of post-apocalyptic future movies partly speak to the latter point. At the same time, Americans are unwilling to do much to address these problems, and very few of them are taking any personal measures to prepare for the doomsday futures they say they believe are coming.]
The author’s suggestions for fighting against TV culture:
Don’t try banning TV. It’s too popular, so there’s no hope of success, and proposing such a thing will only alienate people.
Start a cultural movement in which people take long breaks from TV watching. [Reminds me of today’s phenomena of “digital detoxing” and “social media breaks.”]
Ban political commercials.
Spread awareness of this book’s main points, including the fact that different types of media have different effects on culture and mindsets.
Ironically, an effective way to make people aware of the toxic effects of TV and of the stupidity of TV programming would be to air comedy skits on TV that mocked TV and showed how the programs stupefied their viewers. Use TV to lampoon TV.
Better public education.
The author’s passing analysis of personal computers as a medium:
“For no medium is excessively dangerous if its users understand what its dangers are...To which I might add that questions about the psychic, political and social effects of information are as applicable to the computer as to television. Although I believe the computer to be a vastly overrated technology, I mention it here because, clearly, Americans have accorded it their customary mindless inattention; which means they will use it as they are told, without a whimper. Thus, a central thesis of computer technology–that the principal difficulty we have in solving problems stems from insufficient data–will go unexamined. Until, years from now, when it will be noticed that the massive collection and speed-of-light retrieval of data have been of great value to large-scale organizations but have solved very little of importance to most people and have created at least as many problems for them as they may have solved.”
[The analysis is both very wrong and very right. Personal computing devices have transformed society, the economy, and our daily habits so much since the 1980s that it’s hard to defend a claim that they have proved “to be a vastly overrated technology.” However, the author rightly predicted that computing devices paired with the internet would, like TV, inundate people with large amounts of irrelevant, decontextualized information. In fact, the problem has gotten worse since the amount of internet content available now is exponentially larger than the amount of TV content that was available in the 1980s. In the internet era, American politics have gotten more dysfunctional and childish, and elections are decided for more fickle reasons than in the 1980s. Today, Americans actually look back on the 1980s as a calmer and more hopeful era when people had better social skills. Ronald Reagan, whom the author bashes as being a superficial and dishonest man who cleverly exploited the TV medium to become President and hide his later mistakes, was much more intellectual, dignified and well-spoken than Donald Trump, who exploited social media (Twitter, specifically) to become President and to control the national political narrative during his term of office.
It’s certainly true that more data about a problem helps you to formulate a good solution to it, and that personal computing devices and the internet can be used to gather data about problems. However, the medium’s flaw is that bad data is mixed in with good data, that it can be very hard for people to tell them apart, and that human psychology naturally leads people to latch on to data that are psychologically or emotionally comforting to them. There’s no correlation between how comforting a belief is and how true it is.
The author’s point that the computer / internet era would enrich large organizations that found ways to leverage information technology to make money was very accurate. As of this writing, six of the top ten global companies with the highest market caps are technology companies that use customer data collection and analysis to make most or all of their money.
The author’s final prediction that computers will end up creating at least as many problems for ordinary people as they solve is debatable. Certainly, computing devices and the internet have created a variety of problems and worsened problems that existed during the TV culture era of the 1980s, but the new paradigm has also benefitted people in many important ways. For example, it has made commerce easier and more efficient since customers now have access to a much larger array of goods and services, which they can purchase by pushing a button, without having to leave home. It’s debatable whether computers and the internet have, on balance, failed to improve the lives of ordinary people.]
Terminator Salvation is a 2009 action / sci-fi film set in the then-future year of 2018. It follows the events of the preceding film, Terminator 3: Rise of the Machines, in which the U.S. military supercomputer “Skynet” initiated a nuclear war in or around 2005 to kick off its longer-term project to exterminate humankind. Nuclear bombs, subsequent conventional warfare between humans and machines, and years of neglect have ruined the landscape. Most of the prewar human population has died, and survivors live in small, impoverished groups that spend most of their time evading Skynet’s killer machine patrols. The film is mostly set in the wreckage of Los Angeles, once one of the world’s most important cities, but now all but abandoned.
The character “John Connor” returns as a leading figure within the human resistance, though his comrades are divided over whether his claims about time travel are true. To some, he is almost a messianic figure who has direct knowledge of events going out to 2029, including Skynet’s inevitable defeat. To others, he is just a good battlefield commander who likes telling unprovable personal stories about time machines and friendly Terminators that visited him and his mother before the nuclear war. Rivalries over military strategy between Connor and a group of generals who are skeptical of him are an important plot element.
John Connor’s father, “Kyle Reese,” is also in the film, but due to the perplexities of time travel, he is younger than Connor in 2018 and has not had sex with the latter’s mother yet. A third key character, named “Marcus Wright,” is a man who wakes up on the outskirts of the L.A. ruins with only fragmentary memories of his own life, and no awareness of the ongoing human-machine war (the first time he sees an armed Terminator walking around, he calls for its help). Unsurprisingly, there’s more to him than meets the eye, and he becomes pivotal to determining the fate of the human resistance.
I thought Terminator Salvation was mediocre overall, and had an overly complicated plot and too many characters. Keeping track of who was a good guy, who was a bad guy, and why one person was threatening or shooting a gun at another was harder than it should have been. Several of the film’s events were also silly or implausible, which inadvertently broke with its otherwise bleak and humorless mood.
At the same time, I liked how Terminator Salvation moved beyond the played-out formula of the previous three films. While the characters mentioned the importance of time travel technology to the success of the human war effort, no one actually did any time traveling in the movie. There was no desperate race to prevent Skynet from starting a nuclear war because the war had already happened. This was also the first Terminator film set in the future, not the present, which let us see a new part of the Terminator franchise universe. The acting was also pretty good.
The potential for a good movie was there, but the filmmakers bogged Terminator Salvation down with too many bad elements. I don’t recommend wasting your time on it.
Analysis:
Machine soldiers will be bad shots. Towards the beginning of the film and again at the end, the humans encounter humanoid “T-600” combat robots, which are armed with miniguns. In both battles, the machines spew enormous volumes of fire (miniguns shoot 33 to 100 bullets per second) at the humans and miss every shot. This is a very inaccurate (pun intended) depiction, as combat robots have the potential to be better than the best human sharpshooters.
In fact, machines were put in charge of aiming larger weapons decades ago. “Fire control computers,” which consider all the variables affecting the trajectory of projectiles (e.g. – distance, wind, elevation differences between gun and target, amount of propellant behind the projectile, air density, movement of the platform on which the gun itself is mounted), are used to aim naval guns, tank cannons, antiaircraft machine guns, and other projectile weapon systems. In those roles, they are vastly faster and better than humans.
In the next 20 years, fire control computers will get small enough and cheap enough to go into tactical scopes, and entire armies might be equipped with them as standard equipment. A soldier looking through such a scope would see the crosshair move, indicating where he had to point the gun to hit the target. For example, if the target were very far away, and the bullet’s drop during its flight needed to be compensated for, the crosshair would shift until it was above the target’s head. Smart scopes like these, paired with bullets that could steer themselves a little bit, will practically turn any infantryman into a sniper.
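To make the idea concrete, here is a minimal sketch, in Python, of the kind of holdover calculation such a scope would have to run. It assumes heavily simplified physics (no wind or air drag, level fire), and the range and muzzle velocity are just illustrative numbers:

```python
# A minimal sketch of the holdover calculation a "smart scope" might run.
# Assumes level fire with no wind or air drag (real fire control computers
# model those too); the range and muzzle velocity below are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def holdover(range_m: float, muzzle_velocity_mps: float) -> float:
    """Bullet drop over the given range, in meters -- how far above the
    target the crosshair has to be shifted."""
    time_of_flight = range_m / muzzle_velocity_mps  # seconds
    return 0.5 * G * time_of_flight ** 2            # basic free-fall drop

# A rifle round at ~900 m/s fired at a target 500 m away drops about 1.5 m.
print(round(holdover(500, 900), 2), "meters of holdover")
```

A scope (or a combat robot) running this kind of math continuously, with the full set of variables, would adjust its aim faster and more precisely than any human ever could.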
Human-sized combat robots would be even more accurate than that. Under the stress of battlefield conditions, human soldiers commonly make all kinds of mistakes and forget lessons from their training, including those relating to marksmanship. Machines would keep their cool and perform exactly as programmed, all the time. Moreover, simply being a human is a disadvantage, since the very act of breathing and even the tiny body movements caused by heartbeats can jostle a human shooter’s weapon enough to make the bullet miss. Machines would be rock-steady, and capable of very precise, controlled movements for aiming their guns.
Machines wouldn’t just be super-accurate shots, but super-fast shots. From the moment one of them spotted a target, it would be a matter of only three or four seconds–just as long as it takes to raise the gun and swing it in the right direction–before it fired a perfectly aimed shot. With quick, first-shot kills virtually guaranteed, machine soldiers will actually have LESS of a need for fully automatic weapons like the miniguns the Terminators used in the film.
It would have been more realistic if the T-600s had been armed with standard AR-15 rifles that they kept on semi-automatic mode almost all of the time, and if the film had shown them being capable of sniper-like accuracy with the weapons, even though the shots were being fired much faster than a human sniper could. The depiction would also have shown how well-aimed shots at humans safe behind cover (e.g. – good guy pokes his head around corner, and one second later, a bullet hits the wall one inch from his forehead) could be just as “suppressive” and demoralizing as large volumes of inaccurate, automatic gunfire from a machine gun.
So watch out. If your robot butler goes haywire someday, it will be able to do a lot of damage with Great-grandpa’s old M1 Garand you keep in your closet.
Hand-to-hand fights with killer robots will go on and on. There are two scenes where poor John Connor gets into hand-to-hand combat with Terminators. Both times, the fighting is drawn-out, and John survives multiple strikes, grabs and shoves from his machine opponents, allowing him to hit back or scramble away. This is totally unrealistic. A humanoid robot several times stronger than a grown man, made of metal, and unable to feel pain would be able to incapacitate or fatally wound any human with its first strike. The Terminators in the film could have simply grabbed any part of John Connor’s body and squeezed to break all the bones underneath in seconds, causing a grotesque and cripplingly painful injury.
The protracted, hand-to-hand fights in the film are typical Hollywood action choreography, and are the way they are because they are so dramatic and build tension. They’re also familiar since they resemble matches in professional fighting sports, like boxing, MMA and wrestling. However, we can’t make the mistake of assuming actual fights with robots in the future will be like either. Professional fights are held between people of similar sizes and skill levels, and are governed by many rules, including allowances for rest breaks. As such, it often takes a long time for one fighter to prevail over the other, and winning requires the use of refined fighting techniques. A real-world fight between something like a Terminator and a human would feature a huge disparity in strength, fighting skill, and endurance that favored the machine, and would have no rules, allowing the machine to use brutal moves meant to cause maximum pain and incapacitation. It would look much more like a street fight ending in a single suckerpunch knockout than a professional boxing match.
Actual hand-to-hand combat with killer robots will almost always result in the human losing in seconds. Owing to their superior strength, pain insensitivity, and metal bodies that couldn’t be hurt by human punches or kicks, killer robots will not need to use complex fighting tactics (e.g. – dodges, blocks, multiple strikes) to win–one or two simple, swift moves like punching the human in the forehead hard enough to crack their skull, or jamming a rigid metal finger deep into the human’s eye, would be enough.
Terminator Salvation only depicts this accurately once, when a Terminator deliberately punches one of the characters on the left side of his chest, knowing the force of the impact will stop his heart. In the first Terminator movie, there was also a scene where the machine kills a man with a single punch that is so hard it penetrates his rib cage (the Terminator then pulls his hand out, still grasping the man’s now-severed heart), and in Terminator 2, the shapeshifting, evil Terminator kills a prison guard by shoving its sharpened finger through his eye and into his brain.
Some machines will be aquatic. A common type of combat robot in the movie is an eel-like machine with large, sharp jaws that it uses to bite humans to death. They live in bodies of water and surface to attack any humans who go in or near them. Though at first glance, this might seem unrealistic since electronics and water don’t mix, it actually isn’t. Machines can be waterproofed, and they can cool themselves off much better when immersed in water than when surrounded by air. (I explored this in my blog post “Is the ocean the ideal place for AI to live?”)
One of the few things I liked about Terminator Salvation was its depiction of the diversity of machine types. Just as there are countless animal and plant species in the world, each suited in form to a unique function and ecological niche, there will be countless machine “species” with different types of bodies. The Matrix films also did a good job depicting this during some of the scenes set in the machine-ruled parts of the “Real World.”
We should expect machines to someday live on nearly every part of the planet, such as oceans (both on the surface and below it), mountaintops, deserts, and perhaps even underground. Intelligent, technological evolution will shape their bodies in the same ways that unguided, natural evolution has shaped those of the planet’s countless animal species, and there could be certain environments where machines find it optimal to have eel-like bodies. Terminator Salvation’s hydrobots were thus realistic depictions of machines that could exist someday, though it won’t be until the next century that aquatic robots become as common in bodies of water as they were in the film.
Small robots will be used for mass surveillance. Another type of machine in the film is the “aerostat”–a flying surveillance drone about the same size and shape as a car tire. A single, swiveling rotor where its hubcap should be keeps it aloft. The aerostats have cameras, microphones, and possibly other sensors to monitor their surroundings. They seek out activity that might indicate a human presence, and transmit their findings to Skynet, which can deploy machines specialized for combat or human abduction to the locations. Aerostats seem to be unarmed.
Flying surveillance drones about the size of aerostats have existed for years, so in that respect, the film is not showing anything new. What’s futuristic about the depiction is 1) the aerostats are autonomous, meaning they can decide to fly off to investigate potential signs of humans and report their findings after, and 2) they are so numerous that the humans live in fear of them and must take constant measures to hide from them. Something as innocuous as turning a radio on high volume for a few seconds will attract an aerostat’s attention.
Though they are unarmed and certainly not as intimidating as the other machines in the movie, the aerostats are surely no less important to Skynet’s war effort against the human race. Knowing where the enemy is, and in what numbers, is invaluable to any military commander. The aerostat surveillance network, coupled with Skynet’s ability to rapidly deploy combat machines wherever humans were detected, also put the humans at a major strategic disadvantage by preventing them from aggregating into large groups.
Autonomous surveillance drones no bigger than aerostats will exist in large numbers by the middle of this century, and will have different forms. Some will be airborne while others will be terrestrial or aquatic. Many of them will be able to function by themselves in the field for days on end, and they will be able to hide from enemies through camouflage (perhaps by resembling animals) and evasion. The drones will give generals much better surveillance of battle spaces and even of the enemy’s home territory, and a soldier near the front lines who merely speaks loudly in his foxhole will risk being hit by a mortar in less than a minute, with his coordinates radioed in by a tiny surveillance drone camouflaged against a nearby tree trunk.
Criminals AND law enforcement will find uses for the drones, and, sadly, so will dictators. Mass drone surveillance networks will give the latter heightened abilities to monitor their citizens and punish disloyalty. It sounds crazy, but someday, you’ll look at a bird perched on a branch in your backyard and wonder if it’s a robot sent to spy on you.
People will be able to transplant their brains into robot bodies. SPOILER ALERT–one of the main characters is a man whose brain was transplanted into a robot body while he was in cryostasis. Because the body looks human on the outside and his memories of the surgery and the events leading up to it were wiped, he doesn’t realize what his true nature is. He only figures it out midway through the film, when he sustains injuries that blow away his fake skin to reveal the shiny metal endoskeleton underneath. He is as strong and as durable as a Terminator and can interface his mind with Skynet’s thanks to a computer chip implanted in his brain.
Transplanting a human brain into a robot body is theoretically possible, it would bring many advantages, and it will be done in the distant future. As the film character shows, robot bodies are stronger and more robust than natural flesh and bone bodies, and hence protect people from normally fatal injuries. This will get more important in the distant future because after we find cures for all major diseases and for the aging process, injuries caused by accidents, homicides and suicides will be the only ways to die. As such, transplanting your brain into a heavily armored robot body will be the next logical step towards immortality. Even better might be transplanting your brain into a heavily armored jar, locked in a thick-walled room, with your brain interacting with the world through remote-controlled robot bodies that would feel like the real thing to you.
The ability to pick any body of your choice (e.g. – supermodel, bodybuilder, giant spider, dinosaur) will have profound implications for human self-identity, culture, and society, and will be liberating in ways we can’t imagine. Conceptually, bringing this about is a simple matter of connecting all the sensory neurons attached to your brain to microscopic “wires” that then connect to a computer, but the specifics of the required engineering will be very complicated. Additionally, your brain would need a life support system that provided it with nutrients and oxygen, extracted waste, kept it at the right temperature, and protected it from germs. The whole unit might be the size of a basketball, with the brain and the critical machinery on the inside. The exterior of the unit might have a few ports for plugging in data cables and plugging in hoses that delivered water, nutrients and blood, and drained waste. A person could switch bodies by pulling his brain unit out of his body and placing it into the standard-sized brain unit slot in a new body.
While this scenario is possible in theory, it will require major advances in many areas of science and technology to bring about, including nanotechnology, synthetic organs, prosthetics, and brain-computer interfaces. I don’t expect it to be reality until well into the 22nd century. By the same time, technology will also let us alter our memories and minds and to share thoughts with each other, and humans with all of the available enhancements will look at the humans of 2021 the same way you might look at a person with severe physical and mental disabilities today. The notion of being trapped in a single body that you didn’t even choose and have minimal ability to change will sound alien and stultifying.
The Mark I Fire Control Computer was the first machine the U.S. Navy used to aim the big guns of its warships. As technology has improved, smaller, cheaper, and better Fire Control Computers have been installed in other weapon systems, like tank cannons. Human-sized machines with these devices are a logical future phase in the progression of the technology. https://en.wikipedia.org/wiki/Mark_I_Fire_Control_Computer
The video shows that a no-frills .22 LR rifle can consistently hit torso-sized targets at the remarkable distance of 500 yards if aimed perfectly. Machines will be able to aim perfectly, meaning they will be able to use regular guns much more effectively than humans, lessening the need for fully automatic gunfire. https://youtu.be/2dn-bqyMkfs
Time for…another Ray Kurzweil analysis. It’s funny how I keep swearing to myself I won’t write another one about him, but end up doing so anyway. I’m sorry. For sure, there won’t be anything more about him until next year or later.
In my last blog post, “Will Kurzweil’s 2019 be our 2029?”, I mentioned that several of his predictions for 2019 were wrong, and would probably still be wrong in 2029, but that it didn’t matter since they pertained to inconsequential things. Rather than leave all two of you who read my blog hanging in suspense, I’d like to go over those and explain my thoughts. As before, these predictions are taken from Kurzweil’s 1998 book The Age of Spiritual Machines.
The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.
To be clear, by 2030, standalone AR and VR eyewear will have the levels of capability Kurzweil envisioned for 2019. However, it’s unknowable whether retinal projection will be the dominant technology they use to show images to their wearers. Other technologies, like lenses made of transparent LCD screens or small projectors that beam images onto semitransparent lenses, could end up dominant. Whichever gains the most traction by 2030 is irrelevant to the consumer–they will only care about how smooth and convincing the digital images displayed in front of them look.
“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication.”
The first sentence was wrong in 2019 and still will be in 2029. As old-fashioned as they may be, keyboards have many advantages over other modes of interacting with computers:
Keyboards are physically large and have big buttons, meaning you’re less likely to push the wrong one than you are on a tiny smartphone keyboard.
They have many keys corresponding not only to letters and numbers, but to functions, meaning you can easily use a basic keyboard to input a vast range of text and commands into a computer. Imagine how inefficient it would be to input a long URL into a browser toolbar or to write computer code if you had to open all kinds of side menus on your input device to find and select every written symbol, including colons, semicolons, and dollar symbols. Worse, imagine doing that using “hand gestures” and “facial expressions.”
Keyboards are also very ergonomic to use and require nothing more than tiny finger movements and flexions of the wrists. By contrast, inputting characters and commands into your computer through some combination of body movements, gestures and facial expressions that it would see would take you much more time and physical energy (compare the amount of energy it takes you to push the “A” button on your keyboard with how much it takes to raise both of your arms up and link your hands over your head with your elbows bent to turn your body into something resembling an “A” shape). And you’d have to go to extra trouble to make sure the device’s camera had a full view of your body and that you were properly lit. This is why something like the gestural interface Tom Cruise used in Minority Report will never become common.
Furthermore, two-way voice communication with computers has its place, but won’t replace keyboards. First, talking with machines sacrifices your privacy and annoys the people within earshot of you. Imagine a world where keyboards are banned and people must issue voice commands to their computers when searching for pornography, and where workers in open-concept offices have to dictate all their emails. Second, verbal communication works poorly in noisy environments since you and your machine have problems understanding each other. It’s simply not a substitute for using keyboards.
Even verbal communication plus gestures, facial expressions, and anything else won’t be enough to render keyboards obsolete. If you want to get any kind of serious work done, you need one.
This will still hold true in 2029, and keyboards will not be “rare” then, or even in 2079. Kurzweil will still be wrong. But so what? The keyboard won’t be “blocking” any other technology, and given its advantages over other modes of data and command input, its continued use is unavoidable and necessary.
Let me conclude this section by saying I can only imagine keyboards becoming obsolete in exotic future scenarios. For example, in a space ship crewed entirely by robots, keyboards, mice, and even display screens might be absent since the robots and the ship would be able to directly communicate through electronic signals. If the captain wanted to turn left, it would think the command, and the ship’s sensors would receive it and respond. And in his mind’s eye, the captain would see live footage from external ship cameras.
“Cables have largely disappeared.”
As I wrote in the analysis, it will still be common for control devices and peripheral devices to have data cables in 2029 due to better information security and slightly lower costs. Moreover, in many cases there will be no functional disadvantage to having corded devices, as they never need to leave the vicinity of whatever they are connected to. Consider: if you have a PC at your work desk, why would you ever need to move your keyboard to anyplace other than the desk’s surface? To use your computer, you need to be close to it and the monitor, which means the keyboard has to stay close to them as well. In such a case, a keyboard with a standard, five-foot-long cord would serve you just as well as a wireless keyboard that could connect to your PC from a mile away.
“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”
This was badly wrong in 2019, and in 2029, the “nonhuman” portion of all computation on Earth will probably be no higher than 1%, so it will still be wrong. But so what? Comparisons of how much raw thinking humans and machines do are misleading since they are “apples to oranges,” and they provide almost no useful insights into the overall state of computer technology or automation.
When it comes to computation, quantity does not equal quality. Consider this example: I estimated that, in 2019, all the world’s computing devices combined did a total of 3.5794 x 10^21 flops of computation. Now, if someone invented an AGI that was running on a supercomputer that was, say, ten times as powerful as a human brain, the AGI would be capable of 200 petaflops, or 2.0 x 10^17 flops. Looking at the raw figures for global computation, it would seem like the addition of that AI changed nothing: the one supercomputer it was running on wouldn’t even make the global computation count of 3.5794 x 10^21 flops increase by one significant digit! However, anyone who has done the slightest thinking about AI’s consequences knows that one machine would be revolutionary, able to divide its attention in many directions at once, and would have inaugurated a new era of much faster economic, scientific, and technological growth that would have been felt by people across the world.
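For anyone who wants the arithmetic spelled out, here is the comparison as a few lines of Python, using the same illustrative figures as the paragraph above:

```python
# The arithmetic behind the example above, using my rough 2019 estimate of
# global computation and the hypothetical 200-petaflop AGI supercomputer.

global_flops = 3.5794e21  # estimated total computation of all devices, flops
agi_flops = 2.0e17        # one AGI supercomputer at 200 petaflops

share = agi_flops / (global_flops + agi_flops)
print(f"The AGI would account for only {share:.4%} of global computation")
# -> roughly 0.0056%, far too small to nudge the headline figure
```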
“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”
Rotating computer memories–also called “hard disk drives” (HDD)–were still common in 2019, and will still be in 2029, though less so. This is because HDDs have important advantages over their main competitor, solid-state drives (SSDs), often called “flash drives,” and those advantages will not disappear over this decade.
HDDs are cheaper on a per-bit basis and are less likely to suffer data corruption or data loss. SSDs, on the other hand, are more physically robust since they lack moving parts, and allow much faster access to the data stored in them since they don’t contain disks that have to “spin up.” Given the tradeoffs, in 2029, HDDs will still be widely used in data centers and electronic archive facilities, where they will store important data which needs to be preserved for long periods, but which isn’t so crucial that users need instantaneous access to it. Small consumer electronic devices, including smartphones, smart watches, and other wearables, will continue to exclusively have SSD memory, and finding newly manufactured laptops with anything but SSDs might be impossible. Only a small fraction of desktop computers will have HDDs by then.
So rotating memories will still be around in 2029, meaning the prediction will still be wrong since it contains the absolute term “fully replaced.” But again, so what? All of the data that average people need to see on a day-to-day basis will be stored on SSDs, ensuring they will have instantaneous access to it. The cost of HDD and SSD memory will have continued its long-running, exponential improvement, making both trivially cheap by 2029 (it was already so cheap in 2019 that even poor people could buy enough to meet all their reasonable personal needs). The HDDs that still exist will be out of sight, either in server farms or in big, immobile boxes that are on or under peoples’ work desks. The failure of the prediction will have no noticeable impact, and if you could teleport to a parallel universe where HDDs didn’t exist anymore, nothing about day-to-day life would seem more futuristic.
“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”
The cameras that make use of quantum effects and reflected light never got good enough to exit the lab, and it’s an open question whether they will be commercialized by 2029. I doubt it, but don’t see why it should matter. Billions of cameras–most of them tiny enough to fit on smartphones–already are practically everywhere and will be even more ubiquitous in 2029. It’s not relevant whether they make use of exotic principles to capture video and still images or whether they do so through conventional methods involving the capture of visible light. The important aspects of the prediction–that cameras will be very small and all over the place–were right in 2019 and will be even more right in 2029.
“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.”
This prediction was technologically possible in 2019, but didn’t come to pass because many people showed a (perhaps unpredictable) preference for paper books and documents. It turns out there’s something appealing about the tactile experience of leafing through books and magazines and being able to carry them around that PDFs and tablet computers can’t duplicate. Personal computing devices had to become widely available before we could realize old fashioned books and sheets of paper had some advantages.
Come 2029, paper books, magazines, journals, newspapers, memos, and letters will still be commonly encountered in everyday life, so the prediction will still be wrong. Fortunately, the persistence of paper isn’t a significant stumbling block in any way since all important paper documents from the pre-computer era have been scanned and are available over the internet for free or at low cost, and all important new written documents originate in electronic format.
“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”
3D volumetric displays didn’t advance nearly as fast as Kurzweil predicted, so this was wrong in 2019, and the technology doesn’t look poised for a breakthrough, so it will still be wrong in 2029. However, it doesn’t matter since VR goggles and probably AR glasses as well will let people have the same holographic experiences. By 2029, you will be able to put on eyewear that displays lifelike, moving images of other people, giving the false impression they are around you. Among other things, this technology will be used for video calls.
“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”
The haptic/kinetic/touch aspect of virtual reality is very underdeveloped compared to its audio and visual aspects, and will still lag far behind in 2029, but little will be lost thanks to this. After all, if you’re playing a VR game, do you want to be able to feel bullets hitting you, or to feel the extreme temperatures of whatever exotic virtual environment you’re in? Even if we had skintight catsuits that could replicate physical sensations accurately, would we want to wear them? Slipping on a VR headset that covers your eyes and ears is fast and easy–and will become even more so as the devices miniaturize thanks to better technology–but taking off all your clothes to put on a VR catsuit is much more trouble.
A VR headset is made of smooth metal and high-impact plastic, making it easy to clean with a damp rag. By contrast, a catsuit made of stretchy material and studded with hard servos, sensors and other little machines would soak up sweat, dirt and odors, and couldn’t be thrown in the washing machine or dryer like a regular garment since its parts would get damaged if banged around inside. It’s impractical.
“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”
I doubt that VR body suits and VR “booths” will be able to satisfactorily replicate anything but a narrow range of sex acts. Given the extreme importance of tactile stimulation, the setup would have to include a more sophisticated and expensive catsuit than the one described above. There would also need to be devices for the genitals, adding more costs, and possibly other contraptions to apply various types of physical force (thrust, pull, resistance, etc.) to the user. Cleanup would be even more of a hassle. [Shakes head]
The fundamental limits to this technology are such that I don’t think it will ever become “popular” since VR sex will fall so far short of the real thing. That said, I believe another technology, androids, will be able to someday “do it” as well as humans. Once they can, androids will become some of the most popular consumer devices of all time, with major repercussions for dating, marriage, gender relations, and laws relating to sex and prostitution. They would let any person, regardless of social status, looks, or personality, have unlimited amounts of “sex,” which is unheard of in human history. Just don’t expect it until near the end of this century!
“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”
As with replacing all books with PDFs on computer displays, there was no technological barrier to this in 2019, but it didn’t happen because most transactions remained face-to-face, and because people preferred online transactions involving simple button-clicks rather than drawn-out conversations with fake human salesmen. The consumer preferences were not clear when the prediction was made in 1998.
By 2029, the prediction will still be wrong, though it won’t matter, since buying things by simply clicking on buttons and typing a few characters is faster and much less aggravating than doing the same transactions through a “simulated person.” Anyone who has dealt with a robot operator on the phone that laboriously enunciates menu options and obtusely talks over you when you are responding will agree. It would be a step backwards if that technology became more widespread by 2029.
“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”
Sensors and transmitters that could guide cars were never installed along roadways, but it didn’t turn out to be a problem since we found that cars could use GPS and their own onboard sensors to navigate just as well. So the prediction was wrong, and the expensive roadside networks will still not exist in 2029, but it won’t matter.
The second part of the prediction will be half right by 2029, and its failure to be 100% right will be consequential. By then, autonomous cars will be statistically safer than the average human driver and will be in the “human range” of “efficiency,” albeit towards the bottom of the range: they will still be overly cautious, slowing down and even stopping whenever they detect slightly dangerous conditions (e.g. – erratic human driver nearby, pedestrian who looks like they might be about to cross the road illegally, heavy rain, dead leaves blowing across the road surface). In short, they’ll drive like old ladies, which will be annoying at times.
While the technology will be cheaper and more widely accepted, it will still be a luxury feature in 2029 that only a minority of cars in rich countries have. At best, a token number of public roads worldwide will ban human-driven vehicles. Enormous numbers of lives will be lost in accidents, and billions of dollars wasted in traffic jams each year thanks to autonomous car technology not advancing as fast as Kurzweil predicted.
“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”
In 2019, the sports industry had the highest revenues in the entertainment sector, totaling $480 – $620 billion. That year, the VR gaming industry generated a paltry $1.2 billion in revenue, so the prediction was badly wrong for 2019. And even if the latter grows twentyfold over this decade, which I think is plausible, it won’t come close to challenging the dominance of sports.
That said, looking at revenues is kind of arbitrary. The spirit of the prediction, which is that VR gaming will become a very popular and common means of entertainment, will be right by 2029 in rich countries, and it will only get more widespread with time.
“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”
The devices are already built into some smartwatches, and will be “widely used” by any reasonable metric by 2029. I don’t think they will be shrunk to the sizes of jewelry like rings and earrings, but that won’t have any real consequences since the watches will be available. No one in 2029 will say “I’m really concerned about my heart problem and want to buy a wearable monitoring device, but my health is not so important that I would want to trouble myself with a watch. However, I’d be OK with a ring.”
Health monitoring devices won’t be built into articles of clothing for the same reasons that other types of computers won’t be built into them: 1) laundering and drying the clothes would be a hassle since water, heat and being banged around would damage their electronic parts and 2) you’d have to remember to always wear your one shirt with the heartbeat monitor sewn into it, regardless of how appropriate it was for the occasion and weather, or how dirty it was from wearing it day after day. It makes much more sense to consolidate all your computing needs into one or two devices that are fully portable and easy to keep clean, like a smartphone and smartwatch, which is why we’ve done that.
In 1999, Ray Kurzweil, one of the world’s greatest futurists, published a book called The Age of Spiritual Machines. In it, he made the case that artificial intelligence, nanomachines, virtual reality, brain implants, and other technologies would greatly improve during the 21st century, radically altering the world and the human experience. In the final four chapters, titled “2009,” “2019,” “2029,” and “2099,” he made detailed predictions about what the state of key technologies would be in each of those years, and how they would impact everyday life, politics and culture.
Towards the end of 2009, a number of news columnists, bloggers and even Kurzweil himself weighed in on how accurate the predictions from his “2009” chapter turned out to be. By contrast, no such analysis was done over the past year regarding his 2019 predictions. As such, I’m taking it upon myself to do it.
I started analyzing the accuracy of Kurzweil’s predictions in late 2019 and wanted to publish my full results before the end of that year. However, the task required me to do much more research than I had expected, so I missed that deadline. Really digging into the text of The Age of Spiritual Machines and parsing each sentence made it clear that the number and complexity of the 2019 predictions were greater than a casual reading would suggest. Once I realized how big of a task it would be, I became kind of demoralized and switched to working on easier projects for this blog.
With the end of 2020 on the horizon, I think time is running out to finish this, and I’ve decided to tackle the problem. Except where noted, I will only use sources published before January 1, 2020 to support my conclusions.
“Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”
RIGHT
A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is (also, it doesn’t even need to run on electricity). This means something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer. These kinds of items were ubiquitous in developed countries in 1998 when Ray Kurzweil wrote the book, so his “futuristic” prediction for 2019 could have just as easily applied to the reality of 1998. This is an excellent example of Kurzweil making a prediction that leaves a certain impression on the casual reader (“Kurzweil says computers will be inside EVERY object in 2019!”) that is unsupported by a careful reading of the prediction.
“People routinely use three-dimensional displays built into their glasses or contact lenses. These ‘direct eye’ displays create highly realistic, virtual visual environments overlaying the ‘real’ environment.”
MOSTLY WRONG
The first attempt to introduce augmented reality glasses in the form of Google Glass was probably the most notorious consumer tech failure of the 2010s. To be fair, I think this was because the technology wasn’t ready yet (e.g. – small visual display, low-res images, short battery life, high price), and not because the device concept is fundamentally unsound. The technological hangups that killed Google Glass will of course vanish in the future thanks to factors like Moore’s Law. Newer AR glasses, like Microsoft’s Hololens, are already superior to Google Glass, and given the pace of improvement, I think AR glasses will be ready for another shot at widespread commercialization by the end of the 2020s, but they will not replace smartphones for a variety of reasons (such as the unwillingness of many people to wear glasses, widespread discomfort with the possibility that anyone wearing AR glasses might be filming the people around them, and durability and battery life advantages of smartphones).
Kurzweil’s prediction that contact lenses would have augmented reality capabilities completely failed. A handful of prototypes were made, but never left the lab, and there’s no indication that any tech company is on the cusp of commercializing them. I doubt it will happen until the 2030s.
However, people DO routinely access augmented reality, but through their smartphones and not through eyewear. Pokemon Go was a worldwide hit among video gamers in 2016, and is an augmented reality game where the player uses his smartphone screen to see virtual monsters overlaid across live footage of the real world. Apps that let people change their appearances during live video calls (often called “face filters”), such as by making themselves appear to have cartoon rabbit ears, are also very popular among young people.
So while Kurzweil got augmented reality technology’s form factor wrong, and overestimated how quickly AR eyewear would improve, he was right that ordinary people would routinely use augmented reality.
The augmented reality glasses will also let you experience virtual reality.
WRONG
Augmented reality glasses and virtual reality goggles remain two separate device categories. I think we will someday see eyewear that merges both functions, but it will take decades to invent glasses that are thin and light enough to be worn all day, untethered, but that also have enough processing power and battery life to provide a respectable virtual reality experience. The best we can hope for by the end of the 2020s will be augmented reality glasses that are good enough to achieve ~10% of the market penetration of smartphones, and virtual reality goggles that have shrunk to the size of ski goggles.
Of note is that Kurzweil’s general sentiment that VR would be widespread by 2019 is close to being right. VR gaming made a resurgence in the 2010s thanks to better technology, and looks poised to go mainstream in the 2020s.
The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.
PARTLY RIGHT
The most popular AR glasses of the 2010s, Google Glass, worked by projecting images onto their wearer’s retinas. The more advanced AR glass models that existed at the end of the decade used a mix of methods to display images, none of which has established dominance.
The “Magic Leap One” AR glasses use the retinal projection technology Kurzweil favored. They are superior to Google Glass since images are displayed to both eyes (Glass only had a projector for the right eye), in higher resolution, and covering a larger fraction of the wearer’s field of view (FOV). Magic Leap One also has advanced sensors that let it map its physical surroundings and movements of its wearer, letting it display images of virtual objects that seem to stay fixed at specific points in space (Kurzweil called this feature “Virtual-reality overlay display”).
Microsoft’s “Hololens” uses a different technology to produce images: the lenses are in fact transparent LCD screens. They display images just like a TV screen or computer monitor would. However, unlike those devices, the Hololens’ LCDs are clear, allowing the wearer to also see the real world in front of them.
The “Vuzix Blade” AR glasses have a small projector that beams images onto the lens in front of the viewer’s right eye. Nothing is directly beamed onto his retina.
It must be emphasized again that, at the end of 2019, none of these or any other AR glasses were in widespread or common use, even in rich countries. They were confined to small numbers of hobbyists, technophiles, and software developers. A Magic Leap One headset cost $2,300 – $3,300 depending on options, and a Hololens was $3,000.
And as stated, AR glasses and VR goggles remained two different categories of consumer devices in 2019, with very little crossover in capabilities and uses. The top-selling VR goggles were the Oculus Rift and the HTC Vive. Both devices use tiny OLED screens positioned a few inches in front of the wearer’s eyes to display images, and as a result, are much bulkier than any of the aforementioned AR glasses. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.
“[There] are auditory ‘lenses,’ which place high resolution-sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.”
MOSTLY RIGHT
Humans have the natural ability to tell where sounds are coming from in 3D space because we have “binaural hearing”: our brains can calculate the spatial origin of the sound by analyzing the time delay between that sound reaching each of our ears, as well as the difference in volume. For example, if someone standing to your left is speaking, then the sounds of their words will reach your left ear a split second sooner than they reach your right ear, and their voice will also sound louder in your left ear.
By carefully controlling the timing and loudness of sounds that a person hears through their headphones or through a single speaker in front of them, we can take advantage of the binaural hearing process to trick people into thinking that a recording of a voice or some other sound is coming from a certain direction even though nothing is there. Devices that do this are said to be capable of “binaural audio” or “3D audio.” Kurzweil’s invented term “audio lenses” means the same thing.
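As a rough illustration of the math these devices manipulate, here is a short Python sketch of the interaural time difference for a sound source at a given angle. The 18 cm ear spacing is an assumed round number, and real binaural audio engines use far more detailed head models:

```python
import math

# A rough sketch of the interaural time difference (ITD) that binaural
# audio exploits. Simplified model: ears ~18 cm apart, sound treated as a
# plane wave; real "3D audio" engines use far more detailed head models.

SPEED_OF_SOUND = 343.0  # meters per second in room-temperature air
EAR_SPACING = 0.18      # meters; rough, assumed head width

def interaural_time_difference(azimuth_deg: float) -> float:
    """Delay (seconds) between the sound reaching the near ear and the far
    ear, for a source at the given angle off dead-ahead."""
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A voice 90 degrees to your left reaches your left ear roughly half a
# millisecond before your right ear -- enough for the brain to localize it.
print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")
```

Headphones or a single speaker only have to reproduce delays and volume differences on this scale to fool the brain into placing a sound at a particular spot in space.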
Yes, there are eyeglasses with built-in speakers that play binaural audio. The Bose Frames “smart sunglasses” is the best example. Even though the devices are not common, they are commercially available, priced low enough for most people to afford them ($200), and have gotten good user reviews. Kurzweil gets this one right, and not by an eyerolling technicality as would be the case if only a handful of million-dollar prototype devices existed in a tech lab and barely worked.
Wireless earbuds are much more popular, and upper-end devices like the SoundPEATS Truengine 2 have impressive binaural audio capabilities. It’s a stretch, but you could argue that branding, and sleek, aesthetically pleasing design qualifies some higher-end wireless earbud models as “jewelry.”
Sound bars have also improved and have respectable binaural surround sound capabilities, though they’re still inferior to traditional TV entertainment system setups where the sound speakers are placed at different points in the room. Sound bars are examples of single-point devices that can trick people into thinking sounds are originating from different points in space, and in spirit, I think they are a type of technology Kurzweil would cite as proof that his prediction was right.
The last part of Kurzweil’s prediction is wrong, since audio implants into the inner ears are still found only in people with hearing problems, which is the same as it was in 1998. More generally, people have shown themselves more reluctant to surgically implant technology in their bodies than Kurzweil seems to have predicted, but they’re happy to externally wear it or to carry it in a pocket.
“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication. “
MOSTLY WRONG
Rumors of the keyboard’s demise have been greatly exaggerated. Consider that, in 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs.
The research I’ve done suggests that the typical desktop, laptop, and ultramobile computer has a lifespan of four years. If we accept this, and also assume that the worldwide computer sales figures for 2015, 2016, and 2017 were the same as 2018’s, then it means there are 1.036 billion fully functional desktops, laptops, and ultramobile computers on the planet (about one for every seven people). By extension, that means there are at least 1.036 billion keyboards. No one could reasonably say that Kurzweil’s prediction that keyboards would be “rare” by 2019 is correct.
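The back-of-the-envelope arithmetic is easy to check (and to tweak, if you think my four-year lifespan assumption is too long or too short):

```python
annual_pc_sales = 259_000_000   # desktops, laptops, and ultramobiles sold in 2018
assumed_lifespan_years = 4      # assumption: each machine survives about four years

# If sales in 2015-2018 were all roughly equal to 2018's figure, the installed
# base is simply the last four years of sales added together.
installed_base = annual_pc_sales * assumed_lifespan_years
print(f"{installed_base:,} machines")                          # 1,036,000,000
print(f"about 1 per {round(7.7e9 / installed_base)} people")   # about 1 per 7 people
```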
The second sentence in Kurzweil’s prediction is harder to analyze since the meaning of “interaction with computing” is vague and hence subjective. As I wrote before, a Casio digital watch counts as a computer, so if it’s nighttime and I press one of its buttons to illuminate the display so I can see the time, does that count as an “interaction with computing”? Maybe.
If I swipe my thumb across my smartphone’s screen to unlock the device, does that count as an “interaction with computing” accomplished via a finger gesture? It could be argued so. If I then use my index finger to touch the Facebook icon on my smartphone screen to open the app, and then use a flicking motion of my thumb to scroll down over my News Feed, does that count as two discrete operations in which I used finger gestures to interact with computing?
You see where this is going…
Being able to set the bar that low makes it possible that this part of Kurzweil’s prediction is right, as unsatisfying as that conclusion may be.
Virtual reality gaming makes use of hand-held and hand-worn controllers that monitor the player’s hand positions and finger movements so the player can grasp and use objects in the virtual environment, like weapons and steering wheels. Such actions count as interactions with computing. The technology will only get more refined, and I can see it replacing older types of handheld game controllers.
Hand gestures, along with speech, are also the natural means to interface with augmented reality glasses since the devices have tiny surfaces available for physical contact, meaning you can’t fit a keyboard on a sunglass frame. Future AR glasses will have front-facing cameras that watch the wearer’s hands and fingers, allowing them to interact with virtual objects like buttons and computer menus floating in midair, and to issue direct commands to the glasses through specific hand motions. Thus, as AR glasses get more popular in the 2020s, so will the prevalence of this mode of interface with computers.
“Two-way natural-language spoken communication” is now a common and reliable means of interacting with computers, as anyone with a smart speaker like an Amazon Echo can attest. In fact, virtual assistants like Alexa, Siri, and Cortana can be accessed via any modern smartphone, putting this within reach of billions of people.
The last part of Kurzweil’s prediction, that people would be using “facial expressions” to communicate with their personal devices, is wrong. For what it’s worth, machines are gaining the ability to read human emotions through our facial expressions (including “microexpressions”) and speech. This area of research, called “affective computing,” is still stuck in the lab, but it will doubtless improve and find future commercial applications. Someday, you will be able to convey important information to machines through your facial expressions, tone of voice, and word choice just as you do to other humans now, enlarging your mode of interacting with “computing” to encompass those domains.
“Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Users can model the personality of their intelligent assistants on actual persons, including themselves…”
WRONG
The most widely used computer-based personal assistants–Alexa, Siri, and Cortana–don’t have “personalities” or simulated emotions. They always speak in neutral or slightly upbeat tones. Users can customize some aspects of their speech and responses (e.g., talking speed, gender, regional accent, language), and Alexa has limited “skill personalization” abilities that allow it to tailor some of its responses to the known preferences of the user interacting with it, but this is too primitive to count as a “personality adjustment” feature.
My research didn’t find any commercially available AI personal assistant that has something resembling a “human personality,” or that is capable of changing that personality. However, given current trends in AI research and natural language understanding, and growing consumer pressure on Silicon Valley to make products that better cater to the needs of nonwhite people, it is likely this will change by the end of this decade.
“Typically, people do not own just one specific ‘personal computer’…”
RIGHT
A 2019 Pew survey showed that 75% of American adults owned at least one desktop or laptop PC. Additionally, 81% of American adults owned a smartphone and 52% owned tablets, and both types of devices have all the key attributes of personal computers (advanced data storage and processing capabilities, audiovisual outputs, and acceptance of user inputs and commands).
The data from that and other late-2010s surveys strongly suggest that most of the Americans who don’t own personal computers are people over age 65, and that the 25% of Americans who don’t own traditional PCs are very likely to be part of the 19% that also lack smartphones, and also part of the 48% without tablets. The statistical evidence plus consistent anecdotal observations of mine lead me to conclude that the “typical person” in the U.S. owned at least two personal computers in late 2019, and that it was atypical to own fewer than that.
“Computing and extremely high-bandwidth communication are embedded everywhere.”
MOSTLY RIGHT
This is another prediction whose wording must be carefully parsed. What does it mean for computing and telecommunications to be “embedded” in an object or location? What counts as “extremely high-bandwidth”? Did Kurzweil mean “everywhere” in the literal sense, including the bottom of the Marianas Trench?
First, thinking about my example, it’s clear that “everywhere” was not meant to be taken literally. The term was a shorthand for “at almost all places that people typically visit” or “inside of enough common objects that the average person is almost always near one.”
Second, as discussed in my analysis of Kurzweil’s first 2019 prediction, a machine that is capable of doing “computing” is of course called a “computer,” and they are much more ubiquitous than most people realize. Pocket calculators, programmable thermostats, and even Casio digital watches count as computers. Even 30-year-old cars have computers inside of them. So yes, “computing” is “embedded ‘everywhere’” because computers are inside of many manmade objects we have in our homes and workplaces, and that we encounter in public spaces.
Of course, scoring that part of Kurzweil’s prediction as being correct leaves us feeling hollow since those devices don’t do the full range of useful things we associate with “computing.” However, as I noted in the previous prediction, 81% of American adults own smartphones, they keep them in their pockets or near their bodies most of the time, and smartphones have all the capabilities of general-purpose PCs. Smartphones are not “embedded” in our bodies or inside of other objects, but given their ubiquity, they might as well be. Kurzweil was right in spirit.
Third, the Wifi and mobile phone networks we use in 2019 are vastly faster at data transmission than the modems that were in use in 1999, when The Age of Spiritual Machines was published. At that time, the commonest way to access the internet was through a 33.6k dial-up modem, which could upload and download data at a maximum speed of 33,600 bits per second (bps), though upload speeds never got as close to that limit as download speeds. 56k modems had been introduced in 1998, but they were still expensive and less common, as were broadband alternatives like cable TV internet.
In 2019, standard internet service packages in the U.S. typically offered WiFi download speeds of 30,000,000 – 70,000,000 bps (my home WiFi speed is 30-40 Mbps, and I don’t have an expensive service plan). Mean U.S. mobile phone internet speeds were 33,880,000 bps for downloads and 9,750,000 bps for uploads. That’s a 1,000 to 2,000-fold speed increase over 1999, and it is all the more remarkable since today’s devices can traffic that much data without having to be physically plugged in to anything, whereas the PCs of 1999 had to be plugged into modems. And thanks to the wireless nature of internet data transmissions, “high-bandwidth communication” is available in all but the remotest places in 2019, whereas it was only accessible at fixed-place computer terminals in 1999.
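The speed-up factor is just a ratio of the figures above:

```python
dialup_bps = 33_600          # a 33.6k modem's top speed in 1999
wifi_low_bps = 30_000_000    # low end of a typical 2019 U.S. home WiFi plan
wifi_high_bps = 70_000_000   # high end of the same range

print(f"{wifi_low_bps / dialup_bps:,.0f}x to {wifi_high_bps / dialup_bps:,.0f}x faster")
# roughly 893x to 2,083x, i.e. a 1,000- to 2,000-fold increase
```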
Again, Kurzweil’s use of the term “embedded” is troublesome, since it’s unclear how “high-bandwidth communication” could be embedded in anything. It emanates from and is received by things, and it is accessible in specific places, but it can’t be “embedded.” Given this and the other considerations, I think every part of Kurzweil’s prediction was correct in spirit, but that he was careless with how he worded it, and that it would have been better written as: “Computing and extremely high-bandwidth communication are available and accessible almost everywhere.”
“Cables have largely disappeared.”
MOSTLY RIGHT
Assessing the prediction requires us to deduce which kinds of “cables” Kurzweil was talking about. To my knowledge, he has never been an exponent of wireless power transfer and has never forecast that technology becoming dominant, so it’s safe to say his prediction didn’t pertain to electric cables. Indeed, larger computers like desktop PCs and servers still need to be physically plugged into electrical outlets all the time, and smaller computing devices like smartphones and tablets need to be physically plugged in to routinely recharge their batteries.
That leaves internet cables and data/power cables for peripheral devices like keyboards, mice, joysticks, and printers. On the first count, Kurzweil was clearly right. In 1999, WiFi was a new invention that almost no one had access to, and logging into the internet always meant sitting down at a computer that had some type of data plug connecting it to a wall outlet. Cell phones weren’t able to connect to and exchange data with the internet, except maybe for very limited kinds of data transfers, and it was a pain to use the devices for that. Today, most people access the internet wirelessly.
On the second count, Kurzweil’s prediction is only partly right. Wireless keyboards and mice are widespread, affordable, and mature technologies, and even lower-cost printers meant for home use usually come with integrated wireless networking capabilities, allowing people in the house to remotely send document files to the devices to be printed. However, wireless keyboards and mice don’t seem about to displace their wired predecessors, nor would it even be fair to say that the older devices are obsolete. Wired keyboards and mice are cheaper (they are still included in the box whenever you buy a new PC), easier to use since users don’t have to change their batteries, and far less vulnerable to hacking. Also, though they’re “lower tech,” wired keyboards and mice impose no handicaps on users when they are part of a traditional desktop PC setup. Wireless keyboards and mice are only helpful when the user is trying to control a display that is relatively far away, as would be the case if the person were using their living room television as a computer monitor, or if a group of office workers were viewing content on a large screen in a conference room and one of them needed to control it or make complex inputs.
No one has found this subject interesting enough to compile statistics on the percentages of computer users who own wired vs. wireless keyboards and mice, but my own observation is that the older devices are still dominant.
And though average computer printers in 2019 have WiFi capabilities, the small “complexity bar” to setting up and using the WiFi capability makes me suspect that most people are still using a computer that is physically plugged into their printer to control the latter. These data cables could disappear if we wanted them to, but I don’t think they have.
This means that Kurzweil’s prediction that cables for peripheral computer devices would have “largely disappeared” by the end of 2019 was wrong. For what it’s worth, the part that he got right vastly outweighs the part he got wrong: The rise of wireless internet access has revolutionized the world by giving ordinary people access to information, services and communication at all but the remotest places. Unshackling people from computer terminals and letting them access the internet from almost anywhere has been extremely empowering, and has spawned wholly new business models and types of games. On the other hand, the world’s failure to fully or even mostly dispense with wired computer peripheral devices has been almost inconsequential. I’m typing this on a wired keyboard and don’t see any way that a more advanced, wireless keyboard would help me.
“The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” [Or 20 petaflops]
WRONG
Graphics cards provide the most calculations per second at the lowest cost of any type of computer processor. The NVIDIA GeForce RTX 2080 Ti Graphics Card is one of the fastest computers available to ordinary people in 2019. In “overclocked” mode, where it is operating as fast as possible, it does 16,487 billion calculations per second (called “flops”).
A GeForce RTX 2080 Ti retails for $1,100 and up, but let’s be a little generous to Kurzweil and assume we’re able to get them for $1,000 apiece.
$4,000 in 1999 dollars equals $6,164 in 2019 dollars. That means today, we can buy 6.164 GeForce RTX 2080 Ti graphics cards for the amount of money Kurzweil specified.
6.164 cards x 16,487 billion calculations per second per card = 101,625 billion calculations per second for the whole rig.
This computational cost-performance level is two orders of magnitude worse than Kurzweil predicted.
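Here is the whole cost-performance calculation in one place, using the numbers cited above (the $1,000-per-card price is the deliberately generous assumption):

```python
# Kurzweil's benchmark: 20 petaflops for $4,000 in 1999 dollars.
target_flops = 20e15            # 20 million billion calculations per second
budget_2019_dollars = 6_164     # $4,000 in 1999 dollars, inflation-adjusted

card_flops = 16_487e9           # RTX 2080 Ti, overclocked
card_price = 1_000              # generous assumption; retail is $1,100 and up

cards_affordable = budget_2019_dollars / card_price   # 6.164 cards
rig_flops = cards_affordable * card_flops
print(f"{rig_flops:.3e} flops")                        # ~1.016e14, i.e. ~101,625 billion flops
print(f"shortfall: {target_flops / rig_flops:.0f}x")   # ~197x, about two orders of magnitude
```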
Additionally, according to Top500.org, a website that keeps a running list of the world’s best supercomputers and their performance levels, the “Leibniz Rechenzentrum SuperMUC-NG” is the ninth fastest computer in the world and the fastest in Germany, and straddles Kurzweil’s line since it runs at 19.4 petaflops or 26.8 petaflops depending on method of measurement (“Rmax” or “Rpeak”). A press release said: “The total cost of the project sums up to 96 Million Euro [about $105 million] for 6 years including electricity, maintenance and personnel.” That’s about four orders of magnitude worse than Kurzweil predicted.
I guess the good news is that at least we finally do have computers that have the same (or slightly more) processing power as a single, average, human brain, even if the computers cost tens of millions of dollars apiece.
“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”
WRONG
Kurzweil explains his calculations in the “Notes” section in the back of the book. He first multiplies the computation performed by one human brain by the estimated number of humans who will be alive in 2019 to get the “total computing capacity of the human species.” Confusingly, his math assumes one human brain does 10 petaflops, whereas in his preceding prediction he estimates it is 20 petaflops. He also assumed 10 billion people would be alive in 2019, but the figure fell mercifully short and was ONLY 7.7 billion by the end of the year.
Plugging in the correct figure, we get (7.7 x 10^9 humans) x 10^16 flops = 7.7 x 10^25 flops = the actual total computing capacity of all human brains in 2019.
Determining the total computing capacity of all computers in existence in 2019 can only really be guessed at. Kurzweil estimated that at least 1 billion machines would exist in 2019, and he was right. Gartner estimated that 261 million PCs (which includes desktop PCs, notebook computers [seems to include laptops], and “ultramobile premiums”) were sold globally in 2019. The figures for the preceding three years were 260 million (2018), 263 million (2017), and 270 million (2016). Assuming that a newly purchased personal computer survives for four years before being fatally damaged or thrown out, we can estimate that there were 1.05 billion of the machines in the world at the end of 2019.
However, Kurzweil also assumed that the average computer in 2019 would be as powerful as a human brain, and thus capable of 10 petaflops, but reality fell far short of the mark. As I revealed in my analysis of the preceding prediction, a 10 petaflop computer setup would cost either $606,543 in GeForce RTX 2080 Ti graphics cards or $52.5 million for half a Leibniz Rechenzentrum SuperMUC-NG supercomputer. None of the people who own the 1.05 billion personal computers in the world spent anywhere near that much money, and their machines are far less powerful than human brains.
Let’s generously assume that all of the world’s 1.05 billion PCs are higher-end (for 2019) desktop computers that cost $900 – $1,200. Everyone’s machine has an 8th-generation Intel Core i7 processor, which offers speeds of a measly 361.3 gigaflops (3.613 x 10^11 flops). A 10 petaflop human brain is 27,678 times faster!
Plugging in the computer figures, we get (1.05 x 10^9 personal computers) x 3.613 x 10^11 flops = 3.794 x 10^20 flops = the total computing capacity of all personal computers in 2019. That’s five orders of magnitude short. The reality of 2019 computing definitely fell far short of Kurzweil’s expectations.
What if we add the computing power of all the world’s smartphones to the picture? Approximately 3.2 billion people owned a smartphone in 2019. Let’s assume all the devices are higher-end (for 2019) iPhone XRs, which everyone bought new for at least $500. The iPhone XR’s A12 Bionic processor is, according to my research, capable of 700 – 1,000 gigaflop maximum speeds. Let’s take the higher-end estimate and do the math.
3.2 x 10^9 smartphones x 10^12 flops = 3.2 x 10^21 flops = the total computing capacity of all smartphones in 2019.
Adding things up, pretty much all of the world’s personal computing devices (desktops, laptops, smartphones, netbooks) only produce 3.5794 x 10^21 flops of computation. That’s still more than three orders of magnitude short of what Kurzweil predicted. Even if we assume that my calculations were too conservative, and we add in commercial computers (e.g., servers and supercomputers), and find that the real amount of artificial computation is ten times higher than I thought, at 3.5794 x 10^22 flops, this would still only be equivalent to 1/2,000th, or 0.05%, of the total computing capacity of all human brains (7.7 x 10^25 flops). Thus, Kurzweil’s prediction that it would be 10% by 2019 was very wrong.
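For anyone who wants to audit or adjust these estimates, here they are as a single short script. Every input is one of the assumptions spelled out above, so swapping in your own figures is easy:

```python
# Total human computation (using Kurzweil's own 10-petaflop-per-brain figure)
humans = 7.7e9
brain_flops = 1e16
human_total = humans * brain_flops        # 7.7e25 flops

# Personal computers: ~1.05 billion machines, generously assumed to all carry
# an 8th-generation Core i7 (~361.3 gigaflops each)
pcs = 1.05e9
pc_flops = 3.613e11
pc_total = pcs * pc_flops                 # ~3.794e20 flops

# Smartphones: ~3.2 billion, all generously assumed to be iPhone XRs
# running flat-out at ~1 teraflop
phones = 3.2e9
phone_flops = 1e12
phone_total = phones * phone_flops        # 3.2e21 flops

machine_total = pc_total + phone_total    # ~3.58e21 flops
print(f"machine share of human capacity: {machine_total / human_total:.4%}")  # ~0.0046%
# Kurzweil predicted the nonhuman share would exceed 10 percent.
```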
“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”
WRONG
For those who don’t know much about computers, the prediction says that rotating disk hard drives will be replaced with solid-state storage, which has no moving parts. A thumbdrive uses solid-state memory, as do all smartphones and tablet computers.
I gauged the accuracy of this prediction through a highly sophisticated and ingenious method: I went to the nearest Wal-Mart and looked at the computers they had for sale. Two of the mid-priced desktop PCs had rotating disk hard drives, and they also had DVD disc drives, which was surprising, and which probably makes the “other electromechanical computing devices” part of the prediction false.
If the world’s biggest brick-and-mortar retailer is still selling brand new computers with rotating hard disk drives and rotating DVD disc drives, then it can’t be said that solid state memory storage has “fully replaced” the older technology.
“Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.”
MOSTLY WRONG
Many solid-state computer memory chips, such as common thumbdrives and MicroSD cards, have 3D circuitry, and it is accurate to call them “prevalent.” However, 3D circuitry has not found routine use in computer processors thanks to unsolved problems with high manufacturing costs, unacceptably high defect rates, and overheating.
In late 2018, Intel claimed it had overcome those problems thanks to a proprietary chip manufacturing process, and that it would start selling the resulting “Lakefield” line of processors soon. These processors have four vertically stacked layers, so they meet the requirement for being “3D.” Intel hasn’t sold any yet, and it remains to be seen whether they will be commercially successful.
Silicon is still the dominant computer chip substrate, and carbon-based nanotubes haven’t been incorporated into chips because Intel and AMD couldn’t figure out how to cheaply and reliably fashion them into chip features. Nanotube computers are still experimental devices confined to labs, and they are grossly inferior to traditional silicon-based computers when it comes to doing useful tasks. Nanotube computer chips that are also 3D will not be practical anytime soon.
It’s clear that, in 1999, Kurzweil simply overestimated how much computer hardware would improve over the next 20 years.
“The majority of ‘computes’ of computers are now devoted to massively parallel neural nets and genetic algorithms.”
UNCLEAR
Assessing this prediction is hard because it’s unclear what the term “computes” means. It is probably shorthand for “compute cycles,” which is a term that describes the sequence of steps to fetch a CPU instruction, decode it, access any operands, perform the operation, and write back any result. It is a process that is more complex than doing a calculation, but that is still very basic. (I imagine that computer scientists are the only people who know, offhand, what “compute cycle” means.)
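To make the concept concrete, here is a toy fetch-decode-execute loop for an imaginary one-register machine. It is purely an illustration of what a single “compute cycle” involves, not how any real CPU is built:

```python
# A toy machine: each "instruction" is a (opcode, operand) pair.
program = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", None)]
accumulator = 0
program_counter = 0

while True:
    opcode, operand = program[program_counter]   # fetch the instruction
    program_counter += 1
    if opcode == "LOAD":                         # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)   # 10 -- each pass through the loop is one "compute cycle"
```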
Assuming “computes” means “compute cycles,” I have no idea how to quantify the number of compute cycles that happened, worldwide, in 2019. It’s an even bigger mystery to me how to determine which of those compute cycles were “devoted to massively parallel neural nets and genetic algorithms.” Kurzweil doesn’t describe a methodology that I can copy.
Also, what counts as a “massively parallel neural net”? How many processor cores does a neural net need to have to be “massively parallel”? What are some examples of non-massively parallel neural nets? Again, an ambiguity with the wording of the prediction frustrates an analysis. I’d love to see Kurzweil assess the accuracy of this prediction himself and to explain his answer.
“Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets.”
PARTLY RIGHT
The use of the ambiguous adjective “significant” gives Kurzweil an escape hatch for the first part of this prediction. Since 1999, brain scanning technology has improved, and the body of scientific literature about how brain activity correlates with brain function has grown. Additionally, much has been learned by studying the brain at a macro-level rather than at a cellular level. For example, in a 2019 experiment, scientists were able to accurately reconstruct the words a person was speaking by analyzing data from the person’s brain implant, which was positioned over their auditory cortex. Earlier experiments showed that brain-computer-interface “hats” could do the same, albeit with less accuracy. It’s fair to say that these and other brain-scanning studies represent “significant progress” in understanding how parts of the human brain work, and that the machines were gathering data at the level of “brain regions” rather than at the finer level of individual brain cells.
Yet in spite of many tantalizing experimental results like those, an understanding of how the brain produces cognition has remained frustratingly elusive, and we have not extracted any new algorithms for intelligence from the human brain in the last 20 years that we’ve been able to incorporate into software to make machines smarter. The recent advances in deep learning and neural network computers–exemplified by machines like AlphaZero–use algorithms invented in the 1980s or earlier, just running on much faster computer hardware (specifically, on graphics processing units originally developed for video games).
If anything, since 1999, researchers who studied the human brain to gain insights that would let them build artificial intelligences have come to realize how much more complicated the brain was than they first suspected, and how much harder of a problem it would be to solve. We might have to accurately model the brain down to the intracellular level (e.g., simulating not just neurons but also their surface receptors and ion channels) to finally grasp how it works and produces intelligent thought. Considering that the best we have done up to this point is mapping the connections of a fruit fly brain, and that a human brain is 600,000 times bigger, we won’t have a detailed human brain simulation for many decades.
“It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.”
RIGHT
This prediction is right, but it’s not noteworthy since it merely re-states things that were widely accepted and understood to be true when the book was published in 1999. It’s akin to predicting that “A thing we think is true today will still be considered true in 20 years.”
The prediction’s first statement is an odd one to make since it implies that there was ever serious debate among brain scientists and geneticists over whether the human genome encoded every detail of how the human brain is wired. As Kurzweil points out earlier in the book, the human genome is only about 3 billion base-pairs long, and the genetic information it contains could be as low as 23 megabytes, but a developed human brain has 100 billion neurons and 10^15 connections (synapses) between those neurons. Even if Kurzweil is underestimating the amount of information the human genome stores by several orders of magnitude, it clearly isn’t big enough to contain instructions for every aspect of brain wiring, and therefore, it must merely lay down more general rules for brain development.
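A rough information-theoretic comparison shows just how lopsided the numbers are. The 23-megabyte figure is Kurzweil’s own compressed estimate; everything else is simple counting:

```python
import math

genome_bits = 23e6 * 8                      # ~23 MB of compressed genetic information
neurons = 1e11
synapses = 1e15

# Merely naming the target neuron of each synapse takes ~37 bits per synapse.
bits_per_synapse = math.log2(neurons)       # ~36.5 bits
wiring_bits = synapses * bits_per_synapse   # ~3.7e16 bits

print(f"genome: {genome_bits:.2e} bits, explicit wiring diagram: {wiring_bits:.2e} bits")
print(f"shortfall: about {wiring_bits / genome_bits:.0e}x")   # ~2e8, eight orders of magnitude
```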
I also don’t understand why Kurzweil wrote the second part of the statement. It’s commonly recognized that part of childhood brain development involves the rapid pruning of interneuronal connections that, based on interactions with the child’s environment, prove less useful, and the strengthening of connections that prove more useful. It would be apt to describe this as “a rapid evolutionary process” since the child’s brain is rewiring to adapt the child to its surroundings. This mechanism of strengthening brain connection pathways that are rewarded or frequently used, and weakening pathways that result in some kind of misfortune or that are seldom used, continues until the end of a person’s life (though it gets less effective as they age). This paradigm was “recognized” in 1999 and has never been challenged.
Machine-based neural nets are, in a very general way, structured like the human brain, they also rewire themselves in response to stimuli, and some of them use genetic algorithms to guide the rewiring process (see this article for more info: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414). However, all of this was also true in 1999.
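For readers unfamiliar with the idea, here is a deliberately tiny sketch of a genetic algorithm “wiring” a one-neuron net to compute the logical OR function. Candidate weight sets compete, the fittest survive, and mutated copies replace the losers. Real systems are enormously more sophisticated, but the survival-of-the-fittest loop is the same in spirit:

```python
import random

# A tiny "neural net": one neuron with two weights and a bias, evolved
# (rather than trained by backpropagation) to approximate logical OR.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def output(genome, x):
    w1, w2, b = genome
    return 1 if (w1 * x[0] + w2 * x[1] + b) > 0 else 0

def fitness(genome):
    return sum(output(genome, x) == y for x, y in DATA)

def mutate(genome):
    return tuple(w + random.gauss(0, 0.5) for w in genome)

# Connections "fight for survival": keep the fittest genomes, mutate them, repeat.
population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best), "out of", len(DATA), "correct")
```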
“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”
WRONG
Devices that harness the principle of quantum entanglement to create images of distant objects do exist and are better than devices from 1999, but they aren’t good enough to exit the R&D labs. They also have not been shrunk to pinhead sizes. Kurzweil overestimated how fast this technology would develop.
Virtually all cameras still have lenses, and still operate by the old method of focusing incoming light onto a physical medium that captures the patterns and colors of that light to form a stored image. The physical medium used to be film, but now it is a digital image sensor.
Digital cameras were expensive, clunky, and could only take low-quality images in 1999, so most people didn’t think they were worth buying. Today, all of those deficiencies have been corrected, and a typical digital camera sensor plus its integrated lens is the size of a small coin. As a result, the devices are very widespread: 3.2 billion people owned a smartphone in 2019, and all of them probably had integral digital cameras. Laptops and tablet computers also typically have integral cameras. Small standalone devices, like pocket cameras, webcams, car dashcams, and home security doorbell cameras, are also cheap and very common. And as any perusal of YouTube.com will attest, people are using their cameras to record events of all kinds, all the time, and are sharing them with the world.
This prediction stands out as one that was wrong in specifics, but kind of right in spirit. Yes, since 1999, cameras have gotten much smaller, cheaper, and higher-quality, and as a result, they are “everywhere” in the figurative sense, with major consequences (good and bad) for the world. Unfortunately, Kurzweil needlessly stuck his neck out by saying that the cameras would use an exotic new technology, and that they would be “pinhead-sized” (he hurt himself the same way by saying that the augmented reality glasses of 2019 would specifically use retinal projection). For those reasons, his prediction must be judged as “wrong.”
“Autonomous nanoengineered machines can control their own mobility and include significant computational engines. These microscopic machines are beginning to be applied to commercial applications, particularly in manufacturing and process control, but are not yet in the mainstream.”
WRONG
While there has been significant progress in nano- and micromachine technology since 1999 (the 2016 Nobel Prize in Chemistry was awarded to scientists who had invented nanomachines), the devices have not gotten nearly as advanced as Kurzweil predicted. Some microscopic machines can move around, but the movement is guided externally rather than autonomously. For example, turtle-like micromachines invented by Dr. Marc Miskin in 2019 can move by twirling their tiny “flippers,” but the motion is powered by shining laser beams on them to expand and contract the metal in the flippers. The micromachines lack their own power packs, lack computers that tell the flippers to move, and therefore aren’t autonomous.
In 2003, UCLA scientists invented “nano-elevators,” which were also capable of movement and still stand as some of the most sophisticated types of nanomachines. However, they also lacked onboard computers and power packs, and were entirely dependent on external control (the addition of acidic or basic liquids to make their molecules change shape, resulting in motion). The nano-elevators were not autonomous.
Similarly, a “nano-car” was built in 2005, and it can drive around a flat plate made of gold. However, the movement is uncontrolled and only happens when an external stimulus–an input of high heat into the system–is applied. The nano-car isn’t autonomous or capable of doing useful work. This and all the other microscopic machines created up to 2019 are just “proof of concept” machines that demonstrate mechanical principles that will someday be incorporated into much more advanced machines.
Significant progress has been made since 1999 building working “molecular motors,” which are an important class of nanomachine, and building other nanomachine subcomponents. However, this work is still in the R&D phase, and we are many years (probably decades) from being able to put it all together to make a microscopic machine that can move around under its own power and will, and perform other operations. The kinds of microscopic machines Kurzweil envisioned don’t exist in 2019, and by extension are not being used for any “commercial applications.”
“Hand-held displays are extremely thin, very high resolution, and weigh only ounces.”
RIGHT
The tablet computers and smartphones of 2019 meet these criteria. For example, the Samsung Galaxy Tab S5 is only 0.22″ thick, has a resolution high enough that the human eye can’t discern individual pixels at normal viewing distances (3840 x 2160 pixels), and weighs 14 ounces (under 1 pound, so ounces are the appropriate unit of measurement). Tablets like this are of course meant to be held in the hands during use.
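Whether individual pixels are discernible depends on pixel density and viewing distance. A quick calculation, assuming a 10.5-inch, 16:9 panel at the resolution quoted above held 15 inches from the eyes, shows the claim holds with room to spare:

```python
import math

# Assumptions for illustration only: a 10.5-inch, 16:9 tablet panel at the
# resolution quoted above, held 15 inches from the eyes.
width_px, height_px = 3840, 2160
diagonal_inches = 10.5
viewing_distance_inches = 15

ppi = math.hypot(width_px, height_px) / diagonal_inches
inches_per_degree = 2 * viewing_distance_inches * math.tan(math.radians(0.5))
pixels_per_degree = ppi * inches_per_degree

print(f"{ppi:.0f} PPI, {pixels_per_degree:.0f} pixels per degree")
# Normal (20/20) vision resolves roughly 60 pixels per degree, so at ~110
# pixels per degree, individual pixels are not discernible at this distance.
```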
The smartphones of 2019 also meet Kurzweil’s criteria.
“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.”
MOSTLY WRONG
A careful reading of this prediction makes it clear that Kurzweil believed AR glasses would be commonest way people would read text documents by late 2019. The second most common method would be to read the documents off of smartphones and tablet computers. A distant last place would be to read old-fashioned books with paper pages. (Presumably, reading text off of a laptop or desktop PC monitor was somewhere between the last two.)
The first part of the prediction is badly wrong. At the end of 2019, there were fewer than 1 million sets of AR glasses in use around the world. Even if all of their owners were bibliophiles who spent all their waking hours using their glasses to read documents that were projected in front of them, it would be mathematically impossible for that to constitute the #1 means by which the human race, in aggregate, read written words.
Certainly, it is now much more common for people to read documents on handheld displays like smartphones and tablets than at any time in the past, and paper’s dominance of the written medium is declining. Additionally, there are surely millions of Americans who, like me, do the vast majority of their reading (whether for leisure or work) off of electronic devices and computer screens. However, old-fashioned print books, newspapers, magazines, and packets of workplace documents are far from extinct, and it is inaccurate to claim they “are rarely used or accessed,” both in the relative and absolute senses of the statement. As the bar chart above shows, sales of print books were actually slightly higher in 2019 than they were in 2004, which was near the time when The Age of Spiritual Machines was published.
Finally, sales of “graphic paper”–which is an industry term for paper used in newsprint, magazines, office printer paper, and other common applications–were still high in 2019, even if they were trending down. If 110 million metric tons of graphic paper were sold in 2019, then it can’t be said that “Paper books and documents are rarely used or accessed.” Anecdotally, I will say that, though my office primarily uses all-digital documents, it is still common to use paper documents, and in fact it is sometimes preferable to do so.
“Most twentieth-century paper documents of interest have been scanned and are available through the wireless network.”
RIGHT
The wording again makes it impossible to gauge the prediction’s accuracy. What counts as a “paper document”? For sure, we can say it includes bestselling books, newspapers of record, and leading science journals, but what about books that only sold a few thousand copies, small-town newspapers, and third-tier science journals? Are we also counting the mountains of government reports produced and published worldwide in the last century, mostly by obscure agencies and about narrow, bland topics? Equally defensible answers could result in document numbers that are orders of magnitude different.
Also, the term “of interest” provides Kurzweil with an escape hatch because its meaning is subjective. If it were the case that electronic scans of 99% of the books published in the twentieth century were NOT available on the internet in 2019, he could just say “Well, that’s because those books aren’t of interest to modern people” and he could then claim he was right.
It would have been much better if the prediction included a specific metric, like: “By the end of 2019, electronic versions of at least 1 million full-length books written in the twentieth century will be available through the wireless network.” Alas, it doesn’t, and Kurzweil gets this one right on a technicality.
For what it’s worth, I think the prediction was also right in spirit. Millions of books are now available to read online, and that number includes most of the 20th century books that people in 2019 consider important or interesting. One of the biggest repositories of e-books, the “Internet Archive,” has 3.8 million scanned books, and they’re free to view. (Google actually scanned 25 million books with the intent to create something like its own virtual library, but lawsuits from book publishers have put the project into abeyance.)
The New York Times, America’s newspaper of record, has made scans of every one of its issues since its founding in 1851 available online, as have other major newspapers such as the Washington Post. The cursory research I’ve done suggests that all or almost all issues of the biggest American newspapers are now available online, either through company websites or third party sites like newspapers.com.
The U.S. National Archives has scanned over 92 million pages of government documents and made them available online. Priority was given to scanning the documents most requested by researchers and members of the public, so it could easily be the case that most twentieth-century U.S. government paper documents of interest have been scanned. Additionally, in two years the Archives will start requiring all U.S. agencies to submit ONLY digital records, eliminating the very cumbersome middle step of scanning paper and thenceforth ensuring that government records become available to and easily searchable by the public right away.
The New England Journal of Medicine, the journal Science, and the journal Nature all offer scans of past issues dating back to their foundings in the 1800s. I lack the time to check whether this is also true for other prestigious academic journals, but I strongly suspect it is. All of the seminal papers documenting the significant scientific discoveries of the 20th century are now available online.
Without a doubt, the internet and a lot of diligent people scanning old books and papers have improved the public’s access to written documents and information by orders of magnitude compared to 1998. It truly is a different world.
“Most learning is accomplished using intelligent software-based simulated teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The teachers are viewed more as mentors and counselors than as sources of learning and knowledge.”
WRONG*
The technology behind and popularity of online learning and AI teachers didn’t advance as fast as Kurzweil predicted. At the end of 2019, traditional in-person instruction was far more common than and was widely considered to be superior to online learning, though the latter had niche advantages.
However, shortly after 2019 ended, the COVID-19 pandemic forced most of the world into quarantine in an effort to slow the virus’ spread. Schools, workplaces, and most other places where people usually gathered were shut down, and people the world over were forced to do everyday activities remotely. American schools and universities switched to online classrooms in what might be looked at as the greatest social experiment of the decade. For better or worse, most human teachers were no longer in the local vicinity of their students.
Thus, part of Kurzweil’s prediction came true, a few months late and as an unwelcome emergency measure rather than as a voluntary embrace of a new educational paradigm. Unfortunately, student reactions to online learning have been mostly negative. A 2020 survey found that most college students believed it was harder to absorb knowledge and to learn new skills through online classrooms than it was through in-person instruction. Almost all of them unsurprisingly said that traditional classroom environments were more useful for developing social skills. The survey data I found on the attitudes of high school students showed that most of them considered distance learning to be of inferior quality. Public school teachers and administrators across the country reported higher rates of student absenteeism when schools switched to 100% online instruction, and their support for it measurably dropped as time passed.
The COVID-19 lockdowns have made us confront hard truths about virtual learning. It hasn’t been the unalloyed good that Kurzweil seems to have expected, though technological improvements that make the experience more immersive (e.g., faster internet to reduce lag, virtual reality headsets) will surely solve some of the problems that have come to light.
“Students continue to gather together to exchange ideas and to socialize, although even this gathering is often physically and geographically remote.”
RIGHT
As I described at length, traditional in-person classroom instruction remained the dominant educational paradigm in late 2019, which of course means that students routinely gathered together for learning and socializing. The second part of the prediction is also right, since social media, cheaper and better computing devices and internet service, and videophone apps have made it much more common for students of all ages to study, work, and socialize together virtually than they did in 1998.
“All students use computation. Computation in general is everywhere, so a student’s not having a computer is rarely an issue.”
MOSTLY RIGHT
First, Kurzweil’s use of “all” was clearly figurative and not literal. If pressed on this back in 1998, surely he would have conceded that even in 2019, students living in Amish communities, living under strict parents who were paranoid technophobes, or living in the poorest slums of the poorest or most war-wrecked country would not have access to computing devices that had any relevance to their schooling.
Second, note the use of “computation” and “computer,” which are very broad in meaning. As I wrote earlier, “A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is…something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer.”
With these two caveats in mind, it’s clear that “all students use computation” by default since all people except those in the most deprived environments routinely interact with computing devices. It is also true that “computation in general is everywhere,” and the prediction merely restates this earlier prediction: “Computers are now largely invisible. They are embedded everywhere…” In the most literal sense, most of the prediction is correct.
However, a judgement is harder to make if we consider whether the spirit of the prediction has been fulfilled. In context, the prediction’s use of “computation” and “computer” surely refers to devices that let students efficiently study materials, watch instructional videos, and do complex school assignments like writing essays and completing math equations. These devices would have also required internet access to perform some of those key functions. At least in the U.S., virtually all schools in late 2019 have computer terminals with speedy internet access that students can use for free. A school without either of those would be considered very unusual. Likewise, almost all of the country’s public libraries have public computer terminals and internet service (and, of course, books), which people can use for their studies and coursework if they don’t have computers or internet in their homes.
At the same time, 17% of students in the U.S. still don’t have computers in their homes and 18% have no internet access or very slow service (there’s probably large overlap between people in those two groups). Mostly this is because they live in remote areas where it isn’t profitable for telecom companies to install high-speed internet lines, or because they belong to extremely poor or disorganized households. This lack of access to computers and internet service results in measurably worse academic performance, a phenomenon called the “homework gap” or the “digital gap.” With this in mind, it’s questionable whether the prediction’s last claim, that “a student’s not having a computer is rarely an issue” has come true.
“Most adult human workers spend the majority of their time acquiring new skills and knowledge.”
WRONG
This is so obviously wrong that I don’t need to present any data or studies to support my judgement. With a tiny number of exceptions, employed adults spend most of their time at work using the same skills over and over to do the same set of tasks. Yes, today’s jobs are more knowledge-based and technology-based than ever before, and a greater share of jobs require formal degrees and training certificates than ever, but few professions are so complex or fast-changing that workers need to spend most of their time learning new skills and knowledge to keep up.
In fact, since The Age of Spiritual Machines was published, a backlash against the high costs and necessity of postsecondary education–at least as it is in America–has arisen. Sentiment is growing that the four-year college degree model is wasteful, obsolete for most purposes, and leaves young adults saddled with debts that take years to repay. Sadly, I doubt these critics will succeed in bringing about serious reforms to the system.
If and when we reach the point where a postsecondary degree is needed just to get a respectable entry-level job, and then merely keeping that job or moving up to the next rung on the career ladder requires workers to spend more than half their time learning new skills and knowledge–whether due to competition from machines that keep getting better and taking over jobs, or due to the frequent introduction of new technologies that human workers must learn to use–then I predict a large share of humans will become chronically demoralized and will drop out of the workforce. This is a phenomenon I call “job automation escape velocity,” and I intend to discuss it at length in a future blog post.
“Blind persons routinely use eyeglass-mounted reading-navigation systems, which incorporate the new, digitally controlled, high-resolution optical sensors. These systems can read text in the real world, although since most print is now electronic, print-to-speech reading is less of a requirement. The navigation function of these systems, which emerged about ten years ago, is now perfected. These automated reading-navigation assistants communicate to blind users through both speech and tactile indicators. These systems are also widely used by sighted persons since they provide a high-resolution interpretation of the visual world.”
PARTLY RIGHT
As stated previously, AR glasses have not yet been successful on the commercial market and are used by almost no one, blind or sighted. However, there are smartphone apps meant for blind people that use the phone’s camera to scan what is in front of the person, and they have the range of functions Kurzweil described. For example, the “Seeing AI” app can recognize text and read it out loud to the user, and can recognize common objects and familiar people and verbally describe or name them.
Additionally, there are other smartphone apps, such as “BlindSquare,” which use GPS and detailed verbal instructions to guide blind people to destinations. It also describes nearby businesses and points of interest, and can warn users of nearby curbs and stairs.
Apps that are made specifically for blind people are not in wide usage among sighted people.
“Retinal and vision neural implants have emerged but have limitations and are used by only a small percentage of blind persons.”
MOSTLY RIGHT
Retinal implants exist and can restore limited vision to people with certain types of blindness. However, they provide only a very coarse level of sight, are expensive, and require the use of body-worn accessories to collect, process, and transmit visual data to the eye implant itself. The “Argus II” device is the only retinal implant system available in the U.S., and the FDA approved it in 2013. As of this writing, the manufacturer’s website claimed that only 350 blind people worldwide used the systems, which indeed counts as “only a small percentage of blind persons.”
The meaning of “vision neural implants” is unclear, but could only refer to devices that connect directly to a blind person’s optic nerve or brain vision cortex. While some human medical trials are underway, none of the implants have been approved for general use, nor does that look poised to change.
“Deaf persons routinely read what other people are saying through the deaf persons’ lens displays.”
MOSTLY WRONG
“Lens displays” is clearly referring to those inside augmented reality glasses and AR contact lenses, so the prediction says that a person wearing such eyewear would be able to see speech subtitles across his or her field of vision. While there is at least one model of AR glasses–the Vuzix Blade–that has this capability, almost no one uses them because, as I explored earlier in this review, AR glasses failed on the commercial market. By extension, this means the prediction also failed to come true since it specified that deaf people would “routinely” wear AR glasses by 2019.
However, in the prediction’s defense, deaf people commonly use real-time speech-to-text apps on their smartphones. While not as convenient as having captions displayed across one’s field of view, it still makes communication with non-deaf people who don’t know sign language much easier. Google, Apple, and many other tech companies have fielded high-quality apps of this nature, some of which are free to download. Deaf people can also type words into their smartphones and show them to people who can’t understand sign language, which is easier than the old-fashioned method of writing things down on notepad pages and slips of paper.
Additionally, video chat / video phone technology is widespread and has been a boon to deaf people. By allowing callers to see each other, video calls let deaf people remotely communicate with each other through sign language, facial expressions and body movements, letting them experience levels of nuanced dialog that older text-based messaging systems couldn’t convey. Video chat apps are free or low-cost, and can deliver high-quality streaming video, and the apps can be used even on small devices like smartphones thanks to their forward-facing cameras.
In conclusion, while the specifics of the prediction were wrong, the general sentiment that new technologies, specifically portable devices, would greatly benefit deaf people was right. Smartphones, high-speed internet, and cheap webcams have made deaf people far more empowered in 2019 than they were in 1998.
“There are systems that provide visual and tactile interpretations of other auditory experiences such as music, but there is debate regarding the extent to which these systems provide an experience comparable to that of a hearing person.”
RIGHT
There is an Apple phone app called “BW Dance” meant for the deaf that converts songs into flashing lights and vibrations that are said to approximate the notes of the music. However, there is little information about the app and it isn’t popular, which makes me think deaf people have not found it worthy of buying or talking about. Though apparently unsuccessful, the existence of the BW Dance app meets all the prediction’s criteria. The prediction says nothing about whether the “systems” will be popular among deaf people by 2019–it just says the systems will exist.
That’s probably an unsatisfying answer, so let me mention some additional research findings. A company called “Not Impossible Labs” sells body suits designed for deaf people that convert songs into complex patterns of vibrations transmitted into the wearer’s body through 24 different touch points. The suits are well-reviewed, and it’s easy to believe that they’d provide a much richer sensory experience than a buzzing smartphone with the BW Dance app would. However, the suits lack any sort of displays, meaning they don’t meet the criterion of providing users a visual interpretation of songs.
There are many “music visualization” apps that create patterns of shapes, colors, and lines to convey the musical structures of songs, and some deaf people report they are useful in that role. It would probably be easy to combine a vibrating body suit with AR glasses to provide wearers with immersive “visual and tactile interpretations” of music. The technology exists, but the commercial demand does not.
“Cochlear and other implants for improving hearing are very effective and are widely used.”
RIGHT
Since receiving FDA approval in 1984, cochlear implants have significantly improved in quality and have become much more common among deaf people. While the level of benefit varies widely from one user to another, the average user ends up hearing well enough to carry on a phone conversation in a quiet room. That means cochlear implants are “very effective” for most people who use them, since the alternative is usually having no sense of hearing at all. Cochlear implants are in fact so effective that they’ve spurred fears among deaf people that they will eradicate Deaf culture and end the use of sign language, leading some deaf people to reject the devices even though their senses would benefit.
Other types of implants for improving hearing also exist, including middle ear implants, bone-anchored hearing aids, and auditory brainstem implants. While some of these alternatives are more optimal for people with certain hearing impairments, they haven’t had the same impact on the Deaf community as cochlear implants.
“Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.”
WRONG
Paraplegics and quadriplegics use the same wheelchairs they did in 1998, and they can only traverse stairs that have electronic lift systems. As noted in my Prometheus review, powered exoskeletons exist today, but almost no one uses them, probably due to very high costs and practical problems. Some rehabilitation clinics for people with spinal cord and leg injuries use therapeutic techniques in which the disabled person’s legs and spine are connected to electrodes that activate in sequences that assist them to walk, but these nerve and muscle stimulation devices aren’t used outside of those controlled settings. To my knowledge, no one has built the sort of prosthesis that Kurzweil envisioned, which was a powered exoskeleton that also had electrodes connected to the wearer’s body to stimulate leg muscle movements.
“Generally, disabilities such as blindness, deafness, and paraplegia are not noticeable and are not regarded as significant.”
WRONG (sadly)
As noted, technology has not improved the lives of disabled people as much as Kurzweil predicted it would between 1998 and 2019. Blind people still need to use walking canes, most deaf people don’t have hearing implants of any sort (and if they do, their hearing is still much worse than average), and paraplegics still use wheelchairs. Their disabilities are often noticeable at a glance, and always after a few moments of face-to-face interaction.
Blindness, deafness, and paraplegia still have many significant negative impacts on people afflicted with them. As just one example, employment rates and average incomes for working-age people with those infirmities are all lower than they are for people without. In 2019, the U.S. Social Security program still viewed those conditions as disabilities and paid welfare benefits to people with them.
“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”
PARTLY RIGHT
While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”
“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”
WRONG
As stated in previous installments of this analysis, the computerized glasses, goggles and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” that would have allowed users to see simulated 3D images of people and other things in their proximities. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally do 2D video calls using cameras and traditional screen displays.
Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how it’s possible my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by people on the other end of a call, and the seemingly small extra amount of effort required to do a video call compared to a mere voice call is actually a larger barrier to the former than futurists 20 years ago probably thought it would be.
“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”
MOSTLY WRONG
As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.
Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.
A human being’s field of vision is a front-facing, flared-out “cone” with a 210-degree horizontal arc and a 150-degree vertical arc. This means that if you put a concave display in front of a person’s face that was big enough to fill those degrees of horizontal and vertical width, it would fill the person’s entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.
If this concave screen’s pixels were squares measuring one degree of arc to a side, then the screen would be a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than how he normally sees. However, lab tests show that if we shrink the pixels to 1/60th that width (that is, 60 pixels per degree of arc), so the concave screen becomes a grid of 12,600 x 9,000 pixels, then the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality can be called “lifelike.”
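To make that arithmetic explicit, here is a minimal Python sketch that computes the pixel grid a “lifelike” concave display would need, using the field-of-view and acuity figures cited in the paragraph above (the function and constant names are my own, for illustration only):

```python
# Sketch: pixel grid required for a display that fills the human field of view
# at "retinal" resolution. Figures (210 x 150 degree field of view, 60 pixels
# per degree of visual arc) are the ones cited in the paragraph above.

HORIZONTAL_FOV_DEG = 210   # horizontal field of view, in degrees
VERTICAL_FOV_DEG = 150     # vertical field of view, in degrees
PIXELS_PER_DEGREE = 60     # acuity threshold at which individual pixels vanish

def lifelike_resolution(h_fov=HORIZONTAL_FOV_DEG,
                        v_fov=VERTICAL_FOV_DEG,
                        ppd=PIXELS_PER_DEGREE):
    """Return (horizontal pixels, vertical pixels, total pixels) needed."""
    h_pixels = h_fov * ppd
    v_pixels = v_fov * ppd
    return h_pixels, v_pixels, h_pixels * v_pixels

if __name__ == "__main__":
    h, v, total = lifelike_resolution()
    print(f"{h} x {v} pixels ({total:,} total)")  # 12600 x 9000 (113,400,000 total)
```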
No commercially available VR goggles have anything close to lifelike displays, either in terms of field of view or 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they have a 60-pixels-per-degree resolution, but only in the central portion of their display screens, where the user’s eyes usually look. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves with a very high-quality 4K camera and you used Varjo VR-1 goggles to view the live footage while keeping your eyes focused on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.
Problematically, a pair of Varjo VR-1 goggles costs $6,000. Also, in 2019, it is very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you would be able to see most of the other person’s face.
VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.
“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”
MOSTLY RIGHT
In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.
The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.
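For readers curious how these apps work, speech-to-speech translation is typically a cascade of three stages: speech recognition, text translation, and speech synthesis. Below is a minimal Python sketch of that cascade. The specific libraries (SpeechRecognition, Hugging Face transformers, gTTS) are my own choices for illustration, not the components Google Translate actually uses, and the input file name is hypothetical.

```python
# Sketch of a speech-to-speech translation cascade (ASR -> MT -> TTS).
# Libraries chosen only for illustration; "speech.wav" is a hypothetical input file.
import speech_recognition as sr       # pip install SpeechRecognition
from transformers import pipeline     # pip install transformers
from gtts import gTTS                 # pip install gTTS

# 1. Automatic speech recognition: source-language audio -> text.
recognizer = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    audio = recognizer.record(source)
english_text = recognizer.recognize_google(audio)  # uses Google's free web ASR API

# 2. Machine translation: source-language text -> target-language text.
translator = pipeline("translation_en_to_fr")      # small default translation model
french_text = translator(english_text)[0]["translation_text"]

# 3. Text-to-speech: synthesize the translated text as audio.
gTTS(french_text, lang="fr").save("speech_fr.mp3")
print(english_text, "->", french_text)
```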
“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”
MOSTLY RIGHT
Reading text is easily and commonly done off of smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, and not three as the prediction specified.
As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).
“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”
WRONG
Aside from a few expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is the vibrating video game control pad, which crudely approximates the feeling of firing a gun or being near an explosion.
“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”
WRONG
Though video phone technology has made remote doctor appointments more common, technology has not yet made it possible for doctors to remotely “touch” patients for physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices that let people remotely send and receive physical force (called “teledildonics” when designed specifically for sexual use) exist, but they are too expensive and technically limited to find widespread use.
“Rapid economic expansion and prosperity has continued.”
PARTLY RIGHT
Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?
The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.
Above is a chart I made using data from the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.
And this final chart shows global GDP growth over the same period.
Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it–even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. Still, the fact that something as bad as the Great Recession happened at all makes the prediction wrong in an important sense: it implied that economic growth would be continuous, yet growth went negative for most of 2009, in the worst downturn since the 1930s.
At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.
While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods do almost look like mirror images of each other. (Additionally, if we assume that adjustment happened because of the actions of wiser financial regulators who kept the lending bubbles and fake investments from coming into existence in the first place, then we can also assume that stopped the Great Recession from happening, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, then we see that economic growth for the time period from 1998 – December 2008 was roughly the same as the growth from January 2009 – December 2019.
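For readers who want to recreate the comparison, here is a minimal Python sketch of the calculation the paragraphs above describe: averaging annual growth over the 1998–2008 and 2009–2019 windows. The yearly figures are not reproduced here; they would have to be filled in from the World Bank and OECD series shown in the charts, and the function names are my own.

```python
# Sketch: comparing average annual GDP growth across the two periods implied by
# Kurzweil's wording (1998-2008 vs. 2009-2019). The growth_by_year mapping must
# be populated from the World Bank / OECD series charted above; no figures are
# reproduced here.

def average_growth(growth_by_year, start, end):
    """Mean annual GDP growth (percent) for the years start..end inclusive."""
    rates = [growth_by_year[y] for y in range(start, end + 1) if y in growth_by_year]
    return sum(rates) / len(rates) if rates else float("nan")

def compare_periods(growth_by_year):
    first = average_growth(growth_by_year, 1998, 2008)
    second = average_growth(growth_by_year, 2009, 2019)
    print(f"1998-2008 average: {first:.2f}%  |  2009-2019 average: {second:.2f}%")
```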
“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”
WRONG
“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.
As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involving to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.
“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”
MOSTLY WRONG
The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”
PARTLY RIGHT
Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.
No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.
“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”
WRONG*
The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.
Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but will become mandatory in the U.S. starting in 2022.
*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring the human drivers to clutch the steering wheels and to keep their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.
In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.
“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”
UNCLEAR (but probably WRONG)
The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?
What is a “personal flying vehicle”? A flying car, which is capable of both flight through the air and horizontal movement over roads, or a vehicle that is capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?
But even if we had answers to those questions, it wouldn’t matter much, because “have been demonstrated” is an escape hatch that lets Kurzweil claim at least some measure of correctness: the prediction counts as true even if just two prototypes of personal flying vehicles were built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that would have made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.
At least the prediction contains one, well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. They are fixed in one position, and in that configuration are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.
“There are very few transportation accidents.”
WRONG
Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.
In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.
“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”
WRONG
As I noted earlier in this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.
“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”
MOSTLY RIGHT
Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.
While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.
If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how the algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly conspiracy thinking and closed-mindedness), spending decisions, and mental health.
Human transactions and decisions still require a “human agent of responsibility”: Autonomous cars aren’t allowed to drive unless a human is in the driver’s seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Most decisions that humans make are probably made without consulting a “machine-based intelligence.” Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with the benefit of little or no research. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.
“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”
MOSTLY RIGHT
Gunfire detection systems, which consist of networks of microphones placed across an area and which use machine intelligence to recognize the sounds of gunshots and to triangulate their origins, were deployed in over 100 cities at the end of 2019. The dominant company in this niche industry, “ShotSpotter,” used human analysts to review its systems’ results before forwarding alerts to local police departments, so the systems were not truly automated, but they nonetheless made heavy use of machine intelligence.
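As an illustration of the triangulation step, a sound source can be located from the differences in the sound’s arrival times at several microphones (time difference of arrival, or TDOA). The sketch below is a generic least-squares formulation in Python, not ShotSpotter’s proprietary algorithm, and the microphone coordinates and timestamps are hypothetical values chosen only to show the mechanics.

```python
# Sketch: locating a sound source from time differences of arrival (TDOA) at
# several microphones. Generic least-squares formulation, not ShotSpotter's
# proprietary method; coordinates and arrival times below are hypothetical.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second, at roughly 20 C

# Hypothetical microphone positions (meters) and measured arrival times (seconds).
mics = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
arrival_times = np.array([1.200, 1.950, 1.600, 2.100])

def residuals(source_xy):
    """Mismatch between measured and predicted arrival-time differences."""
    distances = np.linalg.norm(mics - source_xy, axis=1)
    predicted_tdoa = (distances - distances[0]) / SPEED_OF_SOUND
    measured_tdoa = arrival_times - arrival_times[0]
    return predicted_tdoa - measured_tdoa

# Start the solver at the centroid of the microphone array.
solution = least_squares(residuals, x0=mics.mean(axis=0))
print("Estimated source location (m):", solution.x)
```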
Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible.
In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.
At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.
“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”
RIGHT
In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.
Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.
During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.
With these events in mind, it’s quite true that concerns over digital privacy and the confidentiality of personal data have become “major political and social issues,” and that there’s growing displeasure at the fact that each individual’s “practically every move [is] stored in a database somewhere.” The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the “General Data Protection Regulation” (GDPR).
Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being exposed by hacks has also bolstered interest in commercial encryption. “WhatsApp” is a common text messaging app with built-in end-to-end encryption; the feature was fully rolled out in 2016, and the app had 1.5 billion users by 2019. “Tor” is a web browser with built-in anonymity and encryption features that became relatively common during the 2010s after it was learned that even the NSA couldn’t spy on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection for little expense and hassle, are in common use.
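To illustrate the basic idea these tools rely on, here is a toy Python sketch using the `cryptography` package’s Fernet recipe (symmetric, authenticated encryption). This is only an illustration of the core concept; real end-to-end messengers such as WhatsApp use the far more elaborate Signal protocol, which also handles key exchange between devices.

```python
# Toy illustration of symmetric, authenticated encryption using the
# "cryptography" package's Fernet recipe. Real end-to-end messengers use the
# Signal protocol (key exchange plus per-message keys); this only shows the
# core idea that ciphertext is unreadable without the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real messenger, derived via a key exchange
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet at the usual place at noon")
print(ciphertext)             # unreadable without the key

plaintext = cipher.decrypt(ciphertext)
print(plaintext.decode())     # "meet at the usual place at noon"
```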
“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”
RIGHT
It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.
In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”
The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.
There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.
“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”
PARTLY RIGHT
As I wrote earlier, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training is very common, but most workers devote small shares of their time to them, and the fraction of time spent doing workplace training doesn’t seem significantly different from what it was when the book was published.
From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.
This means the prediction was partly right, but also of questionable significance in the first place.
“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”
MOSTLY RIGHT
In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.
In 2018, a different computer program produced a painting–“Portrait of Edmond de Belamy”–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as “a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence.” That seems to fulfill the second part of the prediction (“These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.”)
Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called “EMMY” (an acronym for “Experiments in Musical Intelligence”) is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Recordings of EMMY’s compositions are available online for anyone who wants to judge their quality for themselves.
Computer scientists at OpenAI have built a neural network called “Jukebox” that is even more advanced than EMMY, and which can produce songs complete with simulated human vocals. While the lyrics don’t always make sense and there’s much room for improvement, most humans have no creative musical talent at all and couldn’t do any better, and the quality, sophistication and coherence of the entirely machine-generated songs are very impressive (audio samples are available online).
OpenAI also invented an artificial intelligence program called the “Generative Pre-trained Transformer” to understand and write text. In 2019, the second version of the program, “GPT-2,” made its debut and showed impressive skill at writing poetry, short news articles and other content with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the “human range” of writing ability, since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well written in its original language.
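Readers who want to reproduce this kind of output can load the publicly released GPT-2 weights with the Hugging Face `transformers` library. The sketch below is my own minimal example, not OpenAI’s original code; the prompt and sampling settings are arbitrary choices for illustration.

```python
# Minimal sketch: generating text with the publicly released GPT-2 weights via
# the Hugging Face "transformers" library. Not OpenAI's original code; the
# prompt and sampling settings are arbitrary choices for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads the model weights

samples = generator(
    "And they have seen the last light fail;",  # prompt echoing the verse quoted below
    max_length=60,             # total tokens, prompt included
    num_return_sequences=2,
    do_sample=True,            # sample rather than greedy-decode, for more varied verse
)
for s in samples:
    print(s["generated_text"], "\n---")
```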
Much of GPT-2’s poetry is also as good as–or, as bad as–that written by its human counterparts:
And they have seen the last light fail;
By day they kneel and pray;
But, still they turn and gaze upon
The face of God to-day.

And God is touched and weeps anew
For the lost souls around;
And sorrow turns their pale and blue,
And comfort is not found.

They have not mourned in the world of men,
But their hearts beat fast and sore,
And their eyes are filled with grief again,
And they cease to shed no tear.

And the old men stand at the bridge in tears,
And the old men stand and groan,
And the gaunt grey keepers by the cross
And the spent men hold the crown.

And their eyes are filled with tears,
And their staves are full of woe.
And no light brings them any cheer,
For the Lord of all is dead
In conclusion, the prediction is right that there were “virtual artists” in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and “humanness” to be of interest for reasons other than the novelty of their origins. They’ve raised serious questions among humans about the nature of creative thinking, and about whether machines are already capable of it or soon will be. Finally, the virtual artists were “affiliated with” or, more accurately, owned and controlled by groups of humans.
“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”
UNCLEAR
It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?
Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.
“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”
WRONG
Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.
That said, the video game industry’s revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry’s 2018 figure was $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of “E-Sports”).
Revenues from virtual reality games totaled $1.2 billion in 2019, meaning 99% of the video game industry’s revenues that year DID NOT come from “virtual-experience software.” The overwhelming majority of video games were viewed on flat TV screens and monitors that display 2D images only. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you’re actually there in the simulated environment. While they don’t meet the technical definition of being “virtual reality” games, some of them are so engrossing that they might as well be.
“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”
MOSTLY WRONG
Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.
In other words, the world’s greatest military power only allocates 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that it is not the primary threat to international security, either.
Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big of a threat as cyberattackers. In fact, most of military spending and training in the U.S. and all other countries is still devoted to preparing for traditional warfare between nation-states, as evidenced by things like the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and time spent practicing for large battles between organized foes.
“Small groups” of terrorists inflict disproportionate amounts of damage against society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.
Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which only killed five people, but nonetheless bore some testament to Kurzweil’s assessment of bioterrorism’s potential threat), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made it more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.
Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.
“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”
UNCLEAR
What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.
What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.
“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it to be terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: The Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).
With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.
If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.
If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.
The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, indicating that no one is actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.
Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.
“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”
PARTLY RIGHT
The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.
The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first, fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that number has climbed further, to 5,000 genetic disorders.
However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, just understanding what genes themselves do is not enough to understand human development or disease susceptibility.
Returning to the text of the prediction, the meaning of “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.
The aging process is more well-understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.
“The expected life span…[is now] over one hundred.”
WRONG
The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.
“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”
WRONG
Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.
Top national security experts in the U.S. also assigned biological weapons a low priority, as evidenced by the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report mentions "biological weapons/warfare" only twice. By contrast, "migration/migrants/immigration" appears 11 times, "nuclear weapon" eight times, and "ISIS" 29 times.
As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.
Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it's unclear if the "bar" has gotten low enough for a graduate student to do it. Making a pathogen in a lab that has the qualities necessary for a biological weapon, verifying its effects, purifying it, creating a delivery system for it, and disseminating it–all without being caught before completion or inadvertently infecting yourself with it before the final step–is much harder than hysterical news articles and self-interested talking head "experts" suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds with levels of expertise exceeding a typical graduate student's, years of work, and millions of dollars.
“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”
RIGHT
The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.
Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.
“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”
MOSTLY RIGHT
Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren't considered accurate enough to "diagnose" health conditions. Rather, their role is to detect and alert wearers to signs of potential health problems, whereupon the wearers consult a medical professional with more advanced machinery and receive a diagnosis.
By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2,” and the “Apple Watch Series 4 and 5” had FDA-approved electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, 2,000 received alerts in 2018 from their devices of possible heartbeat problems. Fifty-seven percent of people in that subset sought medical help upon getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately, 84% of people in the subset were confirmed to have atrial fibrillation.
The Apple Watches also have "hard fall" detection features, which use accelerometers to recognize when their wearers suddenly fall down and then don't move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury due to a fall count as an "acute health condition" per the prediction's text?).
A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by a small company called “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.
Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. These apps are reasonably accurate, and the phone is typically strapped to the wearer's upper arm or waist while jogging, or kept in a pocket during other types of activity. Having a smartphone in your pocket isn't literally the same as having it "built into [your] clothing" as the prediction says, but it's close enough to satisfy the spirit of the prediction. In fact, being able to easily insert and remove a device from any article of clothing with a pocket is better than having a device integrated into the clothing, since it allows for much more flexibility of attire–if you want to try out a new jogging route and also want to measure how long it is, you don't have to remember to wear your one and only T-shirt with a built-in activity monitor.
Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, no items of jewelry with embedded health tracking capabilities were available in 2019, so that part of the prediction is wrong.
“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”
RIGHT
In 2014, a chatbot named "Eugene Goostman" made global headlines by "passing the Turing Test" at an event held at the University of Reading. However, AI experts soon determined the claim was invalid because the dialogs between the human judges and the machine only lasted five minutes. Ray Kurzweil himself interacted with Eugene Goostman and posted the conversation on his website to show that the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:
Kurzweil: How old are you? Eugene: I’m a little boy. 13 years old.
Kurzweil: How much is two plus two? Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now? Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Kurzweil: I live in the capital of the United States. Do you know where that is? Eugene: Every nerd knows that the capital of The United States is Washington.
Kurzweil: I live in the capital of the country that put a man on the moon. Eugene: Well, if you want it, the capital of The United States is Washington.
Kurzweil: I live in the capital of the country that built the great wall. Eugene: Tell me more about Capital. I like to learn about different places!
In 2018, a Google AI program called "Duplex" also made headlines for "passing the Turing Test" in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While an impressive technological feat, experts again disagreed with the media's portrayal of its capabilities, pointing out that the human-machine interactions weren't valid Turing Tests because they were too short and focused on a narrow subject of conversation.
“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”
RIGHT
The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness. A few recent examples of this include:
Those are all thoughtful articles written by experts whose credentials are relevant to the subject of machine consciousness. There are countless more articles, essays, speeches, and panel discussions about it available on the internet.
Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.
“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”
RIGHT
Critics often–and rightly–point out that the most impressive "A.I.s" owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.
Finally, even the smartest and most capable machines can’t turn themselves on without human help, and still have very “brittle” and task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars, where the computers were smart enough to drive safely by themselves in almost all road conditions, but laws required the vehicles to watch the human in the driver’s seat and stop if he or she wasn’t paying attention to the road and touching the controls.
2019 Pew Survey showing that the overwhelming majority of American adults owned a smartphone or traditional PC. People over age 64 were the least likely to own smartphones. (https://www.pewresearch.org/internet/fact-sheet/mobile/)
“The current ways of trying to represent the nervous system…[are little better than] what we had 50 years ago.” –Marvin Minsky, 2013 (https://youtu.be/3PdxQbOvAlI)
The 2016 Nobel Prize in Chemistry was given to three scientists who had done pioneering work on nanomachines. (https://www.extremetech.com/extreme/237575-2016-nobel-prize-in-chemistry-awarded-for-nanomachines)
Another 2018 survey commissioned by the telecom company Vonage found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which strongly aligns with my everyday observations. (https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/)
A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible. (https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf)
The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device. (https://www.cnet.com/reviews/oculus-go-review/)
Advances in AI during the 2010s forced humans to examine the specialness of human thinking, to ask whether machines could also be intelligent and creative, and to consider what it would mean for humans if they could. (https://www.bbc.com/news/business-47700701)
In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve. (https://www.factcheck.org/2013/03/bloombergs-obesity-claim/)
“Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”? (https://en.wikipedia.org/wiki/Auto-Tune)
The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks. (https://www.ersj.eu/journal/1245)
One piece of feedback I received on my analysis of Ray Kurzweil's predictions for 2019 was that I should include some kind of summary of my findings. I agree it would be valuable since it would let readers "see the forest for the trees," so I have compiled a table showing each of Kurzweil's predictions along with my rating of how each turned out. The possible ratings are:
Right
Part right, part wrong
Will happen later
Wrong because needlessly specific / right in spirit, wrong in specifics
Wrong
Will probably never be 100% right
Impossible to judge accurately / Unclear
Overtaken by other tech
Note that it is possible for a prediction to fall under more than one of those categories. For example, the prediction that "The expected life span…[is now] over one hundred" was "Wrong" because it was not true in any country at the end of 2019; however, it also falls under "Will happen later" since there will be a point farther in the future when life expectancy reaches that level.
Additionally, for many predictions that were not “Right” in 2019, I analyzed whether and when they might be, and put my findings under the table’s “Notes” column. This exercise is valuable since it shows us whether Kurzweil is headed in the wrong direction as a futurist, or whether he’s right about the trajectory of future events but overly optimistic about how soon important milestones will be reached.
The completed table is large and is best viewed on a large screen, so I don't recommend looking at it on your smartphone. Its size also made it unsuited for a WordPress table, so I can't directly embed it into this blog post. Instead, I present my table as a downloadable PDF, and as a series of image snapshots shown below.
So, will Kurzweil’s 2019 be our reality by 2029? In large part, yes, but with some notable misses. According to my estimates, by the end of 2029, augmented reality and virtual reality technology will reach the levels he envisioned, and VR gaming will be a mainstream entertainment medium (though not the dominant one). AI personal assistants will have the “humanness” and complexities of personality he envisioned (though it should be emphasized that they will not be sentient or truly intelligent). Real-time language translating technology will be as good as average human translators. Body-worn health monitoring devices will match his vision. Finally, it’s within the realm of possibility that the cost-performance of computer processors in 2029 could be what he predicted for 2019, but the milestone probably won’t be reached until later.
However, nanomachines, cybernetic implants that endow users with above-normal capabilities, and our understanding of how the human brain works and of its “algorithms” for intelligence and sentience will not approach his forecasted levels of sophistication and/or use until well into this century. These delays that were evident in 2019 are important since they significantly push back the likely dates when Kurzweil’s later predictions (which I am aware of but have not yet discussed on this blog) about radical life extension, the fusion of man and machine, and the creation of the first artificial general intelligence (AGI) will come true. His predictions about robotics and about the rate of improvement to the cost-performance of computer processors are also too optimistic. Those are all very important developments, and the delays reinforce my longstanding view that Kurzweil’s vision of the future will largely turn out right, but will take decades longer to become a reality than he predicts. He has repeatedly indicated that he is very scared to die, which makes me suspect Kurzweil skews the dates of his future predictions–particularly those about life extension technology–closer to the present so they will fall within his projected lifespan.
That said, my analysis of his 2019 predictions shows he’s on the wrong track on a few issues, but that it isn’t consequential. “Quantum diffraction” cameras may not ever catch on, but so what? Regular digital cameras operating on conventional principles are everywhere and can capture any events of interest. In 2029 and beyond, data cables to devices like computer monitors and controllers will still be common, and not everything will be wireless, but I don’t see how this will impose real hardship on anyone or be a drag on any area of science, technology, or economic development. Keyboards, paper, books, and rotating computer hard disks will also remain in common use for much longer than Kurzweil thinks, but aside from annoying him and a small number of like-minded technophiles, I don’t see how their continuance will hurt anything. On that note, let me touch on another longstanding view I’ve had of him and his way of thinking: Kurzweil errs by ignoring “the Caveman Principle,” and by assuming average people like technology as much as he does.
This holds especially true for implanted technologies like brain implants and cybernetic implants in the eyes and ears. I agree with Kurzweil that they will eventually become common, but the natural human aversion to disfiguring their own bodies, and the coming improvements to wearable technologies like AR glasses and earbuds, will delay that day until the distant future.
In conclusion, Ray Kurzweil remains a high-quality futurist, and it would be a mistake to dismiss everything he says because some of his predictions failed to come true. Those failures are either inconsequential or are still on track to happen, albeit farther in the future than he originally said. Out of 66 predictions (as I defined them) for 2019, three are write-offs since they are "Impossible to judge accurately / Unclear." Of the remaining 63, fifteen were simply "Right," and by 2029, about another 14 will be "Right," or "clearly about to be Right within the next few years." Another 16 will still probably be "Wrong," but it won't be consequential (e.g. – people will still type on keyboards, some keyboards will still have cables connected to them, hi-res volumetric displays won't exist, but it won't matter since people will be able to use eyewear to see holographic images anyway). Forty-five out of a possible 63 by 2029 ain't bad.
The remaining 18 predictions likely to still be false in 2029, and which are of consequence, include building nanomachines, extending the human lifespan, building an AGI, and understanding how the brain works. They will probably lag Kurzweil's expectations by a larger margin than they did in 2019, but some progress will still have occurred during the 2020s, and each field of research will be getting large amounts of investment toward the same goals Kurzweil wants reached. The potential benefits of all of them will still be recognized, and no new laws of nature will have been discovered prohibiting them from being achieved through sustained effort. Then, as now, we'll be able to say he's essentially on the right track, as scary as that may be (read his other stuff yourself).
Plot: Cloud Atlas comprises six short films set in six different times and places. Each short film has a unique plot and characters, but the characters are played by the same actors, leading to many interesting and at times funny role reversals from the viewer's perspective. The movie jumps between the six stories in a way that shows their thematic similarities. It's a very ambitious attempt at storytelling through the film medium, but also an unsuccessful one. As a whole, Cloud Atlas is too confusing and practically collapses under its own weight.
Rather than even attempting to summarize its Byzantine plot in more detail, here’s a link to a well-written plot synopsis you can read if you like before proceeding farther:
“This film follows the stories of six people’s “souls” across time, and the stories are interweaved as they advance, showing how they all interact. It is about how the people’s lives are connected with and influence each other…” https://www.imdb.com/title/tt1371111/plotsummary?ref_=ttpl_sa_2#synopsis
On the one hand, I'm glad that in today's sad era of endless sequels, remakes and reboots, Hollywood is still willing to take occasional risks on highly creative, big-budget sci fi films like Cloud Atlas. On the other, none of that changes the fact that the movie is a hot mess.
For the purposes of this sci fi analysis, I'm only interested in the chapters of the movie set in the future. The first takes place in Seoul (renamed "Neo Seoul") in 2144, and the second takes place on a primitive tropical island "hundreds" of years after that, following some kind of global cataclysm. Though the date of the later sequence is never stated in the film, the book on which it is based says it is 2321, and I'll use that for this review.
Analysis:
Slavery will come back. In 2144, South Korea, and possibly some part of the countries surrounding it, is run by an evil government/company called “Unanimity.” Among its criminal practices is allowing the use of slave labor. The slaves, called “fabricants,” are parentless humans who are conceived in labs, gestated in artificial wombs, and euthanized after 12 years of labor. They seem to have no legal rights, can be killed for minor reasons, and are treated as inferiors by natural-born humans. Though they look externally identical to any other human, it’s hinted that the fabricants have been genetically altered to be obedient and hard workers, and perhaps to have physiological differences. Juvenile fabricants are never shown, which leads me to think they are gestated as mature adults. The 2144 plot centers around one fabricant who escapes from her master and joins a rebel group fighting to end slavery.
Slavery will not exist in 2144 because 1) the arc of history is clearly towards stronger human rights and 2) machines will be much better and cheaper workers than humans by then. In a profit-obsessed society like the one run by Unanimity, no business that employed humans, even those working for free as slaves, could survive against competitors that used robots. After all, it still costs money to feed, clothe, and house human slaves, and to give them medical care when necessary. And while the film implies that the human slaves partly exist to gratify the sexual needs of human clients, robots–specifically, androids–should be superior in that line of work, as well.
For these same reasons, if intelligent machines have taken over the planet by 2144, it won’t make sense for them to enslave humans, or at least not for long. Intelligent machines would find it cheaper, safer, and better to build task-specific, “dumb” machines to do jobs for them than to employ humans. There could be a nightmare scenario where AIs win a mutually devastating war with humanity, and due to scarce resources and destroyed infrastructure, the use of human labor is the best option, but this arrangement would only last until the AIs could build worker robots.
Human clones will exist. Though the fabricants are played by different actresses, the protagonist that escapes from her master later sees fabricants that look identical to her. This means the fabricants as a whole have limited genetic diversity and probably consist of several strains of clones.
Human clones will be created long before 2144. In 2018, Chinese scientists made two clones of one monkey. Given the close similarities between human and monkey genetics and chromosome structure, the same technique or a variant of it could be used to clone humans. The only thing that has stopped it from happening so far is bioethics concerns stemming from the technique’s high failure rate–77 out of 79 cloned monkey embryos that were implanted in surrogate mothers during the experiment were miscarried or died shortly after birth. More time and more experiments will surely refine the process.
When will the success rate be “good enough” for us to make the first human clones? Sir John Gurdon won a Nobel Prize for his 1962 experiments cloning frogs. In 2012, he predicted that human cloning would probably begin in 50 years–which is 2062. Given the state of the science today, that looks reasonable.
In 2144, cloning will be affordable and legal in at least one country that allows medical tourism, but only a tiny percentage of people will want to use it, and an insignificant share of the human race will consist of clones. Bereaved parents wanting to replace their dead children will probably be the industry’s main customers. It sounds creepy, but what if the clones actually make most of them happy?
Display screens will cover many types of surfaces. The bar/restaurant staffed by the fabricants is a drab room whose walls, ceilings, floors, and furniture are covered by thin display screens. At the flick of a switch, the screens can come alive and show colors, images, and moving pictures just like a traditional TV or computer monitor. An apartment is also shown later on that has a wraparound room display.
I conservatively predict that wallpaper-like display screens with the same capabilities and performance as those depicted in the movie will be a mature, affordable technology by 2044, which is 100 years before the events shown in the film segment. In other words, it will be very old technology. The displays built into floors would have to be the thickest and most robust for obvious reasons, and will probably be the last ones to be introduced. This technology will allow people to have wall-sized TV screens in their houses, to place "lights" at any point and in any configuration in a given room, and to create immersive environments like cruder versions of the Star Trek "holodeck."
Walls will be able to turn transparent. In the aforementioned apartment, one of the walls can turn into a “fake window” at the push of a button. The display screen that covers it can display live footage from outside the building, presumably provided to it by exterior cameras. This technology should also be affordable and highly convincing in effect by 2044, if not earlier. Note that the Wachowskis also included this technology in their film Jupiter Ascending, but it was used to make floors transparent instead of walls.
There will be 3D printed meals. The 2144 segment begins in a bar/restaurant staffed by fabricants. A sequence shows a typical work day for them, and we see how a 3D “food printer” creates realistic-looking dishes in seconds. The printer consists of downward-pointing nozzles that spray colored substances onto bowls and dishes, where it congeals into solid matter. Its principle of operation is like a color printer’s, but it can stack layers of edible “ink” to rapidly build up things.
3D food printers already exist, and they can surely be improved, but they will never be able to additively manufacture serving-sizes of food in seconds, unless you’re making a homogenized, simple dish like soft-serve ice cream or steak tartare. To manufacture a complex piece of food like those shown in the film sequence, much more time would be needed for the squirted biomatter to settle and set properly to achieve the desired texture and appearance, and for heat, lasers or chemicals to cook it properly. For these reasons, I don’t think the depiction of the futuristic 3D food printer will prove accurate.
However, the next best things will be widely available by then: lab-grown foods and fast robot chefs. By 2144, it should be cheaper to synthesize almost any type of food than to grow or raise it the natural way, and I predict humans will get most of their calories from industrial-scale labs. This includes meat, which we’ll grow using stem cells. Common processed foodstuffs like flour, corn starch, and sugar could also be directly synthesized from inorganic chemicals and electricity, saving us from having to grow and harvest the plants that naturally make them.
The benefits of the "manufactured food" paradigm will be enormous. First, it would be much more humane since we would no longer need to kill billions of animals per year for food. Second, it would be better for the environment since we could make most of our food indoors, in enclosed facilities. The environmental damage caused by the application of pesticides and fertilizers would drop because we'd have fewer open-air farms. And since the "food factories" would be more efficient, we could produce the same number of calories on a smaller land footprint, which would allow us to let old farms revert back to nature. Third, it would be better for the economy. Manufactured food would be cheaper since it would cut out costly intermediate steps like planting seeds, harvesting plants, separating their edible parts from the rest, and butchering animals to isolate their different cuts of meat. No time, money or energy would be spent making excess matter like corn husks, banana peels, chicken feathers, animal brains, or bones–the synthesis process would be waste-free, and would turn inorganic matter and small clumps of stem cells directly into 100% edible pieces of food. Food factory output would also be largely unaffected by uncontrollable natural events like droughts, hailstorms, and locust swarms, making food supply levels much more predictable and prices more stable. Fourth, food factories would be able to produce cleaner, higher-quality foods at lower cost. The energy and material costs of making a premium ribeye steak are probably no higher than the costs of making a tough, rubbery round steak. With that in mind, the meat factories could ONLY EVER make premium ribeye steaks, which will be great since the price will drop and everyone, not just richer people, will be able to eat the highest quality cuts. (If you want to do side research on this, Google the awesome term "carcass balancing" and knock yourself out.)
By 2144, machines will be able to do everything humans can do, except better, faster and cheaper, which means robot chefs will be ubiquitous and highly skilled. They would work very efficiently and consistently, meaning restaurant wait times would be short, and the meals would always be prepared perfectly. Thanks to all these factors, the 2144 equivalent of a low-income person could walk into an ordinary restaurant and order a cheap meal consisting of what would be very expensive ingredients today (e.g. – Kobe beef steak, caviar, lobster). Those ingredients would be identical to their natural counterparts, and would be only a few hours fresh from the factory thanks to the highly efficient automated logistics systems that will also exist by then. A robot chef with several pairs of hands and superhuman reflexes would combine and cook the ingredients with astounding speed and precision. Not a single movement would be wasted. Within 15 minutes of placing his order, the customer's food would be in front of him.
Today, this level of cuisine and service is known only to richer people, but in the future, it will be common thanks to technology. This falls short of Cloud Atlas‘ depiction of 3D food printers making meals in seconds, but there are worse fates…
There will be flying cars. CGI camera shots of Neo-Seoul show its streets filled with flying cars, flying trucks and flying motorcycles. Most often, they hover one or two feet above the ground, but they’re also capable of flying high in the air. The vehicles levitate thanks to circular “pads” on their undersides, which glow blue and make buzzing sounds. The Wachowskis also featured these “hoverpads” on the flying vehicles in their Matrix films. In no film was their principle of operation explained.
The only way the hoverpads could make cars "fly" is if they were made of superconductors and the roads were made of magnets. 2144 is a long way off, so it's possible that we could discover room-temperature superconductors that are also cheap to manufacture by then. No law of physics prohibits it. Likewise, we could discover new methods of cheaply creating powerful magnets and magnetic fields so we can embed them in the millions of miles of global roadways. Vehicles with superconducting undersides could "hover" over these roads, but not truly "fly," since the magnetic fields they'd depend on weaken sharply with vertical distance–a magnet is effectively a dipole, and a dipole's field strength falls off roughly with the cube of the distance from it.
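To make that falloff concrete, here is the standard textbook expression for the field of a magnetic dipole along its axis (treating a road-embedded magnet as an idealized dipole is my own simplifying assumption for illustration; the film says nothing about how the hoverpads work):

$$B(z) \approx \frac{\mu_0}{4\pi}\,\frac{2m}{z^{3}}$$

where $m$ is the magnet's dipole moment, $\mu_0$ is the permeability of free space, and $z$ is the height above the magnet. Because of the $z^{3}$ in the denominator, doubling the hover height cuts the available field to roughly one-eighth of its value, which is why a superconducting pad could plausibly float a vehicle a foot or two above a magnetized road but could never carry it hundreds of feet into the air.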
Ironically, the inability to go high in the air would be a selling point for hovercars since the prospect of riding in one would be less scary to land-loving humans (in my analysis of true flying cars, I said this was one reason why that technology was infeasible). Hovercars would also be quieter, more energy efficient, and smoother-riding than normal cars due to their lack of contact and friction with the road. Their big limitation would be an inability to drive off-road or anywhere else where there weren’t magnets in the ground. However, that might be a bearable inconvenience since the global road network will be denser in 2144 than it is now, and we might also have had enough time by then to install the magnets in all but the remotest and least-trafficked roads. You could rent wheeled vehicles when needed as easily as you summon an Uber cab today (the 2144 film sequence takes place in a city, so for all we know, wheeled cars are still widely in use elsewhere).
In conclusion, if we make a breakthrough in superconductor technology, it would enable the creation of hovercars, which might very well find strong consumer demand thanks to real advantages they would have over normal cars. True “flying cars” will not be in use by 2144, but hovercars could be, especially in heavily-trafficked places like cities and the highways linking them together, where it will make the most economic sense to install magnets in the roads. This means Cloud Atlas‘ depiction of transit technology was half wrong, and half “maybe.”
There will be at least one off-world human colony. During the 2144 segment, a character mentions that there are four “off-world colonies.” In the 2321 segment, those colonies are spoken of again, and people from one of them come to Earth in space ships to rescue several characters from the ailing planet. That space colony’s location is not named, but judging by the final scene, in which the characters are sitting outdoors amongst alien-looking plants, and one of them points to a blue dot in the night sky and says it is Earth, the colony is on a terraformed celestial body in our Solar System. The facts that gravity levels seem within the normal range and two moons are visible in the sky suggest it is Mars, though the moons would actually look smaller than that.
“Colony” implies something more substantial than “base” or “outpost.” As I did in my Blade Runner review, I’m going to assume it refers to settlements that:
Have non-token numbers of permanent human residents
Have significant numbers of human residents who are not “elite” in terms of wealth or technical skills
Are self-sustaining, regardless of whether the level of sustenance affords the same quality of life as on Earth.
I think there will certainly be bases on the Moon and Mars by the end of this century, and that they will be continuously manned. Good analogs for these bases are the International Space Station and the various research stations in Antarctica. Making conservative assumptions about steady improvements in technology and continued human interest in exploring space, it’s possible there will be at least one off-world colony by 2144, and likely that will be the case by 2321.
However, those projections come with a huge proviso, which I already stated in my Blade Runner review: “I think the human race will probably be overtaken by intelligent machines before we are able to build true off-world colonies that have large human populations. Once we are surpassed here on Earth, sending humans into space will seem all the more wasteful since there will be machines that can do all the things humans can, but at lower cost. We might never get off of Earth in large numbers, or if we do, it will be with the permission of Our Robot Overlords to tag along with them since some of them were heading to Mars anyway.” The rise of A.I. will be a paradigm shift in the history of our civilization, species, and planet, and its scrambling effect on long-term predictions like the prospects of human settlement of space must be acknowledged.
Finally, while off-world colonies might exist as early as 2144, none of the moons or planets on which they are established will have breathable atmospheres or comfortable outdoor temperatures for many centuries, if ever. The final scene depicted Mars having an Earthlike environment, where humans could stroll around the surface without breathing equipment or heavy clothing to protect against the cold. Two of the characters from the 2321 film sequence were shown, and both were done up with special effects makeup to look older, suggesting the final scene was set in the mid-2300s. In spite of the distant date, it was still much too early for the planet to have been terraformed to such an extent. In fact, melting all of Mars’ ice and releasing all the carbon dioxide sequestered in its rocks would only thicken its atmosphere to 7% of Earth’s surface air pressure, which wouldn’t be nearly good enough for humans to breathe, or to raise the planet’s temperatures to survivable levels. The effort would also be folly since the gases we released at such great expense would inevitably dissipate into space.
And that’s a real bummer since Mars is the most potentially habitable celestial body we know of aside from Earth! Venus has a crushingly thick, toxic atmosphere, and even if we somehow thinned it out and made it breathable, the planet would be unsuited for humans given its high temperatures and weirdly long days and nights (one Venusian day is 117 Earth days long). Mercury is much too close to the Sun and too hot, our Moon lacks the gravity to hold down an atmosphere and is covered in dust that inflames the human body, the gas giant planets are totally hopeless, and even their “best” moons have fundamental problems.
By the 2300s and even as early as 2144, there could be sizeable, self-sufficient colonies of humans off Earth, but everyone will be living inside sealed structures. Life inside those habitats could be nice (all the interior surfaces could be covered in thin display screens that showed calming footage of forests and beaches), but no one would be strolling on the surface in a T-shirt. And it might stay that way forever, regardless of how advanced technology became and how much money we spent building up those colonies.
There will be…some kinds of super guns. In the two film segments set in the future, characters use handheld guns that are more powerful than today's firearms, but also operate on mysterious principles. It's unclear whether the guns are shooting out physical projectiles or intangible projectiles made of laser beams or globs of plasma, but something exotic is at work since the guns don't eject bullet casings or make the familiar "Pop!" sounds. Whatever they shoot out is very damaging and easily passes through human bodies and walls. In one scene, a person goes flying several feet backward after being shot at close range by one of the pistols.
The super guns can't be firing plasma because plasma weapons are infeasible, and they also can't be firing laser beams because they'd get so hot with waste heat that all the characters would be dropping the guns in pain after one or two shots and clutching their burned hands. To fire a significant number of shots, a man-portable laser weapon would need to be large and to have some bulky means of radiating its waste heat, which means it would have to take a form similar to the Ghostbusters backpack weapon. I don't see how any level of technology can solve the problems of energy storage and heat disposal without the weapon being about that big. The film characters' weapons were sized like pistols and submachine guns, so they couldn't be laser weapons. If you want to understand how I arrived at these conclusions, read my Terminator review.
By deduction, that means the super guns were shooting out little pieces of metal, otherwise known as bullets! Yes, I do think personal firearms will still be in use in 2144, and maybe even in 2321. They might look a little different from those we have now, but they'll operate in the same way and will still use kinetic energy to damage people and objects. I don't think they'll make "zoop" sounds like they did in the movie, and I don't think they'll be much harder-hitting than today's guns. To the last point, it would be inefficient and wasteful to use guns that are so powerful their bullets send people flying through the air. And thanks to Newton's Third Law of Motion, it's also impractical to use handguns or even submachine guns to shoot bullets that powerful. The recoil would break your wrist, or at least make it so punishing to fire your own gun that you wouldn't be able to use it in combat.
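To put rough numbers on the recoil problem, here is a back-of-envelope momentum calculation (the 80 kg body mass, 5 m/s knockback speed, and 10 g bullet are my own illustrative assumptions, not figures from the film):

$$p = Mv = 80\ \text{kg} \times 5\ \tfrac{\text{m}}{\text{s}} = 400\ \tfrac{\text{kg·m}}{\text{s}}, \qquad v_{\text{bullet}} = \frac{p}{m_{\text{bullet}}} = \frac{400\ \text{kg·m/s}}{0.010\ \text{kg}} = 40{,}000\ \tfrac{\text{m}}{\text{s}}$$

A 10-gram bullet would have to leave the muzzle at about 40 km/s (roughly 40 times faster than a modern rifle round) to knock a grown man backward like that, and by Newton's Third Law the shooter would absorb that same 400 kg·m/s of momentum, which is dozens of times the recoil of a full-power rifle. No pistol-sized weapon survives that, and neither does your wrist.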
The film should have adopted a more conservative view of future gun technology. Had the weapons looked cosmetically different from today’s guns and not ejected shells after each shot–indicating they used caseless bullets, a technology we’re still working on–then the depiction would have been plausible and probably accurate.
There will be fusion reactors. In the 2321 sequence, an advanced group of humans travels the oceans in a futuristic ship that looks the size of a large yacht. The ship visits an island full of primitive humans, and one of the crew mentions to them that the ship has fusion engines.
I'm very hesitant to make predictions about hot fusion power because so many have failed before me, most of the experts who today claim that usable fusion reactors are on track to be created soon have self-interested reasons for making those claims (usually they belong to an organization that wants money to pursue its idea), and I certainly lack the specialized education to muster any special insights on the topic. However, I can say for sure that the basic problem is that nuclear fusion reactions release large numbers of neutrons, which beam outward in every direction from the source of the reaction. When those neutrons hit other things, they cause a lot of damage at the molecular level. This means the interior surfaces of fusion reactors rapidly deteriorate, making it necessary to periodically shut down the reactors to remove and replace the surface material. The need for these shutdowns and repairs undermines fusion as a reliable and affordable power source. Of course, that could change if we invented a new material that was resistant to neutron damage and cheap (enough) to make, but no one has, nor are there any guarantees that a material with such properties can exist.
It would be comforting if I could say that these problems will be worked out by a specific year in the future, but I can't. The "International Thermonuclear Experimental Reactor" (ITER) project is the world's flagship attempt at making a hot fusion reactor, and it is massively over-budget, years behind schedule, and dogged by critics who say it just won't work for many technical reasons, including the possibility that the hollow-donut shaped "tokamak" reaction chamber is a fundamentally flawed design (there are alternative fusion reactor concepts with very different internal layouts). If all goes according to plan, ITER will be turned on in December 2025, but it will take another ten years to reach full operation. Lessons learned during its lifetime will be used to design a second, more refined fusion reactor called the "Demonstration Power Station" (DEMO), which won't be running until the middle of the century. And only AFTER the kinks are worked out of DEMO do scientists envision the technology being good enough to build practical, commercial nuclear fusion reactors that could be connected to the power grid. So even under favorable conditions, we might not have usable fusion reactors until close to 2100, and due to many engineering unknowns, it's also still possible that ITER will encounter so many problems in the 2030s that we will be forced to abandon fusion power as infeasible.
Here’s an important point: Attempts to build nuclear fusion reactors started in the 1950s. If you had told those men that the technology would take at least 100 more years and tens or hundreds of billions of more dollars to reach maturity, they would have been shocked. The quest for fusion reactors has been full of staggering disappointments, false starts, and long delays that no one expected, and it could continue that way. With that in mind, I can only rate the film’s depiction of practical fusion reactors existing by 2321 as being “maybe accurate, maybe not.”
There will be cybernetically augmented/enhanced humans. In the 2144 segment, we see people who have cybernetic implants in their bodies that give them abilities that couldn’t be had through biology. The first is a surgeon who has an elaborate, mechanical eye implant that lets him zoom in on his patients during operations, and the other is a man who has a much less conspicuous implant in his left cheek that seems to be a cell phone. Presumably, the device is connected to his inner ear or cochlear nerve.
The technology necessary to make implanted cybernetics with these kinds of capabilities will be affordable and mature by 2144. However, few people will want implants that are externally visible and mechanical- or metallic-looking. Humans have an innate sense of beauty that is offended by anything that makes them look asymmetrical or unnatural. For that reason, in 2144, people will overwhelmingly prefer completely internal implants that don't bulge from their bodies, and external implants and prostheses that look and feel identical to natural body parts. That said, there will surely be a minority of people who will pay for things like robot eyes with swiveling lenses, shiny metal Terminator limbs, and other cybernetics that make them look menacing or strange, just as there are people today who indulge in extreme body modifications.
It’s important to point out that externally worn personal technologies will also be very advanced in 2144, will grant their users “superhuman” abilities just as simpler devices do for people today, and might be so good that most people will be fine using them instead of getting implants. Returning to the movie character with the mechanical eye, I have to wonder what advantages he has over someone with two natural eyes wearing computerized glasses that provide augmented vision. Surely, with 2144 levels of technology, a hyper-advanced version of Google Glass could be made that would let wearers do things like zoom in on small objects, and much more. The glasses could also be removed when they weren’t needed, whereas the surgeon could never “take off” his ugly-looking robot eye. Moreover, if the glasses were rendered obsolete by a new model in 2145, the owner could just throw away the older pair and buy a newer pair, whereas upgrading would be much harder for the eye implant guy for obvious reasons.
Likewise, if someone wanted to upgrade his strength or speed, he could put on a powered exoskeleton, which will be a mature technology by 2144. It would be less obtrusive and would come with fewer complications than having limbs chopped off and replaced with robot parts. For this reason, I also think sci-fi depictions of people in the future having metal arms and legs that let them fight better are inaccurate. Only a tiny minority will be drawn to that. In any case, the ability to do physical labor or to win fights will be far less relevant in the future because robots will do the drudge work, and surveillance cameras and other forensic technologies will make it much harder to get away with violent crimes.
While wearable devices might be able to enhance strength and the senses as well as implanted ones, the former will not be nearly as useful in augmenting the brain and its abilities. We already have crude brain-computer interface (BCI) devices that are worn on the head and can read some of the wearer's thoughts by monitoring brain activity. The devices can improve, and in fact might become major consumer products in the 2030s, but they're fundamentally limited by their inability to see activity happening deep in the brain.
To truly merge human and machine intelligence and to amplify the human brain’s performance to superhuman levels, we’ll need to put computer implants around and in the brain. This means having an intricate network of sensors and electrodes inside the skull and woven through the tissue of the brain itself, where it can monitor and manipulate the organ’s electrical activity at the microscopic level. Brain implants like these would make people vastly smarter, would give them “telepathic” abilities to send and receive thoughts and emotions and “telekinetic” abilities to control machines, and would let them control and change their minds and personalities in ways we can’t imagine. Along with artificial intelligence, the invention of a technology that lets humans “reprogram” their minds and to overcome the arbitrary limits set by their genetics and early childhood environments would radically alter civilization and our everyday experience. It would be much more impactful than a technology that let you enhance your senses or body.
By 2144, augmentative brain implants will exist. Since they'll be internal, people with them won't look different from people today. Artificial organs that are at least as good as their natural equivalents will also exist, and will allow people to radically extend their lifespans by replacing their "parts" in piecemeal fashion as they wear out. Again, these will by definition be externally undetectable. The result would be a neat inverse of the typical sci-fi cyborg–the person wouldn't have any visible machine parts like glowing eyes, shiny metal arms, or tubes hanging off their body. They would look like normal, organic humans, but the technology inside of them would push them well beyond natural human limits, to the point of being impossibly smart, telepathic, mentally plastic, and immortal.
Languages will have significantly changed. In the 2321 film sequence, the aboriginal humans speak a strange dialect of English that is very hard to understand, while the group of advanced humans speak something almost identical to today’s English. Both depictions will prove accurate!
Skimming through Gulliver's Travels shows how much the English language has changed over the last 300 years, and we should expect it to keep changing, perhaps until, 300 years from now, it sounds as strange as the island dialect in the movie. The same will of course be true for other languages.
At the same time, that doesn’t mean modern versions of languages will be lost to history, or that speakers of it won’t be able to talk with speakers of the 2321 dialects. Intelligent machines and perhaps other kinds of intelligent life forms we couldn’t even imagine today will dominate the planet in 2321, and they will also know all human languages, including archaic dialects like the English of 2021, and dead human languages like Ancient Greek, allowing them to communicate with however many of us there are left.
Humans will also easily overcome linguistic barriers thanks to vastly improved language translation machines. The brain implants I mentioned earlier could also let people share pure thoughts and emotions, obviating the need to resort to language for communication. Whatever the case, technology will let people communicate regardless of what their mother tongues were, so a person who only knew 2021 English could easily converse with one who only knew 2321 English.
The knowledge that this state of affairs is coming should assuage whatever fears anyone has about English (or any other language) becoming “bastardized,” “degenerating,” or going extinct. So long as dictionaries and records of how people spoke in this era survive long enough to be uploaded into the memory banks of the first A.I., our idiosyncratic take on the English language will endure forever and be forever reproducible.
Finally and on a side note, the intelligent machines of 2321 will probably communicate amongst themselves using languages of their own invention. Instead of having one language for everything, I suspect they’ll have a few languages, each optimally suited for a different thing (for example, there could be one alphabet and syntax structure that is used for mathematics, another for prose and poetry, and others for expressing other modes of thought), and that they will all speak them fluently. As intricate and expressive as today’s human languages are, they contain many inefficiencies and possibilities for improvement, and it’s inevitable that machines will apply information theory and linguistics to make something better.
Sea levels will have noticeably risen. In the 2144 segment, there's a scene where two characters look out the "digital window" of a unit in a high-rise apartment building and see a partly flooded cityscape. One of the characters says that the structures that are partly or fully underwater were part of Seoul, South Korea, and that the larger, newer buildings on dry land are part of "Neo-Seoul." In spite of the distressed condition of such a large area, the metropolis overall is thriving and thrums with people, vehicle traffic, and other activity. I think this is an accurate depiction of how global warming will impact the world by 2144.
Let me be clear about my beliefs: Global warming is real, human industrial activity is causing part of it, sea levels are rising because of it, it will be bad for the environment and the human race overall, and it’s worth the money to take some action against it now. However, the media and most famous people who have spoken on the matter have grossly blown the problem out of proportion by only focusing on its worst-case outcomes, which has tragically misled many ordinary people into assuming that global warming will destroy civilization or even render the Earth uninhabitable unless we forsake all the comforts of life now. The most credible scientific estimates attach extremely low likelihoods to those scenarios. The likeliest outcome, and the one I believe will come to pass, is that the rate of increase in global temperatures will start significantly slowing in the second half of this century, leading to a stabilization and even a decline of global temperatures in the 22nd century.
The higher temperatures will raise sea levels by melting ice in the polar regions and by causing seawater to slightly expand in volume (as water warms, its density decreases), but the waterline in most coastal areas will only be 1/2 to 1 meter higher in 2100 than it was in 2000. That will be barely noticeable across the lifetimes of most people. Sea levels will have risen even more by 2144, inundating some low-lying areas of coastal cities, but people will adapt as they did in the film–by abandoning the places that became too flood-prone and moving to higher ground. Depending on the local topography, this could entail simply moving a few blocks away to a new apartment complex. Except maybe in the poorest cities, the empty buildings would be demolished as people left, so there wouldn’t be any old, ghostly structures jutting out of the water as there were in the future Seoul.
And instead of the ocean suddenly inundating low-lying swaths of town, forcing their abandonment all at once in the middle of the night, those areas would be depopulated over the course of decades, with individual buildings being demolished piecemeal once flood insurance costs hit a tipping point, or once one particularly bad flood causes so much damage that the structure isn’t worth repairing. Again, the broader changes to the metro area would happen so gradually that few would notice.
If we could jump ahead to 2144, we’d be able to see and feel the effects of global warming. Some parts of Seoul (and other cities) that were formerly on the waterfront would be underwater. However, as was the case in the film, we’d also see that civilization had not only survived but thrived, and that the expansion of technology, science, and commerce had not halted due to the costs imposed by global warming. Global warming would not have come close to destroying civilization, and people would realize that the worst was behind them.
Of course, that doesn’t mean the threat will have been removed forever. What I’ll call a “second wave” of global warming is possible even farther in the future than 2144. You see, even if we completely decarbonize the economy and stop releasing all greenhouse gases into the atmosphere, we humans will still be producing heat. Solar panels, wind turbines, hydroelectric dam turbines, nuclear fission plants, and even clean nuclear FUSION plants that will “use water as fuel” all emit waste heat as an inevitable byproduct of generating electricity. Likewise, all of our machines that use that electricity to do useful work, like a factory machine that manufactures reusable shopping bags or an electric car that drives people around town, also release waste heat. This is thermodynamically unavoidable.
The Earth naturally radiates heat into space, and so far, it has been able to radiate away the heat produced by our industrial activity as fast as we emit it. However, if long-term global economic growth rates continue, in about 250 years we’ll pass a threshold where our machines release waste heat faster than the planet can shed it, and the Earth’s surface will start getting hotter. The second wave of global warming–driven by an entirely different mechanism than the first wave we’re now in–will start, and if left unaddressed, it will render the Earth uninhabitable by very roughly 400 years from now. Based on all these estimates, 2144 will probably be an interregnum between the two waves of global warming.
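To make that timescale concrete, here is a rough back-of-the-envelope sketch in Python. The inputs (roughly 20 terawatts of anthropogenic waste heat today, 2% annual energy growth, and a trouble threshold of about 1% of the sunlight the Earth absorbs) are my own illustrative assumptions, not figures from any particular study:

```python
# Rough sketch: years until waste heat from human energy use becomes a
# significant fraction of the sunlight the Earth absorbs.
# All numbers below are illustrative assumptions, not authoritative figures.
import math

current_waste_heat_tw = 20.0        # assumed: ~20 TW of anthropogenic heat today
annual_growth_rate = 0.02           # assumed: 2% long-term energy growth per year
absorbed_sunlight_tw = 120_000.0    # assumed: ~120,000 TW absorbed by the Earth
warming_threshold_fraction = 0.01   # assumed: trouble begins near 1% of sunlight

threshold_tw = absorbed_sunlight_tw * warming_threshold_fraction
years = math.log(threshold_tw / current_waste_heat_tw) / math.log(1 + annual_growth_rate)

print(f"Threshold: {threshold_tw:,.0f} TW of waste heat")
print(f"Years until threshold at {annual_growth_rate:.0%} growth: {years:.0f}")
```

With these particular inputs the crossover comes out to roughly two centuries; modestly different growth rates or thresholds push the figure toward the 250-year estimate above.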
Even if we melted all the ice on Mars and released all the CO2 trapped in its rocks, the resulting atmosphere would only be 7% as thick as Earth’s. That’s not good enough for humans to breathe, or to raise surface temperatures above freezing. https://www.nasa.gov/press-release/goddard/2018/mars-terraforming
The Intergovernmental Panel on Climate Change (IPCC) thinks global warming “doomsday” scenarios are very unlikely. The rate of global warming will significantly drop in the second half of this century, and global temperatures will probably stabilize in the next century. https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_Chapter12_FINAL.pdf
This is the fourth…and LAST…entry in my series of blog posts analyzing the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. You can view the previous installments of this series here:
“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”
MOSTLY RIGHT
Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.
While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.
If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly conspiracy thinking and closed-mindedness), spending decisions, and mental health.
Human transactions and decisions still require a “human agent of responsibility”: Autonomous cars aren’t allowed to drive unless a human is in the driver’s seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Probably most decisions that humans make are done without consulting a “machine-based intelligence.” Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with little or no research behind them. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.
“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”
MOSTLY RIGHT
Gunfire detection systems, which consist of networks of microphones distributed across an area and which use machine intelligence to recognize the sounds of gunshots and to triangulate their origins, were deployed in over 100 cities at the end of 2019. The dominant company in this niche industry, “ShotSpotter,” used human analysts to review its systems’ results before forwarding alerts to local police departments, so the systems were not truly automated, but they nonetheless made heavy use of machine intelligence.
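For readers curious how acoustic gunshot location works in principle, here is a minimal sketch of multilateration from arrival times. The microphone layout, the synthetic timings, and the use of SciPy’s least-squares solver are my own illustrative choices; this is not ShotSpotter’s actual algorithm.

```python
# Minimal sketch: estimate a sound source's position from the times its bang
# reaches several microphones (time-difference-of-arrival localization).
# Positions, timings, and solver choice are illustrative assumptions only.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second

# Known microphone positions (meters) at the corners of a city block grid.
mics = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])

# Simulate the arrival times for a hypothetical shot, plus ~1 ms timing jitter.
true_source = np.array([180.0, 320.0])
rng = np.random.default_rng(0)
arrival_times = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND
arrival_times += rng.normal(0.0, 1e-3, size=len(mics))

def residuals(params):
    """Difference between predicted and measured arrival times."""
    x, y, t0 = params  # unknown source position and unknown emission time
    distances = np.linalg.norm(mics - np.array([x, y]), axis=1)
    return (t0 + distances / SPEED_OF_SOUND) - arrival_times

# Start the solver from the center of the microphone array.
solution = least_squares(residuals, x0=[250.0, 250.0, 0.0])
x, y, t0 = solution.x
print(f"Estimated shot location: ({x:.1f} m, {y:.1f} m)")
```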
Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible.
In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.
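To give a sense of how this kind of matching works at a small scale, here is a minimal sketch using the open-source `face_recognition` Python library; the image filenames are hypothetical, and national surveillance systems obviously use far larger galleries and more sophisticated pipelines.

```python
# Minimal sketch of face matching with the open-source "face_recognition"
# library: compare a face seen by a camera against a small gallery of mugshots.
# Filenames are hypothetical; real systems work at vastly larger scale.
import face_recognition

# Build a tiny "watch list" of known faces (one encoding per mugshot photo).
watch_list = {
    "suspect_a": face_recognition.face_encodings(
        face_recognition.load_image_file("mugshot_a.jpg"))[0],
    "suspect_b": face_recognition.face_encodings(
        face_recognition.load_image_file("mugshot_b.jpg"))[0],
}

# Encode every face found in a frame captured by the surveillance camera.
frame = face_recognition.load_image_file("camera_frame.jpg")
unknown_encodings = face_recognition.face_encodings(frame)

for unknown in unknown_encodings:
    matches = face_recognition.compare_faces(list(watch_list.values()), unknown)
    for name, is_match in zip(watch_list.keys(), matches):
        if is_match:
            print(f"Alert: possible match with {name}")
```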
At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.
“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”
RIGHT
In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.
Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.
During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.
With these events in mind, it’s quite true that concerns over digital privacy and the confidentiality of personal data have become “major political and social issues,” and that there’s growing displeasure at the fact that “each individual’s practically every move [is] stored in a database somewhere.” The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the “General Data Protection Regulation” (GDPR).
Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being made public through hacks has also bolstered interest in commercial encryption. “WhatsApp” is a common text messaging app with built-in end-to-end encryption; that encryption was fully rolled out in 2016, and the app had 1.5 billion users by 2019. “Tor” is a web browser that routes traffic through an encrypted anonymity network; it became relatively common during the 2010s after it was revealed that even the NSA struggled to spy on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection with little expense and hassle, are in common use.
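To illustrate how accessible strong encryption has become to ordinary users, here is a minimal sketch using the Fernet recipe from the Python `cryptography` library. It is just one example of a consumer-grade tool, not the actual protocol used by WhatsApp (the Signal protocol) or Tor (onion routing).

```python
# Minimal sketch of strong symmetric encryption with the "cryptography" package.
# This only illustrates how easy near-unbreakable encryption is to use; it is
# not the scheme used by WhatsApp, Tor, or any commercial VPN.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # random key, base64-encoded
cipher = Fernet(key)

message = b"Meet at the usual place at noon."
token = cipher.encrypt(message)      # encrypted and integrity-protected
print("Ciphertext starts with:", token[:40])

# Only someone holding the key can recover (or tamper-check) the message.
print("Decrypted:", cipher.decrypt(token).decode())
```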
“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”
RIGHT
It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.
In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”
The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.
There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.
“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”
PARTLY RIGHT
As I said in part 2 of this review, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training is very common, but most workers devote small shares of their time to them, and the fraction of time spent doing workplace training doesn’t seem significantly different from what it was when the book was published.
From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.
This means the prediction was partly right, but also of questionable significance in the first place.
“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”
MOSTLY RIGHT
In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.
In 2018, a different computer program produced a painting–“Portrait of Edmond de Belamy”–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as “a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence.” That seems to fulfill the second part of the prediction (“These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.”)
Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called “EMMY” (an acronym for “Experiments in Musical Intelligence”) is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Listen to a few of the songs and judge their quality for yourself:
Computer scientists at OpenAI have built a neural network called “Jukebox” that is even more advanced than EMMY, and which can produce songs complete with simulated vocals singing lyrics. While the words don’t always make sense and there’s much room for improvement, most humans have no creative musical talent at all and couldn’t do any better, and the quality, sophistication, and coherence of the entirely machine-generated songs are very impressive (audio samples are available online).
Also at OpenAI, an artificial intelligence program called the “Generative Pre-trained Transformer” (GPT) was invented to understand and write text. In 2019, the second version of the program, “GPT-2,” made its debut and showed impressive skill at writing poetry, short news articles, and other content with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the “human range” of writing ability since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well-written in its original language.
Much of GPT-2’s poetry is also as good as–or, as bad as–that written by its human counterparts:
And they have seen the last light fail; By day they kneel and pray; But, still they turn and gaze upon The face of God to-day.
And God is touched and weeps anew For the lost souls around; And sorrow turns their pale and blue, And comfort is not found.
They have not mourned in the world of men, But their hearts beat fast and sore, And their eyes are filled with grief again, And they cease to shed no tear.
And the old men stand at the bridge in tears, And the old men stand and groan, And the gaunt grey keepers by the cross And the spent men hold the crown.
And their eyes are filled with tears, And their staves are full of woe. And no light brings them any cheer, For the Lord of all is dead
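Anyone can reproduce this kind of machine-generated verse today. Below is a minimal sketch using the publicly released GPT-2 model through the open-source Hugging Face `transformers` library; the prompt and sampling settings are my own choices, and the output will differ from the poem quoted above on every run.

```python
# Minimal sketch: generate text with the publicly released GPT-2 model via the
# Hugging Face "transformers" library. Prompt and sampling settings are
# illustrative choices; outputs vary from run to run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "And they have seen the last light fail;"
results = generator(
    prompt,
    max_length=60,       # total length in tokens, including the prompt
    do_sample=True,      # sample rather than always picking the likeliest word
    temperature=0.9,     # higher values produce looser, more "poetic" output
    num_return_sequences=2,
)

for i, r in enumerate(results, 1):
    print(f"--- sample {i} ---")
    print(r["generated_text"])
```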
In conclusion, the prediction is right that there were “virtual artists” in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and “humanness” to be of interest for reasons other than the novelty of their origins. They’ve raised serious questions among humans about the nature of creative thinking, and about whether machines are capable of it or soon will be. Finally, the virtual artists were “affiliated with”–or, more accurately, owned and controlled by–groups of humans.
“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”
UNCLEAR
It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?
Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
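To give a sense of what pitch-correction software does conceptually, here is a toy sketch using the open-source `librosa` audio library. The input filename is hypothetical, and applying a single correction to the whole take is far cruder than what Auto-Tune actually does frame by frame; it is only meant to show the basic idea of snapping a sung pitch to the nearest true note.

```python
# Toy sketch of pitch correction: estimate the sung pitch, snap it to the
# nearest semitone, and shift the audio by the difference. Real Auto-Tune works
# frame by frame and far more subtly; this is only an illustration.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("vocals.wav", sr=None)   # hypothetical input recording

# Estimate the fundamental frequency over time and take its median.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
median_hz = np.nanmedian(f0)

# How far (in semitones) is the singer from the nearest true note?
midi = librosa.hz_to_midi(median_hz)
correction = np.round(midi) - midi

# Shift the whole clip by that amount (crude: one correction for the whole take).
corrected = librosa.effects.pitch_shift(y, sr=sr, n_steps=float(correction))
sf.write("vocals_corrected.wav", corrected, sr)
print(f"Shifted by {correction:+.2f} semitones")
```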
If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.
“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”
WRONG
Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.
That said, the video game industry’s revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry’s 2018 figure was $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of “E-Sports”).
Revenues from virtual reality games totaled $1.2 billion in 2019, meaning 99% of the video game industry’s revenues that year DID NOT come from “virtual-experience software.” The overwhelming majority of video games were played on flat TV screens and monitors that display only 2D images. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you’re actually there in the simulated environment. While they don’t meet the technical definition of being “virtual reality” games, some of them are so engrossing that they might as well be.
“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”
MOSTLY WRONG
Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.
In other words, the world’s greatest military power only allocates 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that it is not the primary threat to international security, either.
Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big a threat as cyberattackers. In fact, most military spending and training in the U.S. and all other countries is still devoted to preparing for traditional warfare between nation-states, as evidenced by things like the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and the time spent practicing for large battles between organized foes.
“Small groups” of terrorists inflict disproportionate amounts of damage against society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.
Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which only killed five people, but nonetheless bore some testament to Kurzweil’s assessment of bioterrorism’s potential threat), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made it more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.
Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.
“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”
UNCLEAR
What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.
What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.
“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: The Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).
With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.
If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.
If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.
The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, indicating that no one is actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.
Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.
“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”
PARTLY RIGHT
The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.
The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first, fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that number has climbed further, to 5,000 genetic disorders.
However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, just understanding what genes themselves do is not enough to understand human development or disease susceptibility.
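The “summed, small effects of many genes” idea can be illustrated with a toy polygenic score; all of the numbers below are made up purely for demonstration and do not come from any real GWAS.

```python
# Toy sketch of a polygenic score: a trait is modeled as the sum of many tiny
# per-variant effects rather than the work of a single gene.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
num_variants = 1_000                     # assume 1,000 trait-associated variants

# Effect size of each variant (say, millimeters of height per allele copy).
effect_sizes = rng.normal(loc=0.0, scale=0.5, size=num_variants)

# One person's genotype: 0, 1, or 2 copies of the trait-raising allele per variant.
genotype = rng.integers(0, 3, size=num_variants)

# The predicted trait contribution is the dot product of the two vectors.
polygenic_score = np.dot(genotype, effect_sizes)
print(f"Polygenic score (deviation from population mean): {polygenic_score:+.1f} mm")
```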
Returning to the text of the prediction, the meaning of “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.
The aging process is more well-understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.
“The expected life span…[is now] over one hundred.”
WRONG
The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.
“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”
WRONG
Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.
Top national security experts in the U.S. also assigned biological weapons low priority, as evidenced in the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report only mentions “biological weapons/warfare” twice. By contrast, “migration/migrants/immigration” appears 11 times, “nuclear weapon” eight times, and “ISIS” 29 times.
As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.
Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it’s unclear if the “bar” has gotten low enough for a graduate student to do it. Making a pathogen in a lab that has the qualities necessary for a biological weapon, verifying its effects, purifying it, creating a delivery system for it, and disseminating it–all without being caught before completion or inadvertently infecting yourself with it before the final step–is much harder than hysterical news articles and self-interested talking head “experts” suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds and with levels of expertise exceeding a typical graduate student, years of work, and millions of dollars.
“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”
RIGHT
The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.
Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.
“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”
MOSTLY RIGHT
Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren’t considered accurate enough to “diagnose” health conditions. Rather, their role is to detect and alert wearers to signs of potential health problems, whereupon the wearers consult medical professionals with more advanced machinery and receive a diagnosis.
By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2,” and the “Apple Watch Series 4 and 5” had FDA-approved electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, 2,000 received alerts in 2018 from their devices of possible heartbeat problems. Fifty-seven percent of people in that subset sought medical help upon getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately, 84% of people in the subset were confirmed to have atrial fibrillation.
The Apple Watches also have “hard fall” detection features, which use accelerometers to recognize when their wearers suddenly fall down and then don’t move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury from a fall count as an “acute health condition” per the prediction’s text?).
A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by the Japanese medical device company “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.
Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. The devices are reasonably accurate, and are typically strapped to the wearer’s upper arm or waist if they are jogging, or kept in a pocket when doing other types of activity. Having a smartphone in your pocket isn’t literally the same as having it “built into [your] clothing” as the prediction says, but it’s close enough to satisfy the spirit of the prediction. In fact, being able to easily insert and remove a device into any article of clothing with a pocket is better than having a device integrated into the clothing since it allows for much more flexibility of attire–if you want to try out a new jogging route and also want to measure how long it is, you don’t have to remember to wear your one and only T-shirt with the built-in activity monitoring devices.
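The core of a step-counting app is surprisingly simple. Here is a toy sketch that counts peaks in the magnitude of a simulated accelerometer signal; the synthetic data, sampling rate, and thresholds are all illustrative assumptions rather than values from any real pedometer app.

```python
# Toy sketch of step counting: look for peaks in the magnitude of the phone's
# accelerometer signal. The synthetic signal, sampling rate, and thresholds are
# illustrative assumptions, not values from any real app.
import numpy as np
from scipy.signal import find_peaks

SAMPLE_RATE_HZ = 50
DURATION_S = 10
t = np.linspace(0, DURATION_S, SAMPLE_RATE_HZ * DURATION_S)

# Simulate walking at ~2 steps per second: gravity plus a 2 Hz bounce plus noise.
rng = np.random.default_rng(1)
accel_magnitude = 9.81 + 1.5 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 0.3, t.size)

# A "step" is a sufficiently tall peak, with peaks at least 0.3 seconds apart.
peaks, _ = find_peaks(accel_magnitude, height=10.3, distance=int(0.3 * SAMPLE_RATE_HZ))

print(f"Steps detected in {DURATION_S} seconds: {len(peaks)}")  # expect roughly 20
```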
Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, no items of jewelry with embedded health tracking capabilities were available in 2019, so that part of the prediction is wrong.
“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”
RIGHT
In 2014, a chatbot named “Eugene Goostman” made global headlines by “passing the Turing Test” at an event held at the University of Reading. However, AI experts soon determined it was invalid because the dialog between the human judges and the machine only lasted five minutes. Ray Kurzweil himself even interacted with Eugene Goostman and posted the conversation on his website to show the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:
Kurzweil: How old are you? Eugene: I’m a little boy. 13 years old.
Kurzweil: How much is two plus two? Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now? Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Kurzweil: I live in the capital of the United States. Do you know where that is? Eugene: Every nerd knows that the capital of The United States is Washington.
Kurzweil: I live in the capital of the country that put a man on the moon. Eugene: Well, if you want it, the capital of The United States is Washington.
Kurzweil: I live in the capital of the country that built the great wall. Eugene: Tell me more about Capital. I like to learn about different places!
In 2018, a Google AI program called “Duplex” also made headlines for “passing the Turing Test” in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While an impressive technological feat, experts again disagreed with the media’s portrayal of its capabilities, and pointed out that the human-machine interactions weren’t valid Turing Tests because they were too short and focused on a narrow subject of conversation.
“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”
RIGHT
The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness. A few recent examples of this include:
Those are all thoughtful articles written by experts whose credentials are relevant to the subject of machine consciousness. There are countless more articles, essays, speeches, and panel discussions about it available on the internet.
Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.
“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”
RIGHT
Critics often–and rightly–point out that the most impressive “A.I.s” owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.
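That human-machine collaboration is easy to see in code. Below is a minimal sketch of the standard supervised-learning recipe using scikit-learn, in which the classifier’s “intelligence” is bootstrapped entirely from examples that humans have already labeled; the small digits dataset stands in for the millions of hand-tagged photos mentioned above.

```python
# Minimal sketch of supervised learning: the classifier only "knows" anything
# because humans supplied labeled examples. The small digits dataset stands in
# for the millions of human-tagged photos used by real image recognizers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale images of handwritten digits, each labeled 0-9 by a person.
images, labels = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

# A small neural network learns the mapping from pixels to human-chosen labels.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen images: {model.score(X_test, y_test):.1%}")
```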
Finally, even the smartest and most capable machines can’t turn themselves on without human help, and still have very “brittle” and task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars, where the computers were smart enough to drive safely by themselves in almost all road conditions, but laws required the vehicles to watch the human in the driver’s seat and stop if he or she wasn’t paying attention to the road and touching the controls.
Well, well, well…that’s it. I have finally come to the end of my project to review Ray Kurzweil’s predictions for 2019. This has been the longest single effort in the history of my blog, and I’m glad the next round of his predictions pertains to 2029, so I can have time to catch my breath. I would say the experience has been great, but like the whole year of 2020, I’m relieved to be able to turn the page and move on.
Happy New Year!
Links:
Advances in AI during the 2010s forced humans to examine the specialness of human thinking, whether machines could also be intelligent and creative and what it would mean for humans if they could. https://www.bbc.com/news/business-47700701
In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve. https://www.factcheck.org/2013/03/bloombergs-obesity-claim/
“Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”? https://en.wikipedia.org/wiki/Auto-Tune
The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks. https://www.ersj.eu/journal/1245
This is the third entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My previous entries on this subject can be found here:
“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”
PARTLY RIGHT
While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”
“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”
WRONG
As stated in previous installments of this analysis, the computerized glasses, goggles and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” that would have allowed users to see simulated 3D images of people and other things in their proximities. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally do 2D video calls using cameras and traditional screen displays.
Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how it’s possible my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by people on the other end of a call, and the seemingly small extra amount of effort required to do a video call compared to a mere voice call is actually a larger barrier to the former than futurists 20 years ago probably thought it would be.
“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”
MOSTLY WRONG
As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.
Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.
A human being’s field of vision is a front-facing, flared-out “cone” with a 210 degree horizontal arc and a 150 degree vertical arc. This means that if you put a concave display in front of a person’s face that was big enough to fill those degrees of horizontal and vertical width, it would fill the person’s entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.
If this concave screen’s pixels were squares measuring one degree of length to a side, then the screen would look like a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than how he normally sees. However, lab tests show that if we shrink the pixels to 1/60th that size, so the concave screen is a grid of 12,600 x 9,000 pixels, then the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality is said to be “lifelike.”
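The arithmetic behind that “lifelike” threshold is simple enough to check with a few lines of code, using the field-of-view and pixels-per-degree figures given above:

```python
# The arithmetic behind a "lifelike" display: cover the human field of view at
# roughly 60 pixels per degree (about one arcminute per pixel).
horizontal_fov_deg = 210
vertical_fov_deg = 150
pixels_per_degree = 60

width_px = horizontal_fov_deg * pixels_per_degree    # 12,600
height_px = vertical_fov_deg * pixels_per_degree     # 9,000
total_px = width_px * height_px

print(f"Required resolution: {width_px:,} x {height_px:,} "
      f"({total_px / 1e6:.0f} megapixels, vs. roughly 8 MP for a 4K TV)")
```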
No commercially available VR goggles have anything close to lifelike displays, either in terms of field of view or 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they have 60-pixels-per-degree resolution, but only in the central portions of their display screens, where the user’s eyes are usually looking. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves with a very high-quality 4K camera and you viewed the live footage through Varjo VR-1 goggles while keeping your eyes focused on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.
Problematically, a pair of Varjo VR-1’s costs $6,000. Also, in 2019, it was very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you would be able to see most of the other person’s face.
VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.
“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”
MOSTLY RIGHT
In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.
The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.
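For readers curious about what is happening under the hood, speech-to-speech translation apps generally chain three separate components: speech recognition, text-to-text machine translation, and speech synthesis. The sketch below is a hypothetical outline of that pipeline only; the function names are placeholders invented for illustration, not Google Translate’s or any other real app’s API.

```python
# Hypothetical outline of a speech-to-speech translation pipeline.
# All three helper functions are invented placeholders standing in for
# whatever speech-recognition, machine-translation, and text-to-speech
# services a real translation app uses internally.

def transcribe(audio: bytes, language: str) -> str:
    """Speech recognition: audio in the source language -> text."""
    raise NotImplementedError  # placeholder

def translate_text(text: str, source: str, target: str) -> str:
    """Machine translation: source-language text -> target-language text."""
    raise NotImplementedError  # placeholder

def synthesize_speech(text: str, language: str) -> bytes:
    """Text-to-speech: target-language text -> audio."""
    raise NotImplementedError  # placeholder

def speech_to_speech(audio: bytes, source: str, target: str) -> bytes:
    text = transcribe(audio, source)
    translated = translate_text(text, source, target)
    return synthesize_speech(translated, target)
```

Errors compound across the three stages, which is part of why the end result is “fair-to-good” rather than professional quality.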
“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”
MOSTLY RIGHT
Reading text is easily and commonly done off of smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, and not three as the prediction specified.
As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).
“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”
WRONG
Aside from a few expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is the vibrating video game control pad, which crudely approximates the feeling of shooting a gun or being near an explosion.
“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”
WRONG
Though video phone technology has made remote doctor appointments more common, it is not yet possible for doctors to remotely “touch” patients during physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices that let people send and receive physical force over a distance (called “teledildonics” when designed specifically for sexual use) do exist, but they are too expensive and technically limited to have found more than niche use.
“Rapid economic expansion and prosperity has continued.”
PARTLY RIGHT
Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?
The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.
Above is a chart I made using data from the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.
And this final chart shows global GDP growth over the same period.
Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it. Even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. Still, the fact that something as bad as the Great Recession happened at all means the prediction is wrong in an important sense: it implied that economic growth would be continuous, but growth went negative for most of 2009, in the worst downturn since the 1930s.
At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.
While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods do almost look like mirror images of each other. (Additionally, if we assume that this adjustment happened because wiser financial regulators kept the lending bubbles and fake investments from coming into existence in the first place, then we can also assume the Great Recession never happened, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, we see that economic growth from 1998 – December 2008 was roughly the same as growth from January 2009 – December 2019.
“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”
WRONG
“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.
As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involving to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.
“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”
MOSTLY WRONG
The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
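As a rough illustration of how those “stops” work, here is a minimal Python sketch of the trigger logic; the StopOrder structure and the price check are simplified assumptions for illustration, not any real brokerage’s system.

```python
# Simplified sketch of stop-order logic: when the live price crosses a
# preset threshold, an order fires automatically with no human involved.

from dataclasses import dataclass

@dataclass
class StopOrder:
    symbol: str
    quantity: int
    stop_price: float
    side: str  # "sell" (stop-loss) or "buy" (stop-entry)

def should_trigger(order: StopOrder, last_price: float) -> bool:
    if order.side == "sell":
        return last_price <= order.stop_price  # sell if price falls to the stop
    return last_price >= order.stop_price      # buy if price rises to the stop

def triggered_orders(orders: list[StopOrder], last_price: float) -> list[StopOrder]:
    """Return the preset orders that would fire at the current price."""
    return [o for o in orders if should_trigger(o, last_price)]

# Example: a stop-loss that sells 100 shares of a hypothetical stock at $95.
watchlist = [StopOrder("XYZ", 100, 95.00, "sell")]
print(triggered_orders(watchlist, 94.50))  # the stop-loss fires
```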
“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”
PARTLY RIGHT
Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.
No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.
“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”
WRONG*
The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.
Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but will become mandatory in the U.S. starting in 2022.
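To give a sense of the decision logic involved, here is a minimal sketch of the kind of time-to-collision check an automated emergency braking system performs; the 1.5-second threshold and the function names are illustrative assumptions, not any manufacturer’s actual implementation.

```python
# Illustrative time-to-collision (TTC) check of the sort an automated
# emergency braking system runs continuously. The 1.5-second threshold
# is an assumption chosen only for illustration.

TTC_BRAKE_THRESHOLD_S = 1.5  # assumed braking threshold, in seconds

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:           # the gap is constant or growing
        return float("inf")
    return distance_m / closing_speed_mps

def emergency_brake_needed(distance_m: float, closing_speed_mps: float) -> bool:
    """Return True when the projected time to collision drops below the threshold."""
    return time_to_collision(distance_m, closing_speed_mps) < TTC_BRAKE_THRESHOLD_S

# Example: 20 m behind the car ahead, closing at 15 m/s -> TTC of about 1.33 s -> brake.
print(emergency_brake_needed(20.0, 15.0))  # True
```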
*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring human drivers to keep their hands on the steering wheel and their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.
In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.
“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”
UNCLEAR (but probably WRONG)
The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?
What is a “personal flying vehicle”? A flying car, which is capable of both flight through the air and horizontal movement over roads, or a vehicle that is capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?
But even if we had answers to those questions, it wouldn’t matter much, since “have been demonstrated” is an escape hatch that lets Kurzweil claim at least some measure of correctness: the prediction counts as true if just two prototypes of personal flying vehicles have been built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.
At least the prediction contains one well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. When fixed in one position, they are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.
“There are very few transportation accidents.”
WRONG
Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.
In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.
“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”
WRONG
As I noted in part 1 of this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.
Aaaaaand that’s it for now. I originally estimated that this project to analyze all of Ray Kurzweil’s 2019 predictions could be spread out over three blog entries, but it has taken even more time and effort than I anticipated, and I need one more. Stay tuned, the fourth AND FINAL installment is coming soon!
Another 2018 survey, commissioned by the telecom company Vonage, found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which aligns with my everyday observations. https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/
A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible. https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf
The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device. https://www.cnet.com/reviews/oculus-go-review/