In 1999, Ray Kurzweil, one of the world’s greatest futurists, published a book called The Age of Spiritual Machines. In it, he made the case that artificial intelligence, nanomachines, virtual reality, brain implants, and other technologies would greatly improve during the 21st century, radically altering the world and the human experience. In the final four chapters, titled “2009,” “2019,” “2029,” and “2099,” he made detailed predictions about what the state of key technologies would be in each of those years, and how they would impact everyday life, politics and culture.
Towards the end of 2009, a number of news columnists, bloggers and even Kurzweil himself weighed in on how accurate his predictions from the eponymous chapter had turned out to be. By contrast, no such analysis was done over the past year regarding his 2019 predictions. As such, I’m taking it upon myself to do it.
I started analyzing the accuracy of Kurzweil’s predictions in late 2019 and wanted to publish my full results before the end of that year. However, the task required me to do much more research than I had expected, so I missed that deadline. Really digging into the text of The Age of Spiritual Machines and parsing each sentence made it clear that the number and complexity of the 2019 predictions were greater than a casual reading would suggest. Once I realized how big of a task it would be, I became kind of demoralized and switched to working on easier projects for this blog.
With the end of 2020 on the horizon, I think time is running out to finish this, and I’ve decided to tackle the problem. Except where noted, I will only use sources published before January 1, 2020 to support my conclusions.
“Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”
RIGHT
A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is (also, it doesn’t even need to run on electricity). This means something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer. These kinds of items were ubiquitous in developed countries in 1998 when Ray Kurzweil wrote the book, so his “futuristic” prediction for 2019 could have just as easily applied to the reality of 1998. This is an excellent example of Kurzweil making a prediction that leaves a certain impression on the casual reader (“Kurzweil says computers will be inside EVERY object in 2019!”) that is unsupported by a careful reading of the prediction.
“People routinely use three-dimensional displays built into their glasses or contact lenses. These ‘direct eye’ displays create highly realistic, virtual visual environments overlaying the ‘real’ environment.”
MOSTLY WRONG
The first attempt to introduce augmented reality glasses in the form of Google Glass was probably the most notorious consumer tech failure of the 2010s. To be fair, I think this was because the technology wasn’t ready yet (e.g. – small visual display, low-res images, short battery life, high price), and not because the device concept is fundamentally unsound. The technological hangups that killed Google Glass will of course vanish in the future thanks to factors like Moore’s Law. Newer AR glasses, like Microsoft’s Hololens, are already superior to Google Glass, and given the pace of improvement, I think AR glasses will be ready for another shot at widespread commercialization by the end of the 2020s, but they will not replace smartphones for a variety of reasons (such as the unwillingness of many people to wear glasses, widespread discomfort with the possibility that anyone wearing AR glasses might be filming the people around them, and durability and battery life advantages of smartphones).
Kurzweil’s prediction that contact lenses would have augmented reality capabilities completely failed. A handful of prototypes were made, but never left the lab, and there’s no indication that any tech company is on the cusp of commercializing them. I doubt it will happen until the 2030s.
However, people DO routinely access augmented reality, but through their smartphones and not through eyewear. Pokemon Go was a worldwide hit among video gamers in 2016, and is an augmented reality game where the player uses his smartphone screen to see virtual monsters overlaid across live footage of the real world. Apps that let people change their appearances during live video calls (often called “face filters”), such as by making themselves appear to have cartoon rabbit ears, are also very popular among young people.
So while Kurzweil got augmented reality technology’s form factor wrong, and overestimated how quickly AR eyewear would improve, he was right that ordinary people would routinely use augmented reality.
The augmented reality glasses will also let you experience virtual reality.
WRONG
Augmented reality glasses and virtual reality goggles remain two separate device categories. I think we will someday see eyewear that merges both functions, but it will take decades to invent glasses that are thin and light enough to be worn all day, untethered, but that also have enough processing power and battery life to provide a respectable virtual reality experience. The best we can hope for by the end of the 2020s will be augmented reality glasses that are good enough to achieve ~10% of the market penetration of smartphones, and virtual reality goggles that have shrunk to the size of ski goggles.
Of note is that Kurzweil’s general sentiment that VR would be widespread by 2019 is close to being right. VR gaming made a resurgence in the 2010s thanks to better technology, and looks poised to go mainstream in the 2020s.
The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.
PARTLY RIGHT
The most popular AR glasses of the 2010s, Google Glass, worked by projecting images onto their wearer’s retinas. The more advanced models of AR glasses that existed at the end of the decade used a mix of methods to display images, none of which has established dominance.
The “Magic Leap One” AR glasses use the retinal projection technology Kurzweil favored. They are superior to Google Glass since images are displayed to both eyes (Glass only had a projector for the right eye), in higher resolution, and covering a larger fraction of the wearer’s field of view (FOV). Magic Leap One also has advanced sensors that let it map its physical surroundings and movements of its wearer, letting it display images of virtual objects that seem to stay fixed at specific points in space (Kurzweil called this feature “Virtual-reality overlay display”).
Microsoft’s “Hololens” uses a different technology to produce images: the lenses are in fact transparent LCD screens. They display images just like a TV screen or computer monitor would. However, unlike those devices, the Hololens’ LCDs are clear, allowing the wearer to also see the real world in front of them.
The “Vuzix Blade” AR glasses have a small projector that beams images onto the lens in front of the viewer’s right eye. Nothing is directly beamed onto his retina.
It must be emphasized again that, at the end of 2019, none of these or any other AR glasses were in widespread or common use, even in rich countries. They were confined to small numbers of hobbyists, technophiles, and software developers. A Magic Leap One headset cost $2,300 – $3,300 depending on options, and a Hololens was $3,000.
And as stated, AR glasses and VR goggles remained two different categories of consumer devices in 2019, with very little crossover in capabilities and uses. The top-selling VR goggles were the Oculus Rift and the HTC Vive. Both devices use tiny OLED screens positioned a few inches in front of the wearer’s eyes to display images, and as a result, are much bulkier than any of the aforementioned AR glasses. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.
“[There] are auditory ‘lenses,’ which place high resolution-sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.”
MOSTLY RIGHT
Humans have the natural ability to tell where sounds are coming from in 3D space because we have “binaural hearing”: our brains can calculate the spatial origin of the sound by analyzing the time delay between that sound reaching each of our ears, as well as the difference in volume. For example, if someone standing to your left is speaking, then the sounds of their words will reach your left ear a split second sooner than they reach your right ear, and their voice will also sound louder in your left ear.
By carefully controlling the timing and loudness of sounds that a person hears through their headphones or through a single speaker in front of them, we can take advantage of the binaural hearing process to trick people into thinking that a recording of a voice or some other sound is coming from a certain direction even though nothing is there. Devices that do this are said to be capable of “binaural audio” or “3D audio.” Kurzweil’s invented term “auditory lenses” means the same thing.
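To make the binaural mechanism concrete, here is a minimal Python sketch of the simplified Woodworth spherical-head model for the interaural time difference. The head radius, speed of sound, and function name are illustrative assumptions, and real binaural renderers use measured head-related transfer functions rather than this single formula.

import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Rough interaural time difference (ITD) for a distant sound source.

    Simplified Woodworth spherical-head model:
    ITD = (r / c) * (theta + sin(theta)), with theta the azimuth in radians.
    A source directly ahead gives zero delay; a source at 90 degrees gives
    the maximum delay of roughly 0.6-0.7 milliseconds.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees to the listener's left reaches the left ear first.
print(f"ITD at 30 deg: {interaural_time_difference(30) * 1000:.3f} ms")
print(f"ITD at 90 deg: {interaural_time_difference(90) * 1000:.3f} ms")

Running it shows that a sound 90 degrees off to one side arrives at the nearer ear roughly 0.65 milliseconds early, which is exactly the kind of tiny timing cue these devices reproduce artificially.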
Yes, there are eyeglasses with built-in speakers that play binaural audio. The Bose Frames “smart sunglasses” is the best example. Even though the devices are not common, they are commercially available, priced low enough for most people to afford them ($200), and have gotten good user reviews. Kurzweil gets this one right, and not by an eyerolling technicality as would be the case if only a handful of million-dollar prototype devices existed in a tech lab and barely worked.
Wireless earbuds are much more popular, and upper-end devices like the SoundPEATS Truengine 2 have impressive binaural audio capabilities. It’s a stretch, but you could argue that branding and sleek, aesthetically pleasing design qualify some higher-end wireless earbud models as “jewelry.”
Sound bars have also improved and have respectable binaural surround sound capabilities, though they’re still inferior to traditional TV entertainment system setups where the sound speakers are placed at different points in the room. Sound bars are examples of single-point devices that can trick people into thinking sounds are originating from different points in space, and in spirit, I think they are a type of technology Kurzweil would cite as proof that his prediction was right.
The last part of Kurzweil’s prediction is wrong, since audio implants into the inner ears are still found only in people with hearing problems, which is the same as it was in 1998. More generally, people have shown themselves more reluctant to surgically implant technology in their bodies than Kurzweil seems to have predicted, but they’re happy to externally wear it or to carry it in a pocket.
“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication. “
MOSTLY WRONG
Rumors of the keyboard’s demise have been greatly exaggerated. Consider that, in 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs.
The research I’ve done suggests that the typical desktop, laptop, and ultramobile computer has a lifespan of four years. If we accept this, and also assume that the worldwide computer sales figures for 2015, 2016, and 2017 were the same as 2018’s, then it means there are 1.036 billion fully functional desktops, laptops, and ultramobile computers on the planet (about one for every seven people). By extension, that means there are at least 1.036 billion keyboards. No one could reasonably say that Kurzweil’s prediction that keyboards would be “rare” by 2019 is correct.
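A back-of-envelope Python version of that estimate, using the same assumptions stated above (flat annual sales of 259 million units and a four-year lifespan), looks like this:

# Reproduction of the installed-base estimate above.
# Assumption: annual sales of desktops, laptops, and "ultramobiles" were flat
# at 259 million units from 2015 through 2018, and each machine lasts 4 years.
annual_sales = 259_000_000
lifespan_years = 4

installed_base = annual_sales * lifespan_years
print(f"Estimated working machines (and keyboards): {installed_base:,}")
# -> 1,036,000,000
print(f"People per machine: {7_700_000_000 / installed_base:.1f}")
# -> roughly one machine for every seven of Earth's ~7.7 billion people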
The second sentence in Kurzweil’s prediction is harder to analyze since the meaning of “interaction with computing” is vague and hence subjective. As I wrote before, a Casio digital watch counts as a computer, so if it’s nighttime and I press one of its buttons to illuminate the display so I can see the time, does that count as an “interaction with computing”? Maybe.
If I swipe my thumb across my smartphone’s screen to unlock the device, does that count as an “interaction with computing” accomplished via a finger gesture? It could be argued so. If I then use my index finger to touch the Facebook icon on my smartphone screen to open the app, and then use a flicking motion of my thumb to scroll down over my News Feed, does that count as two discrete operations in which I used finger gestures to interact with computing?
You see where this is going…
Being able to set the bar that low makes it possible that this part of Kurzweil’s prediction is right, as unsatisfying as that conclusion may be.
Virtual reality gaming makes use of hand-held and hand-worn controllers that monitor the player’s hand positions and finger movements so he can grasp and use objects in the virtual environment, like weapons and steering wheels. Such actions count as interactions with computing. The technology will only get more refined, and I can see them replacing older types of handheld game controllers.
Hand gestures, along with speech, are also the natural means to interface with augmented reality glasses since the devices have tiny surfaces available for physical contact, meaning you can’t fit a keyboard on a sunglass frame. Future AR glasses will have front-facing cameras that watch the wearer’s hands and fingers, allowing them to interact with virtual objects like buttons and computer menus floating in midair, and to issue direct commands to the glasses through specific hand motions. Thus, as AR glasses get more popular in the 2020s, so will the prevalence of this mode of interface with computers.
“Two-way natural-language spoken communication” is now a common and reliable means of interacting with computers, as anyone with a smart speaker like an Amazon Echo can attest. In fact, virtual assistants like Alexa, Siri, and Cortana can be accessed via any modern smartphone, putting this within reach of billions of people.
The last part of Kurzweil’s prediction, that people would be using “facial expressions” to communicate with their personal devices, is wrong. For what it’s worth, machines are gaining the ability to read human emotions through our facial expressions (including “microexpressions”) and speech. This area of research, called “affective computing,” is still stuck in the lab, but it will doubtless improve and find future commercial applications. Someday, you will be able to convey important information to machines through your facial expressions, tone of voice, and word choice just as you do to other humans now, enlarging your mode of interacting with “computing” to encompass those domains.
“Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Users can model the personality of their intelligent assistants on actual persons, including themselves…”
WRONG
The most widely used computer-based personal assistants–Alexa, Siri, and Cortana–don’t have “personalities” or simulated emotions. They always speak in neutral or slightly upbeat tones. Users can customize some aspects of their speech and responses (i.e. – talking speed, gender, regional accent, language), and Alexa has limited “skill personalization” abilities that allow it to tailor some of its responses to the known preferences of the user interacting with it, but this is too primitive to count as a “personality adjustment” feature.
My research didn’t find any commercially available AI personal assistant that has something resembling a “human personality,” or that is capable of changing that personality. However, given current trends in AI research and natural language understanding, and growing consumer pressure on Silicon Valley to make products that better cater to the needs of nonwhite people, it is likely this will change by the end of this decade.
“Typically, people do not own just one specific ‘personal computer’…”
RIGHT
A 2019 Pew survey showed that 75% of American adults owned at least one desktop or laptop PC. Additionally, 81% of them owned a smartphone and 52% had tablets, and both types of devices have all the key attributes of personal computers (advanced data storing and processing capabilities, audiovisual outputs, accepts user inputs and commands).
The data from that and other late-2010s surveys strongly suggest that most of the Americans who don’t own personal computers are people over age 65, and that the 25% of Americans who don’t own traditional PCs are very likely to be part of the 19% that also lack smartphones, and also part of the 48% without tablets. The statistical evidence plus consistent anecdotal observations of mine lead me to conclude that the “typical person” in the U.S. owned at least two personal computers in late 2019, and that it was atypical to own fewer than that.
“Computing and extremely high-bandwidth communication are embedded everywhere.”
MOSTLY RIGHT
This is another prediction whose wording must be carefully parsed. What does it mean for computing and telecommunications to be “embedded” in an object or location? What counts as “extremely high-bandwidth”? Did Kurzweil mean “everywhere” in the literal sense, including the bottom of the Marianas Trench?
First, thinking about my example, it’s clear that “everywhere” was not meant to be taken literally. The term was a shorthand for “at almost all places that people typically visit” or “inside of enough common objects that the average person is almost always near one.”
Second, as discussed in my analysis of Kurzweil’s first 2019 prediction, a machine that is capable of doing “computing” is of course called a “computer,” and they are much more ubiquitous than most people realize. Pocket calculators, programmable thermostats, and even Casio digital watches count as computers. Even 30-year-old cars have computers inside of them. So yes, “computing” is “embedded ‘everywhere’” because computers are inside of many manmade objects we have in our homes and workplaces, and that we encounter in public spaces.
Of course, scoring that part of Kurzweil’s prediction as being correct leaves us feeling hollow since those devices don’t offer the full range of useful things we associate with “computing.” However, as I noted in the previous prediction, 81% of American adults own smartphones, they keep them in their pockets or near their bodies most of the time, and smartphones have all the capabilities of general-purpose PCs. Smartphones are not “embedded” in our bodies or inside of other objects, but given their ubiquity, they might as well be. Kurzweil was right in spirit.
Third, the Wifi and mobile phone networks we use in 2019 are vastly faster at data transmission than the modems that were in use in 1999, when The Age of Spiritual Machines was published. At that time, the commonest way to access the internet was through a 33.6k dial-up modem, which could upload and download data at a maximum speed of 33,600 bits per second (bps), though upload speeds never got as close to that limit as download speeds. 56k modems had been introduced in 1998, but they were still expensive and less common, as were broadband alternatives like cable TV internet.
In 2019, standard internet service packages in the U.S. typically offered WiFi download speeds of 30,000,000 – 70,000,000 bps (my home WiFi speed is 30-40 Mbps, and I don’t have an expensive service plan). Mean U.S. mobile phone internet speeds were 33,880,000 bps for downloads and 9,750,000 bps for uploads. That’s a 1,000 to 2,000-fold speed increase over 1999, and is all the more remarkable since today’s devices can move that much data without having to be physically plugged in to anything, whereas the PCs of 1999 had to be plugged into modems. And thanks to the wireless nature of internet data transmissions, “high-bandwidth communication” is available in all but the remotest places in 2019, whereas it was only accessible at fixed-place computer terminals in 1999.
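For the curious, those speed-increase figures fall out of simple division; here is a small Python sketch of the comparison, using only the dial-up and 2019 speeds quoted above:

# How the roughly thousand-fold speedup figure above falls out of the numbers.
dialup_bps = 33_600                        # typical 33.6k modem, 1999
home_wifi_bps = (30_000_000, 70_000_000)   # typical U.S. home WiFi packages, 2019
mobile_down_bps = 33_880_000               # mean U.S. mobile download speed, 2019

for label, bps in [("WiFi (low end)", home_wifi_bps[0]),
                   ("WiFi (high end)", home_wifi_bps[1]),
                   ("Mobile download", mobile_down_bps)]:
    print(f"{label}: {bps / dialup_bps:,.0f}x faster than dial-up")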
Again, Kurzweil’s use of the term “embedded” is troublesome, since it’s unclear how “high-bandwidth communication” could be embedded in anything. It emanates from and is received by things, and it is accessible in specific places, but it can’t be “embedded.” Given this and the other considerations, I think every part of Kurzweil’s prediction was correct in spirit, but that he was careless with how he worded it, and that it would have been better written as: “Computing and extremely high-bandwidth communication are available and accessible almost everywhere.”
“Cables have largely disappeared.”
MOSTLY RIGHT
Assessing the prediction requires us to deduce which kinds of “cables” Kurzweil was talking about. To my knowledge, he has never been an exponent of wireless power transfer and has never forecast that technology becoming dominant, so it’s safe to say his prediction didn’t pertain to electric cables. Indeed, larger computers like desktop PCs and servers still need to be physically plugged into electrical outlets all the time, and smaller computing devices like smartphones and tablets need to be physically plugged in to routinely recharge their batteries.
That leaves internet cables and data/power cables for peripheral devices like keyboards, mice, joysticks, and printers. On the first count, Kurzweil was clearly right. In 1999, WiFi was a new invention that almost no one had access to, and logging into the internet always meant sitting down at a computer that had some type of data plug connecting it to a wall outlet. Cell phones weren’t able to connect to and exchange data with the internet, except maybe for very limited kinds of data transfers, and it was a pain to use the devices for that. Today, most people access the internet wirelessly.
On the second count, Kurzweil’s prediction is only partly right. Wireless keyboards and mice are widespread, affordable, and mature technologies, and even lower-cost printers meant for people to use at home usually come with integrated wireless networking capabilities, allowing people in the house to remotely send document files to the devices to be printed. However, wireless keyboards and mice don’t seem about to displace their wired predecessors, nor would it even be fair to say that the older devices are obsolete. Wired keyboards and mice are cheaper (they are still included in the box whenever you buy a new PC), easier to use since users don’t have to change their batteries, and far less vulnerable to hacking. Also, though they’re “lower tech,” wired keyboards and mice impose no handicaps on users when they are part of a traditional desktop PC setup. Wireless keyboards and mice are only helpful when the user is trying to control a display that is relatively far from them, as would be the case if the person were using their living room television as a computer monitor, or if a group of office workers were viewing content on a large screen in a conference room and one of them needed to control it or make complex inputs.
No one has found this subject interesting enough to compile statistics on the percentages of computer users who own wired vs. wireless keyboards and mice, but my own observation is that the older devices are still dominant.
And though average computer printers in 2019 have WiFi capabilities, the small “complexity bar” to setting up and using the WiFi capability makes me suspect that most people are still using a computer that is physically plugged into their printer to control the latter. These data cables could disappear if we wanted them to, but I don’t think they have.
This means that Kurzweil’s prediction that cables for peripheral computer devices would have “largely disappeared” by the end of 2019 was wrong. For what it’s worth, the part that he got right vastly outweighs the part he got wrong: The rise of wireless internet access has revolutionized the world by giving ordinary people access to information, services and communication at all but the remotest places. Unshackling people from computer terminals and letting them access the internet from almost anywhere has been extremely empowering, and has spawned wholly new business models and types of games. On the other hand, the world’s failure to fully or even mostly dispense with wired computer peripheral devices has been almost inconsequential. I’m typing this on a wired keyboard and don’t see any way that a more advanced, wireless keyboard would help me.
“The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” [Or 20 petaflops]
WRONG
Graphics cards provide the most calculations per second at the lowest cost of any type of computer processor. The NVIDIA GeForce RTX 2080 Ti Graphics Card is one of the fastest computers available to ordinary people in 2019. In “overclocked” mode, where it is operating as fast as possible, it does 16,487 billion calculations per second (called “flops”).
A GeForce RTX 2080 retails for $1,100 and up, but let’s be a little generous to Kurzweil and assume we’re able to get them for $1,000.
$4,000 in 1999 dollars equals $6,164 in 2019 dollars. That means today, we can buy 6.164 GeForce RTX 2080 graphics cards for the amount of money Kurzweil specified.
6.164 cards x 16,487 billion calculations per second per card = 101,625 billion calculations per second for the whole rig.
This computational cost-performance level is two orders of magnitude worse than Kurzweil predicted.
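Here is the same arithmetic as a short Python sketch, with the card price, the overclocked throughput, and the inflation-adjusted budget treated as the rough assumptions they are:

# Reproducing the cost-performance comparison above.
# Assumptions: $1,000 per GeForce RTX 2080 Ti, 16,487 gigaflops per card
# (overclocked), and $4,000 in 1999 dollars = $6,164 in 2019 dollars.
budget_2019_dollars = 6_164
card_price = 1_000
card_flops = 16_487 * 1e9          # 16,487 billion calculations per second

cards = budget_2019_dollars / card_price            # ~6.164 cards
rig_flops = cards * card_flops                      # ~1.02e14 flops
brain_flops = 20e15                                 # Kurzweil's 20 petaflop brain estimate

print(f"Rig: {rig_flops:.3e} flops")
print(f"Shortfall: {brain_flops / rig_flops:.0f}x") # ~197x, i.e. about two orders of magnitude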
Additionally, according to Top500.org, a website that keeps a running list of the world’s best supercomputers and their performance levels, the “Leibniz Rechenzentrum SuperMUC-NG” is the ninth fastest computer in the world and the fastest in Germany, and straddles Kurzweil’s line since it runs at 19.4 petaflops or 26.8 petaflops depending on method of measurement (“Rmax” or “Rpeak”). A press release said: “The total cost of the project sums up to 96 Million Euro [about $105 million] for 6 years including electricity, maintenance and personnel.” That’s about four orders of magnitude worse than Kurzweil predicted.
I guess the good news is that at least we finally do have computers that have the same (or slightly more) processing power as a single, average, human brain, even if the computers cost tens of millions of dollars apiece.
“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”
WRONG
Kurzweil explains his calculations in the “Notes” section in the back of the book. He first multiplies the computation performed by one human brain by the estimated number of humans who will be alive in 2019 to get the “total computing capacity of the human species.” Confusingly, his math assumes one human brain does 10 petaflops, whereas in his preceding prediction he estimates it is 20 petaflops. He also assumed 10 billion people would be alive in 2019, but the figure fell mercifully short and was ONLY 7.7 billion by the end of the year.
Plugging in the correct figure, we get (7.7 x 10^9 humans) x 10^16 flops = 7.7 x 10^25 flops = the actual total computing capacity of all human brains in 2019.
Determining the total computing capacity of all computers in existence in 2019 can only really be guessed at. Kurzweil estimated that at least 1 billion machines would exist in 2019, and he was right. Gartner estimated that 261 million PCs (which includes desktop PCs, notebook computers [seems to include laptops], and “ultramobile premiums”) were sold globally in 2019. The figures for the preceding three years were 260 million (2018), 263 million (2017), and 270 million (2016). Assuming that a newly purchased personal computer survives for four years before being fatally damaged or thrown out, we can estimate that there were 1.05 billion of the machines in the world at the end of 2019.
However, Kurzweil also assumed that the average computer in 2019 would be as powerful as a human brain, and thus capable of 10 petaflops, but reality fell far short of the mark. As I revealed in my analysis of the preceding prediction, a 10 petaflop computer setup would cost either $606,543 in GeForce RTX 2080 graphics cards or $52.5 million for half a Leibniz Rechenzentrum SuperMUC-NG supercomputer. None of the people who own the 1.05 billion personal computers in the world spent anywhere near that much money, and their machines are far less powerful than human brains.
Let’s generously assume that all of the world’s 1.05 billion PCs are higher-end (for 2019) desktop computers that cost $900 – $1,200. Everyone’s machine has an Intel Core i7, 8th Generation processor, which offers speeds of a measly 361.3 gigaflops (3.613 x 10^11 flops). A 10 petaflop human brain is 27,678 times faster!
Plugging in the computer figures, we get (1.05 x 10^9 personal computers) x (3.613 x 10^11 flops) = 3.794 x 10^20 flops = the total computing capacity of all personal computers in 2019. That’s five orders of magnitude short. The reality of 2019 computing definitely fell far short of Kurzweil’s expectations.
What if we add the computing power of all the world’s smartphones to the picture? Approximately 3.2 billion people owned a smartphone in 2019. Let’s assume all the devices are higher-end (for 2019) iPhone XR’s, which everyone bought new for at least $500. The iPhone XR’s have A12 Bionic processors, and my research indicates they are capable of 700 – 1,000 gigaflop maximum speeds. Let’s take the higher-end estimate and do the math.
3.2 billion smartphones x 10^12 flops = 3.2 x 10^21 flops = the total computing capacity of all smartphones in 2019.
Adding things up, pretty much all of the world’s personal computing devices (desktops, laptops, smartphones, netbooks) only produce 3.5794 x 10^21 flops of computation. That’s still four orders of magnitude short of what Kurzweil predicted. Even if we assume that my calculations were too conservative, and we add in commercial computers (e.g. – servers, supercomputers), and find that the real amount of artificial computation is ten times higher than I thought, at 3.5794 x 10^22 flops, this would still only be equivalent to 1/2000th, or 0.05% of the total computing capacity of all human brains (7.7 x 10^25 flops). Thus, Kurzweil’s prediction that it would be 10% by 2019 was very wrong.
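To keep the whole estimate auditable, here is a compact Python version of the calculation in this section; every input is one of the rough assumptions stated above rather than a measured value:

# Putting the whole estimate above in one place.
human_brain_flops = 1e16            # 10 petaflops per brain (Kurzweil's lower figure)
population = 7.7e9
human_total = population * human_brain_flops         # 7.7e25 flops

pcs = 1.05e9
pc_flops = 3.613e11                 # Intel Core i7, 8th gen (~361 gigaflops)
smartphones = 3.2e9
phone_flops = 1e12                  # generous high-end estimate for an A12 Bionic

machine_total = pcs * pc_flops + smartphones * phone_flops   # ~3.58e21 flops

print(f"Human brains:  {human_total:.2e} flops")
print(f"Machines:      {machine_total:.2e} flops")
print(f"Machine share: {machine_total / (human_total + machine_total):.5%}")
# Even multiplied by ten to cover servers and supercomputers, the machine share
# stays around 0.05%, far below the 10% Kurzweil predicted.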
“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”
WRONG
For those who don’t know much about computers, the prediction says that rotating disk hard drives will be replaced with solid-state hard drives that don’t rotate. A thumbdrive has a solid-state hard drive, as do all smartphones and tablet computers.
I gauged the accuracy of this prediction through a highly sophisticated and ingenious method: I went to the nearest Wal-Mart and looked at the computers they had for sale. Two of the mid-priced desktop PCs had rotating disk hard drives, and they also had DVD disc drives, which was surprising, and which probably makes the “other electromechanical computing devices” part of the prediction false.
If the world’s biggest brick-and-mortar retailer is still selling brand new computers with rotating hard disk drives and rotating DVD disc drives, then it can’t be said that solid state memory storage has “fully replaced” the older technology.
“Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.”
MOSTLY WRONG
Many solid-state computer memory chips, such as common thumbdrives and MicroSD cards, have 3D circuitry, and it is accurate to call them “prevalent.” However, 3D circuitry has not found routine use in computer processors thanks to unsolved problems with high manufacturing costs, unacceptably high defect rates, and overheating.
In late 2018, Intel claimed it had overcome those problems thanks to a proprietary chip manufacturing process, and that it would start selling the resulting “Lakefield” line of processors soon. These processors have four vertically stacked layers, so they meet the requirement for being “3D.” Intel hasn’t sold any yet, and it remains to be seen whether they will be commercially successful.
Silicon is still the dominant computer chip substrate, and carbon-based nanotubes haven’t been incorporated into chips because Intel and AMD couldn’t figure out how to cheaply and reliably fashion them into chip features. Nanotube computers are still experimental devices confined to labs, and they are grossly inferior to traditional silicon-based computers when it comes to doing useful tasks. Nanotube computer chips that are also 3D will not be practical anytime soon.
It’s clear that, in 1999, Kurzweil simply overestimated how much computer hardware would improve over the next 20 years.
“The majority of ‘computes’ of computers are now devoted to massively parallel neural nets and genetic algorithms.”
UNCLEAR
Assessing this prediction is hard because it’s unclear what the term “computes” means. It is probably shorthand for “compute cycles,” which is a term that describes the sequence of steps to fetch a CPU instruction, decode it, access any operands, perform the operation, and write back any result. It is a process that is more complex than doing a calculation, but that is still very basic. (I imagine that computer scientists are the only people who know, offhand, what “compute cycle” means.)
Assuming “computes” means “compute cycles,” I have no idea how to quantify the number of compute cycles that happened, worldwide, in 2019. It’s an even bigger mystery to me how to determine which of those compute cycles were “devoted to massively parallel neural nets and genetic algorithms.” Kurzweil doesn’t describe a methodology that I can copy.
Also, what counts as a “massively parallel neural net”? How many processor cores does a neural net need to have to be “massively parallel”? What are some examples of non-massively parallel neural nets? Again, an ambiguity with the wording of the prediction frustrates an analysis. I’d love to see Kurzweil assess the accuracy of this prediction himself and to explain his answer.
“Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets.”
PARTLY RIGHT
The use of the ambiguous adjective “significant” gives Kurzweil an escape hatch for the first part of this prediction. Since 1999, brain scanning technology has improved, and the body of scientific literature about how brain activity correlates with brain function has grown. Additionally, much has been learned by studying the brain at a macro-level rather than at a cellular level. For example, in a 2019 experiment, scientists were able to accurately reconstruct the words a person was speaking by analyzing data from the person’s brain implant, which was positioned over their auditory cortex. Earlier experiments showed that brain-computer-interface “hats” could do the same, albeit with less accuracy. It’s fair to say that these and other brain-scanning studies represent “significant progress” in understanding how parts of the human brain work, and that the machines were gathering data at the level of “brain regions” rather than at the finer level of individual brain cells.
Yet in spite of many tantalizing experimental results like those, an understanding of how the brain produces cognition has remained frustratingly elusive, and we have not extracted any new algorithms for intelligence from the human brain in the last 20 years that we’ve been able to incorporate into software to make machines smarter. The recent advances in deep learning and neural network computers–exemplified by machines like AlphaZero–use algorithms invented in the 1980s or earlier, just running on much faster computer hardware (specifically, on graphics processing units originally developed for video games).
If anything, since 1999, researchers who studied the human brain to gain insights that would let them build artificial intelligences have come to realize how much more complicated the brain was than they first suspected, and how much harder of a problem it would be to solve. We might have to accurately model the brain down to the intracellular level (e.g. – not just neurons simulated, but their surface receptors and ion channels simulated) to finally grasp how it works and produces intelligent thought. Considering that the best we have done up to this point is mapping the connections of a fruit fly brain and that a human brain is 600,000 times bigger, we won’t have a detailed human brain simulation for many decades.
“It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.”
RIGHT
This prediction is right, but it’s not noteworthy since it merely re-states things that were widely accepted and understood to be true when the book was published in 1999. It’s akin to predicting that “A thing we think is true today will still be considered true in 20 years.”
The prediction’s first statement is an odd one to make since it implies that there was ever serious debate among brain scientists and geneticists over whether the human genome encoded every detail of how the human brain is wired. As Kurzweil points out earlier in the book, the human genome is only about 3 billion base-pairs long, and the genetic information it contains could be as low as 23 megabytes, but a developed human brain has 100 billion neurons and 10^15 connections (synapses) between those neurons. Even if Kurzweil is underestimating the amount of information the human genome stores by several orders of magnitude, it clearly isn’t big enough to contain instructions for every aspect of brain wiring, and therefore, it must merely lay down more general rules for brain development.
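A quick back-of-envelope calculation (a sketch, assuming ~2 bits per base pair and that naming a synapse’s target neuron takes at least log2 of the neuron count in bits) shows just how lopsided the comparison is:

import math

# Rough information-theoretic version of the argument above.
genome_bits = 3e9 * 2                         # ~6e9 bits (~750 MB, before any compression)
synapses = 1e15
bits_per_synapse = math.log2(1e11)            # ~36.5 bits to address one of 1e11 neurons

wiring_bits = synapses * bits_per_synapse     # ~3.7e16 bits
print(f"Genome (upper bound): {genome_bits:.1e} bits")
print(f"Explicit wiring spec: {wiring_bits:.1e} bits")
print(f"Shortfall factor:     {wiring_bits / genome_bits:.1e}")
# The genome is millions of times too small to specify every connection,
# so it can only encode general developmental rules.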
I also don’t understand why Kurzweil wrote the second part of the statement. It’s commonly recognized that part of childhood brain development involves the rapid paring of interneuronal connections that, based on interactions with the child’s environment, prove less useful, and the strengthening of connections that prove more useful. It would be apt to describe this as “a rapid evolutionary process” since the child’s brain is rewiring to adapt the child to its surroundings. This mechanism of strengthening brain connection pathways that are rewarded or frequently used, and weakening pathways that result in some kind of misfortune or that are seldom used, continues until the end of a person’s life (though it gets less effective as they age). This paradigm was “recognized” in 1999 and has never been challenged.
Machine-based neural nets are, in a very general way, structured like the human brain, they also rewire themselves in response to stimuli, and some of them use genetic algorithms to guide the rewiring process (see this article for more info: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414). However, all of this was also true in 1999.
“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”
WRONG
Devices that harness the principle of quantum entanglement to create images of distant objects do exist and are better than devices from 1999, but they aren’t good enough to exit the R&D labs. They also have not been shrunk to pinhead sizes. Kurzweil overestimated how fast this technology would develop.
Virtually all cameras still have lenses, and still operate by the old method of focusing incoming light onto a physical medium that captures the patterns and colors of that light to form a stored image. The physical medium used to be film, but now it is a digital image sensor.
Digital cameras were expensive, clunky, and could only take low-quality images in 1999, so most people didn’t think they were worth buying. Today, all of those deficiencies have been corrected, and a typical digital camera sensor plus its integrated lens is the size of a small coin. As a result, the devices are very widespread: 3.2 billion people owned a smartphone in 2019, and all of them probably had integral digital cameras. Laptops and tablet computers also typically have integral cameras. Small standalone devices, like pocket cameras, webcams, car dashcams, and home security doorbell cameras, are also cheap and very common. And as any perusal of YouTube.com will attest, people are using their cameras to record events of all kinds, all the time, and are sharing them with the world.
This prediction stands out as one that was wrong in specifics, but kind of right in spirit. Yes, since 1999, cameras have gotten much smaller, cheaper, and higher-quality, and as a result, they are “everywhere” in the figurative sense, with major consequences (good and bad) for the world. Unfortunately, Kurzweil needlessly stuck his neck out by saying that the cameras would use an exotic new technology, and that they would be “pinhead-sized” (he hurt himself the same way by saying that the augmented reality glasses of 2019 would specifically use retinal projection). For those reasons, his prediction must be judged as “wrong.”
“Autonomous nanoengineered machines can control their own mobility and include significant computational engines. These microscopic machines are beginning to be applied to commercial applications, particularly in manufacturing and process control, but are not yet in the mainstream.”
WRONG
While there has been significant progress in nano- and micromachine technology since 1999 (the 2016 Nobel Prize in Chemistry was awarded to scientists who had invented nanomachines), the devices have not gotten nearly as advanced as Kurzweil predicted. Some microscopic machines can move around, but the movement is guided externally rather than autonomously. For example, turtle-like micromachines invented by Dr. Marc Miskin in 2019 can move by twirling their tiny “flippers,” but the motion is powered by shining laser beams on them to expand and contract the metal in the flippers. The micromachines lack their own power packs, lack computers that tell the flippers to move, and therefore aren’t autonomous.
In 2003, UCLA scientists invented “nano-elevators,” which were also capable of movement and still stand as some of the most sophisticated types of nanomachines. However, they also lacked onboard computers and power packs, and were entirely dependent on external control (the addition of acidic or basic liquids to make their molecules change shape, resulting in motion). The nano-elevators were not autonomous.
Similarly, a “nano-car” was built in 2005, and it can drive around a flat plate made of gold. However, the movement is uncontrolled and only happens when an external stimulus–an input of high heat into the system–is applied. The nano-car isn’t autonomous or capable of doing useful work. This and all the other microscopic machines created up to 2019 are just “proof of concept” machines that demonstrate mechanical principles that will someday be incorporated into much more advanced machines.
Significant progress has been made since 1999 building working “molecular motors,” which are an important class of nanomachine, and building other nanomachine subcomponents. However, this work is still in the R&D phase, and we are many years (probably decades) from being able to put it all together to make a microscopic machine that can move around under its own power and will, and perform other operations. The kinds of microscopic machines Kurzweil envisioned don’t exist in 2019, and by extension are not being used for any “commercial applications.”
“Hand-held displays are extremely thin, very high resolution, and weigh only ounces.”
RIGHT
The tablet computers and smartphones of 2019 meet these criteria. For example, the Samsung Galaxy Tab S5 is only 0.22″ thick, has a resolution that is high enough for the human eye to be unable to discern individual pixels at normal viewing distances (3840 x 2160 pixels), and weighs 14 ounces (since 1 pound is 16 ounces, the Tab S5’s weight falls below the higher unit of measurement, and it should be expressed in ounces). Tablets like this are of course meant to be held in the hands during use.
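As a rough sanity check of the “individual pixels are indiscernible” claim, here is a small Python sketch; the ~10.5-inch diagonal and ~15-inch viewing distance are my own assumptions rather than figures from the text, and the 60-pixels-per-degree acuity limit is the usual rule of thumb:

import math

# Rough check of the "individual pixels are indiscernible" claim above.
width_px, height_px = 3840, 2160
diagonal_in = 10.5            # assumed panel diagonal
viewing_distance_in = 15.0    # assumed handheld viewing distance

ppi = math.hypot(width_px, height_px) / diagonal_in
# Pixel density needed so one pixel subtends 1/60 of a degree at this distance:
retina_threshold_ppi = 1.0 / (viewing_distance_in * math.tan(math.radians(1 / 60)))

print(f"Panel density:      {ppi:.0f} PPI")
print(f"'Retina' threshold: {retina_threshold_ppi:.0f} PPI at {viewing_distance_in:.0f} in")
# The panel density comfortably exceeds the threshold, consistent with the claim.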
The smartphones of 2019 also meet Kurzweil’s criteria.
“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.”
MOSTLY WRONG
A careful reading of this prediction makes it clear that Kurzweil believed AR glasses would be commonest way people would read text documents by late 2019. The second most common method would be to read the documents off of smartphones and tablet computers. A distant last place would be to read old-fashioned books with paper pages. (Presumably, reading text off of a laptop or desktop PC monitor was somewhere between the last two.)
The first part of the prediction is badly wrong. At the end of 2019, there were fewer than 1 million sets of AR glasses in use around the world. Even if all of their owners were bibliophiles who spent all their waking hours using their glasses to read documents that were projected in front of them, it would be mathematically impossible for that to constitute the #1 means by which the human race, in aggregate, read written words.
Certainly, it is now much more common for people to read documents on handheld displays like smartphones and tablets than at any time in the past, and paper’s dominance of the written medium is declining. Additionally, there are surely millions of Americans who, like me, do the vast majority of their reading (whether for leisure or work) off of electronic devices and computer screens. However, old-fashioned print books, newspapers, magazines, and packets of workplace documents are far from extinct, and it is inaccurate to claim they “are rarely used or accessed,” both in the relative and absolute senses of the statement. As the bar chart above shows, sales of print books were actually slightly higher in 2019 than they were in 2004, which was near the time when The Age of Spiritual Machines was published.
Finally, sales of “graphic paper”–which is an industry term for paper used in newsprint, magazines, office printer paper, and other common applications–were still high in 2019, even if they were trending down. If 110 million metric tons of graphic paper were sold in 2019, then it can’t be said that “Paper books and documents are rarely used or accessed.” Anecdotally, I will say that, though my office primarily uses all-digital documents, it is still common to use paper documents, and in fact it is sometimes preferable to do so.
“Most twentieth-century paper documents of interest have been scanned and are available through the wireless network.”
RIGHT
The wording again makes it impossible to gauge the prediction’s accuracy. What counts as a “paper document”? For sure, we can say it includes bestselling books, newspapers of record, and leading science journals, but what about books that only sold a few thousand copies, small-town newspapers, and third-tier science journals? Are we also counting the mountains of government reports produced and published worldwide in the last century, mostly by obscure agencies and about narrow, bland topics? Equally defensible answers could result in document numbers that are orders of magnitude different.
Also, the term “of interest” provides Kurzweil with an escape hatch because its meaning is subjective. If it were the case that electronic scans of 99% of the books published in the twentieth century were NOT available on the internet in 2019, he could just say “Well, that’s because those books aren’t of interest to modern people” and he could then claim he was right.
It would have been much better if the prediction included a specific metric, like: “By the end of 2019, electronic versions of at least 1 million full-length books written in the twentieth century will be available through the wireless network.” Alas, it doesn’t, and Kurzweil gets this one right on a technicality.
For what it’s worth, I think the prediction was also right in spirit. Millions of books are now available to read online, and that number includes most of the 20th century books that people in 2019 consider important or interesting. One of the biggest repositories of e-books, the “Internet Archive,” has 3.8 million scanned books, and they’re free to view. (Google actually scanned 25 million books with the intent to create something like its own virtual library, but lawsuits from book publishers have put the project into abeyance.)
The New York Times, America’s newspaper of record, has made scans of every one of its issues since its founding in 1851 available online, as have other major newspapers such as the Washington Post. The cursory research I’ve done suggests that all or almost all issues of the biggest American newspapers are now available online, either through company websites or third party sites like newspapers.com.
The U.S. National Archives has scanned over 92 million pages of government documents, and made them available online. Primacy was given to scanning documents that were most requested by researchers and members of the public, so it could easily be the case that most twentieth-century U.S. government paper documents of interest have been scanned. Additionally, in two years the Archives will start requiring all U.S. agencies to submit ONLY digital records, eliminating the very cumbersome middle step of scanning paper, and thenceforth ensuring that government records become available to and easily searchable by the public right away.
The New England Journal of Medicine, the journal Science, and the journal Nature all offer scans of past issues dating back to their foundings in the 1800s. I lack the time to check whether this is also true for other prestigious academic journals, but I strongly suspect it is. All of the seminal papers documenting the significant scientific discoveries of the 20th century are now available online.
Without a doubt, the internet and a lot of diligent people scanning old books and papers have improved the public’s access to written documents and information by orders of magnitude compared to 1998. It truly is a different world.
“Most learning is accomplished using intelligent software-based simulated teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The teachers are viewed more as mentors and counselors than as sources of learning and knowledge.”
WRONG*
The technology behind and popularity of online learning and AI teachers didn’t advance as fast as Kurzweil predicted. At the end of 2019, traditional in-person instruction was far more common than and was widely considered to be superior to online learning, though the latter had niche advantages.
However, shortly after 2019 ended, the COVID-19 pandemic forced most of the world into quarantine in an effort to slow the virus’ spread. Schools, workplaces, and most other places where people usually gathered were shut down, and people the world over were forced to do everyday activities remotely. American schools and universities switched to online classrooms in what might be looked at as the greatest social experiment of the decade. For better or worse, most human teachers were no longer in the local vicinity of their students.
Thus, part of Kurzweil’s prediction came true, a few months late and as an unwelcome emergency measure rather than as a voluntary embrace of a new educational paradigm. Unfortunately, student reactions to online learning have been mostly negative. A 2020 survey found that most college students believed it was harder to absorb knowledge and to learn new skills through online classrooms than it was through in-person instruction. Almost all of them unsurprisingly said that traditional classroom environments were more useful for developing social skills. The survey data I found on the attitudes of high school students showed that most of them considered distance learning to be of inferior quality. Public school teachers and administrators across the country reported higher rates of student absenteeism when schools switched to 100% online instruction, and their support for it measurably dropped as time passed.
The COVID-19 lockdowns have made us confront hard truths about virtual learning. It hasn’t been the unalloyed good that Kurzweil seems to have expected, though technological improvements that make the experience more immersive (ex – faster internet to reduce lag, virtual reality headsets) will surely solve some of the problems that have come to light.
“Students continue to gather together to exchange ideas and to socialize, although even this gathering is often physically and geographically remote.”
RIGHT
As I described at length, traditional in-person classroom instruction remained the dominant educational paradigm in late 2019, which of course means that students routinely gathered together for learning and socializing. The second part of the prediction is also right, since social media, cheaper and better computing devices and internet service, and videophone apps have made it much more common for students of all ages to study, work, and socialize together virtually than they did in 1998.
“All students use computation. Computation in general is everywhere, so a student’s not having a computer is rarely an issue.”
MOSTLY RIGHT
First, Kurzweil’s use of “all” was clearly figurative and not literal. If pressed on this back in 1998, surely he would have conceded that even in 2019, students living in Amish communities, living under strict parents who were paranoid technophobes, or living in the poorest slums of the poorest or most war-wrecked country would not have access to computing devices that had any relevance to their schooling.
Second, note the use of “computation” and “computer,” which are very broad in meaning. As I wrote earlier, “A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is…something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer.”
With these two caveats in mind, it’s clear that “all students use computation” by default since all people except those in the most deprived environments routinely interact with computing devices. It is also true that “computation in general is everywhere,” and the prediction merely restates this earlier prediction: “Computers are now largely invisible. They are embedded everywhere…” In the most literal sense, most of the prediction is correct.
However, a judgement is harder to make if we consider whether the spirit of the prediction has been fulfilled. In context, the prediction’s use of “computation” and “computer” surely refers to devices that let students efficiently study materials, watch instructional videos, and do complex school assignments like writing essays and completing math equations. These devices would have also required internet access to perform some of those key functions. At least in the U.S., virtually all schools in late 2019 had computer terminals with speedy internet access that students could use for free. A school without either of those would have been considered very unusual. Likewise, almost all of the country’s public libraries have public computer terminals and internet service (and, of course, books), which people can use for their studies and coursework if they don’t have computers or internet in their homes.
At the same time, 17% of students in the U.S. still don’t have computers in their homes and 18% have no internet access or very slow service (there’s probably large overlap between people in those two groups). Mostly this is because they live in remote areas where it isn’t profitable for telecom companies to install high-speed internet lines, or because they belong to extremely poor or disorganized households. This lack of access to computers and internet service results in measurably worse academic performance, a phenomenon called the “homework gap” or the “digital gap.” With this in mind, it’s questionable whether the prediction’s last claim, that “a student’s not having a computer is rarely an issue” has come true.
“Most adult human workers spend the majority of their time acquiring new skills and knowledge.”
WRONG
This is so obviously wrong that I don’t need to present any data or studies to support my judgement. With a tiny number of exceptions, employed adults spend most of their time at work using the same skills over and over to do the same set of tasks. Yes, today’s jobs are more knowledge-based and technology-based than ever before, and a greater share of jobs require formal degrees and training certificates than ever, but few professions are so complex or fast-changing that workers need to spend most of their time learning new skills and knowledge to keep up.
In fact, since The Age of Spiritual Machines was published, a backlash against the high costs and necessity of postsecondary education–at least as it is in America–has arisen. Sentiment is growing that the four-year college degree model is wasteful, obsolete for most purposes, and leaves young adults saddled with debts that take years to repay. Sadly, I doubt these critics will succeed in bringing about serious reforms to the system.
If and when we reach the point where a postsecondary degree is needed just to get a respectable entry-level job, and then merely keeping that job or moving up to the next rung on the career ladder requires workers to spend more than half their time learning new skills and knowledge–whether due to competition from machines that keep getting better and taking over jobs or due to the frequent introduction of new technologies that human workers must learn to use–then I predict a large share of humans will become chronically demoralized and will drop out of the workforce. This is a phenomenon I call “job automation escape velocity,” which I intend to discuss at length in a future blog post.
“Blind persons routinely use eyeglass-mounted reading-navigation systems, which incorporate the new, digitally controlled, high-resolution optical sensors. These systems can read text in the real world, although since most print is now electronic, print-to-speech reading is less of a requirement. The navigation function of these systems, which emerged about ten years ago, is now perfected. These automated reading-navigation assistants communicate to blind users through both speech and tactile indicators. These systems are also widely used by sighted persons since they provide a high-resolution interpretation of the visual world.”
PARTLY RIGHT
As stated previously, AR glasses have not yet been successful on the commercial market and are used by almost no one, blind or sighted. However, there are smartphone apps meant for blind people that use the phone’s camera to scan what is in front of the person, and they have the range of functions Kurzweil described. For example, the “Seeing AI” app can recognize text and read it out loud to the user, and can recognize common objects and familiar people and verbally describe or name them.
Additionally, there are other smartphone apps, such as “BlindSquare,” which use GPS and detailed verbal instructions to guide blind people to destinations. It also describes nearby businesses and points of interest, and can warn users of nearby curbs and stairs.
Apps that are made specifically for blind people are not in wide usage among sighted people.
“Retinal and vision neural implants have emerged but have limitations and are used by only a small percentage of blind persons.”
MOSTLY RIGHT
Retinal implants exist and can restore limited vision to people with certain types of blindness. However, they provide only a very coarse level of sight, are expensive, and require the use of body-worn accessories to collect, process, and transmit visual data to the eye implant itself. The “Argus II” device is the only retinal implant system available in the U.S., and the FDA approved it in 2013. As of this writing, the manufacturer’s website claimed that only 350 blind people worldwide used the systems, which indeed counts as “only a small percentage of blind persons.”
The meaning of “vision neural implants” is unclear, but could only refer to devices that connect directly to a blind person’s optic nerve or brain vision cortex. While some human medical trials are underway, none of the implants have been approved for general use, nor does that look poised to change.
“Deaf persons routinely read what other people are saying through the deaf persons’ lens displays.”
MOSTLY WRONG
“Lens displays” is clearly referring to those inside augmented reality glasses and AR contact lenses, so the prediction says that a person wearing such eyewear would be able to see speech subtitles across his or her field of vision. While there is at least one model of AR glasses–the Vuzix Blade–that has this capability, almost no one uses them because, as I explored earlier in this review, AR glasses failed on the commercial market. By extension, this means the prediction also failed to come true since it specified that deaf people would “routinely” wear AR glasses by 2019.
However, in the prediction’s defense, deaf people commonly use real-time speech-to-text apps on their smartphones. While not as convenient as having captions displayed across one’s field of view, it still makes communication with non-deaf people who don’t know sign language much easier. Google, Apple, and many other tech companies have fielded high-quality apps of this nature, some of which are free to download. Deaf people can also type words into their smartphones and show them to people who can’t understand sign language, which is easier than the old-fashioned method of writing things down on notepad pages and slips of paper.
Additionally, video chat / video phone technology is widespread and has been a boon to deaf people. By allowing callers to see each other, video calls let deaf people remotely communicate with each other through sign language, facial expressions and body movements, letting them experience levels of nuanced dialog that older text-based messaging systems couldn’t convey. Video chat apps are free or low-cost, and can deliver high-quality streaming video, and the apps can be used even on small devices like smartphones thanks to their forward-facing cameras.
In conclusion, while the specifics of the prediction were wrong, the general sentiment that new technologies, specifically portable devices, would greatly benefit deaf people was right. Smartphones, high-speed internet, and cheap webcams have made deaf people far more empowered in 2019 than they were in 1998.
“There are systems that provide visual and tactile interpretations of other auditory experiences such as music, but there is debate regarding the extent to which these systems provide an experience comparable to that of a hearing person.”
RIGHT
There is an iPhone app called “BW Dance” meant for the deaf that converts songs into flashing lights and vibrations that are said to approximate the notes of the music. However, there is little information about the app and it isn’t popular, which makes me think deaf people have not found it worth buying or talking about. Though apparently unsuccessful, the existence of the BW Dance app meets all the prediction’s criteria. The prediction says nothing about whether the “systems” will be popular among deaf people by 2019–it just says the systems will exist.
That’s probably an unsatisfying answer, so let me mention some additional research findings. A company called “Not Impossible Labs” sells body suits designed for deaf people that convert songs into complex patterns of vibrations transmitted into the wearer’s body through 24 different touch points. The suits are well-reviewed, and it’s easy to believe that they’d provide a much richer sensory experience than a buzzing smartphone with the BW Dance app would. However, the suits lack any sort of displays, meaning they don’t meet the criterion of providing users a visual interpretation of songs.
There are many “music visualization” apps that create patterns of shapes, colors, and lines to convey the musical structures of songs, and some deaf people report they are useful in that role. It would probably be easy to combine a vibrating body suit with AR glasses to provide wearers with immersive “visual and tactile interpretations” of music. The technology exists, but the commercial demand does not.
“Cochlear and other implants for improving hearing are very effective and are widely used.”
RIGHT
Since receiving FDA approval in 1984, cochlear implants have significantly improved in quality and have become much more common among deaf people. While the level of benefit widely varies from one user to another, the average user ends up hearing well enough to carry on a phone conversation in a quiet room. That means cochlear implants are “very effective” for most people who use them, since the alternative is usually having no sense of hearing at all. Cochlear implants are in fact so effective that they’ve spurred fears among deaf people that they will eradicate the Deaf culture and end the use of sign language, leading some deaf people to reject the devices even though their senses would benefit.
Other types of implants for improving hearing also exist, including middle ear implants, bone-anchored hearing aids, and auditory brainstem implants. While some of these alternatives are more optimal for people with certain hearing impairments, they haven’t had the same impact on the Deaf community as cochlear implants.
“Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.”
WRONG
Paraplegics and quadriplegics use the same wheelchairs they did in 1998, and they can only traverse stairs that have electronic lift systems. As noted in my Prometheus review, powered exoskeletons exist today, but almost no one uses them, probably due to very high costs and practical problems. Some rehabilitation clinics for people with spinal cord and leg injuries use therapeutic techniques in which the disabled person’s legs and spine are connected to electrodes that activate in sequences that assist them to walk, but these nerve and muscle stimulation devices aren’t used outside of those controlled settings. To my knowledge, no one has built the sort of prosthesis that Kurzweil envisioned, which was a powered exoskeleton that also had electrodes connected to the wearer’s body to stimulate leg muscle movements.
“Generally, disabilities such as blindness, deafness, and paraplegia are not noticeable and are not regarded as significant.”
WRONG (sadly)
As noted, technology has not improved the lives of disabled people as much as Kurzweil predicted it would between 1998 and 2019. Blind people still need to use walking canes, most deaf people don’t have hearing implants of any sort (and if they do, their hearing is still much worse than average), and paraplegics still use wheelchairs. Their disabilities are often noticeable at a glance, and always become apparent after a few moments of face-to-face interaction.
Blindness, deafness, and paraplegia still have many significant negative impacts on people afflicted with them. As just one example, employment rates and average incomes for working-age people with those infirmities are all lower than they are for people without. In 2019, the U.S. Social Security program still viewed those conditions as disabilities and paid welfare benefits to people with them.
“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”
PARTLY RIGHT
While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”
“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”
WRONG
As stated in previous installments of this analysis, the computerized glasses, goggles and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” that would have allowed users to see simulated 3D images of people and other things in their proximities. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally do 2D video calls using cameras and traditional screen displays.
Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how it’s possible my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by people on the other end of a call, and the seemingly small extra amount of effort required to do a video call compared to a mere voice call is actually a larger barrier to the former than futurists 20 years ago probably thought it would be.
“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”
MOSTLY WRONG
As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.
Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.
A human being’s field of vision is a front-facing, flared-out “cone” with a 210-degree horizontal arc and a 150-degree vertical arc. This means that if you put a concave display in front of a person’s face that was big enough to fill those degrees of horizontal and vertical width, it would fill the person’s entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.
If this concave screen’s pixels were squares measuring one degree of length to a side, then the screen would look like a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than how he normally sees. However, lab tests show that if we shrink the pixels to 1/60th that size, so the concave screen is a grid of 12,600 x 9,000 pixels, then the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality is said to be “lifelike.”
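For concreteness, the arithmetic behind those figures can be reproduced in a few lines (a quick sketch using the field-of-view and acuity numbers from the paragraph above; it also shows that such a “lifelike” screen would need well over 100 million pixels):

```python
# Field-of-view and visual-acuity figures from the paragraph above
H_FOV_DEG = 210          # horizontal arc of human vision, in degrees
V_FOV_DEG = 150          # vertical arc, in degrees
PIXELS_PER_DEGREE = 60   # roughly the finest detail 20/20 vision can resolve

# 1 pixel per degree: the "very blocky" case
print(H_FOV_DEG, "x", V_FOV_DEG)                       # 210 x 150

# 60 pixels per degree: the "lifelike" case
width = H_FOV_DEG * PIXELS_PER_DEGREE                  # 12,600
height = V_FOV_DEG * PIXELS_PER_DEGREE                 # 9,000
print(f"{width} x {height} = {width * height / 1e6:.0f} megapixels")
```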
No commercially available VR goggles have anything close to lifelike displays, either in terms of field of view or 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they have 60-pixels-per-degree resolution, but only in the central portions of their display screens, where the user’s eyes are usually looking. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves with a very high-quality 4K camera, and you used Varjo VR-1 goggles to view the live footage while keeping your eyes focused on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.
Problematically, a pair of Varjo VR-1s costs $6,000. Also, in 2019, it is very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you would be able to see most of the other person’s face.
VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.
“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”
MOSTLY RIGHT
In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.
The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.
“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”
MOSTLY RIGHT
Reading text is easily and commonly done off of smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, and not three as the prediction specified.
As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).
“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”
WRONG
Aside from a few, expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is video game control pads that can vibrate to crudely approximate the feeling of shooting a gun or being next to an explosion.
“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”
WRONG
Though video phone technology has made remote doctor appointments more common, technology has not yet made it possible for doctors to remotely “touch” patients for physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices (called “teledildonics” for those specifically designed for sexual uses) that allow people to remotely send and receive physical force to one another exist, but they are too expensive and technically limited to find use.
“Rapid economic expansion and prosperity has continued.”
PARTLY RIGHT
Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?
The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.
Above is a chart I made using data from the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.
And this final chart shows global GDP growth over the same period.
Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it–even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. The fact that something as bad as the Great Recession happened at all means the prediction is wrong in an important sense, as it implied that economic growth would be continuous, but it wasn’t since it went negative for most of 2009, in the worst downturn since the 1930s.
At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.
While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods do almost look like mirror images of each other. (Additionally, if we assume that adjustment happened because of the actions of wiser financial regulators who kept the lending bubbles and fake investments from coming into existence in the first place, then we can also assume that stopped the Great Recession from happening, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, then we see that economic growth for the time period from 1998 – December 2008 was roughly the same as the growth from January 2009 – December 2019.
“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”
WRONG
“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.
As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involving to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.
“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”
MOSTLY WRONG
The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
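To illustrate how little “intelligence” such automation actually requires, here is a toy sketch of a preset stop rule of the kind described above; the share count and price levels are invented for the example:

```python
def check_stops(price, position, stop_loss, take_profit):
    """Return a ('SELL', quantity) order, or None, for one price update.

    A toy version of the preset 'stops' described above: the program watches
    the share price and fires an order the moment a threshold is crossed,
    with no human in the loop.
    """
    if position > 0 and price <= stop_loss:
        return ("SELL", position)      # cut losses
    if position > 0 and price >= take_profit:
        return ("SELL", position)      # lock in gains
    return None

# Hypothetical example: holding 100 shares, sell if the price falls to $45
# or climbs to $60.
print(check_stops(44.80, position=100, stop_loss=45.00, take_profit=60.00))
```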
“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”
PARTLY RIGHT
Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.
No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.
“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”
WRONG*
The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.
Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but will become mandatory in the U.S. starting in 2022.
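The decision rule at the heart of an automated emergency braking system can be sketched in a few lines (a toy illustration; real systems fuse camera and radar data and use more sophisticated criteria than this assumed 1.5-second threshold):

```python
def should_auto_brake(gap_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Toy automated-emergency-braking check.

    gap_m: distance to the obstacle ahead, in meters
    closing_speed_mps: how fast that gap is shrinking, in meters per second
    Brakes are applied automatically if the projected time-to-collision
    drops below the threshold and the driver has not reacted.
    """
    if closing_speed_mps <= 0:            # not closing in on anything
        return False
    time_to_collision = gap_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

# 20 m behind a stopped car while closing at 15 m/s (~54 km/h): 1.33 s to impact
print(should_auto_brake(20, 15))   # True -> apply the brakes
```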
*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring the human drivers to keep their hands on the steering wheels and their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.
In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.
“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”
UNCLEAR (but probably WRONG)
The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?
What is a “personal flying vehicle”? A flying car, which is capable of flight through the air and horizontal movement over roads, or a vehicle that is capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?
But even if we had answers to those questions, it wouldn’t matter much since “have been demonstrated” is an escape hatch allowing Kurzweil to claim at least some measure of correctness on this prediction since it allows the prediction to be true if just two prototypes of personal flying vehicles have been built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that would have made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.
At least the prediction contains one, well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. They are fixed in one position, and in that configuration are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.
“There are very few transportation accidents.”
WRONG
Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.
In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.
“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”
WRONG
As I noted earlier in this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, and it was coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.
“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”
MOSTLY RIGHT
Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.
While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.
If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly, conspiracy thinking and closedmindedness), spending decisions, and mental health.
Human transactions and decisions still require a “human agent of responsibility”: Autonomous cars aren’t allowed to drive unless a human is in the driver’s seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Probably most decisions that humans make are done without consulting a “machine-based intelligence.” Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with benefit of little or no research. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.
“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”
MOSTLY RIGHT
Gunfire detection systems, which are comprised of networks of microphones emplaced across an area and which use machine intelligence to recognize the sounds of gunshots and to triangulate their origins, were emplaced in over 100 cities at the end of 2019. The dominant company in this niche industry, “ShotSpotter,” used human analysts to review its systems’ results before forwarding alerts to local police departments, so the systems were not truly automated, but nonetheless they made heavy use of machine intelligence.
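The locating step those systems perform is, at its core, an exercise in comparing arrival times across microphones. The sketch below illustrates the general idea with a brute-force search; the microphone positions and shot location are invented for the example, and real systems like ShotSpotter use far more refined methods:

```python
import itertools

SPEED_OF_SOUND = 343.0  # meters per second

# Hypothetical microphone positions, in meters (invented for the example).
mics = [(0, 0), (400, 0), (0, 400), (400, 400)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Simulate a shot at a known spot so the example is self-consistent.
true_shot = (150, 260)
times = [dist(true_shot, m) / SPEED_OF_SOUND for m in mics]

def misfit(x, y):
    """How badly a candidate location (x, y) explains the observed arrival times."""
    ds = [dist((x, y), m) for m in mics]
    err = 0.0
    # Only *differences* between arrival times matter, because the exact
    # moment the gun was fired is unknown.
    for (d1, t1), (d2, t2) in itertools.combinations(zip(ds, times), 2):
        err += ((d1 - d2) - SPEED_OF_SOUND * (t1 - t2)) ** 2
    return err

# Brute-force search of a grid at 10-meter resolution.
best = min(((x, y) for x in range(-200, 601, 10) for y in range(-200, 601, 10)),
           key=lambda p: misfit(*p))
print("Estimated shot location:", best)   # close to (150, 260)
```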
Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible.
In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.
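The “compare the people they see to mugshots” step generally comes down to comparing face “embeddings,” numerical vectors produced by a neural network, and raising an alert when two vectors are similar enough. A minimal sketch of that matching step, with made-up four-dimensional vectors standing in for real embeddings (which typically have 128 or more dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up "embeddings" for a hypothetical watchlist and a face seen on camera.
watchlist = {
    "suspect_A": np.array([0.9, 0.1, 0.3, 0.4]),
    "suspect_B": np.array([0.2, 0.8, 0.5, 0.1]),
}
camera_face = np.array([0.88, 0.12, 0.28, 0.41])

THRESHOLD = 0.95
for name, reference in watchlist.items():
    score = cosine_similarity(camera_face, reference)
    if score > THRESHOLD:
        print(f"Alert: possible match with {name} (similarity {score:.3f})")
```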
At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.
“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”
RIGHT
In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.
Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.
During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.
With these events in mind, it’s quite true that concerns over digital privacy and confidentiality of personal data have become “major political and social issues,” and that there’s growing displeasure at the fact that each individual’s “practically every move [is] stored in a database somewhere.” The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the “General Data Protection Regulation” (GDPR).
Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being made public through hacks has also bolstered interest in commercial encryption. “WhatsApp” is a common text messaging app with built-in end-to-end encryption; the encryption was fully rolled out in 2016, and the app had 1.5 billion users by 2019. “Tor” is a web browser with built-in encryption that became relatively common during the 2010s after it was learned that even the NSA couldn’t spy on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection for little expense and hassle, are in common use.
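As an illustration of how accessible strong encryption has become, the snippet below encrypts and decrypts a message in a few lines (a sketch assuming the widely used Python “cryptography” package; it demonstrates symmetric encryption in general, not the specific protocols used by WhatsApp or Tor):

```python
from cryptography.fernet import Fernet

# Generate a random symmetric key; only someone holding it can decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"Meet me at the usual place at 7.")
print(token)                   # unreadable ciphertext
print(cipher.decrypt(token))   # b'Meet me at the usual place at 7.'
```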
“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”
RIGHT
It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.
In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”
The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.
There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.
“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”
PARTLY RIGHT
As I wrote earlier, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training is very common, but most workers devote small shares of their time to them, and the fraction of time spent doing workplace training doesn’t seem significantly different from what it was when the book was published.
From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.
This means the prediction was partly right, but also of questionable significance in the first place.
“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”
MOSTLY RIGHT
In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.
In 2018, a different computer program produced a painting–“Portrait of Edmond de Belamy”–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as “a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence.” That seems to fulfill the second part of the prediction (“These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.”)
Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called “EMMY” (an acronym for “Experiments in Musical Intelligence”) is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Listen to a few of the songs and judge their quality for yourself:
- “Bach style chorale Emmy David Cope”: https://youtu.be/PczDLl92vlc
- “Mozart sonata 2 3”: https://youtu.be/tJ6lwZPLBlk
- “Chopin style Mazurka 4 Emmy Cope”: https://youtu.be/DqNcnIkYM4s
- “Joplin style Rag Emmy David Cope”: https://youtu.be/R-_9zSSQK3o
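For readers curious about how software can mimic a composer’s style at all, the toy sketch below captures the most basic statistical version of the idea: learn which notes tend to follow which in a corpus, then generate a new sequence from those statistics. This is not how Cope’s program actually works (EMMY uses far more sophisticated recombinant analysis), and the “corpus” here is invented:

```python
import random

# Toy corpus of note sequences "in the style of" some composer (made up here).
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "F", "G"],
    ["G", "F", "E", "D", "C", "E", "G", "C"],
]

# Learn which notes tend to follow which (a first-order Markov chain).
transitions = {}
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)

# Generate a new melody by sampling from the learned transitions.
random.seed(7)
note, melody = "C", ["C"]
for _ in range(12):
    note = random.choice(transitions.get(note, ["C"]))
    melody.append(note)
print(" ".join(melody))
```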
Computer scientists at OpenAI have built a neural network called “Jukebox” that is even more advanced than EMMY, and which can produce songs that are complete with simulated human vocals. While the words don’t always make sense and there’s much room for improvement, most humans have no creative musical talent at all and couldn’t do any better, and the quality, sophistication and coherence of the entirely machine-generated songs is very impressive (audio samples are available online).
Also at OpenAI, an artificial intelligence program called the “Generative Pre-trained Transformer” was invented to understand and write text. In 2019, the second version of the program, “GPT-2,” made its debut, and showed impressive skill writing poetry, short news articles and other content, with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the “human range” of writing ability since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well-written in whatever its original language was.
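The smaller versions of GPT-2 were publicly released, so anyone can reproduce this kind of output. A minimal sketch, assuming the Hugging Face “transformers” package is installed (the model weights, roughly 500 MB, download on the first run, and the output varies from run to run):

```python
from transformers import pipeline, set_seed

# Load the smallest publicly released GPT-2 model.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the otherwise random output repeatable

prompt = "In 2019, machines began to write poetry,"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```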
Much of GPT-2’s poetry is also as good as–or, as bad as–that written by its human counterparts:
And they have seen the last light fail;
By day they kneel and pray;
But, still they turn and gaze upon
The face of God to-day.
And God is touched and weeps anew
For the lost souls around;
And sorrow turns their pale and blue,
And comfort is not found.
They have not mourned in the world of men,
But their hearts beat fast and sore,
And their eyes are filled with grief again,
And they cease to shed no tear.
And the old men stand at the bridge in tears,
And the old men stand and groan,
And the gaunt grey keepers by the cross
And the spent men hold the crown.
And their eyes are filled with tears,
And their staves are full of woe.
And no light brings them any cheer,
For the Lord of all is dead
In conclusion, the prediction is right that there were “virtual artists” in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and “humanness” to be of interest for reasons other than the novelty of their origins. They’ve raised serious questions among humans about the nature of creative thinking, and about whether machines are already capable of it or soon will be. Finally, the virtual artists were “affiliated with,” or, more accurately, owned and controlled by, groups of humans.
“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”
UNCLEAR
It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?
Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
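The core trick behind Auto-Tune-style pitch correction is conceptually simple: estimate the pitch the singer actually produced and snap it to the nearest note of the scale. A toy illustration of just that snapping step (real software also has to detect the pitch in the audio, preserve the singer’s timbre, and smooth the corrections):

```python
import math

A4 = 440.0  # Hz, reference pitch

def snap_to_nearest_semitone(freq_hz):
    """Return the equal-temperament frequency closest to the input pitch."""
    semitones_from_a4 = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones_from_a4 / 12)

# A singer aiming for A4 (440 Hz) but landing flat at 432 Hz:
print(round(snap_to_nearest_semitone(432.0), 1))   # 440.0 -> corrected to A4
```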
If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.
“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”
WRONG
Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.
That said, the video game industry’s revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry’s 2018 figure was $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of “E-Sports”).
Revenues from virtual reality games totaled $1.2 billion in 2019, meaning 99% of the video game industry’s revenues that year DID NOT come from “virtual-experience software.” The overwhelming majority of video games were viewed on flat TV screens and monitors that display 2D images only. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you’re actually there in the simulated environment. While they don’t meet the technical definition of being “virtual reality” games, some of them are so engrossing that they might as well be.
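Putting the cited figures side by side makes the gap obvious (a quick back-of-the-envelope check using the revenue estimates quoted above, in billions of U.S. dollars):

```python
# Revenue estimates cited above, in billions of U.S. dollars.
revenues_billion_usd = {
    "video games (2019)":   152.1,
    "film (2019)":           41.7,
    "recorded music (2018)": 19.1,
    "VR games (2019)":        1.2,
}

vr_share = revenues_billion_usd["VR games (2019)"] / revenues_billion_usd["video games (2019)"]
print(f"VR games were {vr_share:.1%} of total video game revenue")   # ~0.8%
```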
“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”
MOSTLY WRONG
Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.
In other words, the world’s greatest military power only allocates 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that it is not the primary threat to international security, either.
Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big of a threat as cyberattackers. In fact, most of military spending and training in the U.S. and all other countries is still devoted to preparing for traditional warfare between nation-states, as evidenced by things like the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and time spent practicing for large battles between organized foes.
“Small groups” of terrorists inflict disproportionate amounts of damage on society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.
Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which killed only five people, but nonetheless lent some credence to Kurzweil’s assessment of bioterrorism’s potential), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made the virus more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.
Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.
“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”
UNCLEAR
What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.
What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.
“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: the Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).
With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.
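Since this size cutoff does all the work in the next two paragraphs, it may help to state it as an explicit rule. The sketch below encodes the Phantom X1-based criterion as a simple predicate, using unloaded weight as the proxy; the example weights are the ones quoted in this section:

    # The upper bound for "tiny" used in this analysis: the Phantom X1 ultralight
    # (250 lbs unloaded, 17 ft long, 28 ft wingspan). Unloaded weight is the simplest proxy.
    TINY_WEIGHT_LIMIT_LBS = 250

    def is_tiny(unloaded_weight_lbs):
        """Return True if a flying weapon falls under the Phantom X1 weight cutoff."""
        return unloaded_weight_lbs < TINY_WEIGHT_LIMIT_LBS

    print(is_tiny(100))     # Hellfire missile -> True
    print(is_tiny(1591))    # MH-6 light helicopter -> False
    print(is_tiny(14991))   # Saab Gripen fighter -> False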
If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.
If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.
The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, indicating that no one is actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.
Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.
“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”
PARTLY RIGHT
The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.
The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first, fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that number has climbed further, to 5,000 genetic disorders.
However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, just understanding what genes themselves do is not enough to understand human development or disease susceptibility.
Returning to the text of the prediction, “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made in uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.
The aging process is better understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made in uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.
“The expected life span…[is now] over one hundred.”
WRONG
The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.
“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”
WRONG
Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.
Top national security experts in the U.S. also assigned biological weapons a low priority, as evidenced by the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report mentions “biological weapons/warfare” only twice. By contrast, “migration/migrants/immigration” appears 11 times, “nuclear weapon” eight times, and “ISIS” 29 times.
As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.
Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it’s unclear whether the “bar” has gotten low enough for a graduate student to do it. Creating a pathogen with the qualities necessary for a biological weapon, verifying its effects, purifying it, building a delivery system for it, and disseminating it (all without being caught before completion or inadvertently infecting yourself along the way) is much harder than hysterical news articles and self-interested talking-head “experts” suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds, with levels of expertise exceeding those of a typical graduate student, years of work, and millions of dollars.
“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”
RIGHT
The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.
Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.
“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”
MOSTLY RIGHT
Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren’t considered accurate enough to “diagnose” health conditions. Rather, their role is to detect and alert wearers to signs of potential health problems, whereupon the wearer consults a medical professional with more advanced machinery and receives a diagnosis.
By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2” and the “Apple Watch Series 4 and 5” had FDA-cleared electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, about 2,000 received alerts from their devices in 2018 warning of possible heartbeat problems. Fifty-seven percent of the people in that subset sought medical help upon getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately, 84% of the people in the subset were confirmed to have atrial fibrillation.
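To put those numbers in perspective, here is a minimal sketch of the monitoring funnel described above, using only the counts and percentages already cited (the conversion into absolute numbers is mine):

    # Rough reconstruction of the 2018 Apple Watch ECG alert figures described above.
    monitored_users = 400_000
    alerted_users = 2_000

    alert_rate = alerted_users / monitored_users
    sought_help = 0.57 * alerted_users      # 57% of alerted users sought medical help
    confirmed_afib = 0.84 * alerted_users   # 84% of alerted users were confirmed to have atrial fibrillation

    print(f"Share of monitored users who got an alert: {alert_rate:.2%}")  # 0.50%
    print(f"Alerted users who sought medical help: ~{sought_help:.0f}")    # ~1140
    print(f"Alerted users confirmed to have AFib: ~{confirmed_afib:.0f}")  # ~1680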
The Apple Watches also have “hard fall” detection features, which use accelerometers to recognize when their wearers suddenly fall down and then don’t move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury from a fall count as an “acute health condition” per the prediction’s text?).
A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by a small company called “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.
Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. The devices are reasonably accurate, and are typically strapped to the wearer’s upper arm or waist if they are jogging, or kept in a pocket when doing other types of activity. Having a smartphone in your pocket isn’t literally the same as having it “built into [your] clothing” as the prediction says, but it’s close enough to satisfy the spirit of the prediction. In fact, being able to easily insert and remove a device into any article of clothing with a pocket is better than having a device integrated into the clothing since it allows for much more flexibility of attire–if you want to try out a new jogging route and also want to measure how long it is, you don’t have to remember to wear your one and only T-shirt with the built-in activity monitoring devices.
Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, no items of jewelry with embedded health tracking capabilities were available in 2019, so that part of the prediction is wrong.
“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”
RIGHT
In 2014, a chatbot named “Eugene Goostman” made global headlines by “passing the Turing Test” at an event held at the University of Reading. However, AI experts soon determined it was invalid because the dialog between the human judges and the machine only lasted five minutes. Ray Kurzweil himself even interacted with Eugene Goostman and posted the conversation on his website to show the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:
Kurzweil: How old are you?
Eugene: I’m a little boy. 13 years old.
Kurzweil: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Kurzweil: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.
Kurzweil: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.
Kurzweil: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!
In 2018, a Google AI program called “Duplex” also made headlines for “passing the Turing Test” in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While this was an impressive technological feat, experts again disagreed with the media’s portrayal of its capabilities, and pointed out that the human-machine interactions weren’t valid Turing Tests because they were too short and focused on a narrow subject of conversation.
“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”
RIGHT
The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness. A few recent examples of this include:
- “Will Machines Ever Become Conscious?” Scientific American, December 1, 2019
- “What is consciousness, and could machines have it?” Science, October 27, 2017
- “The Myth of Sentient Machines” Psychology Today, June 1, 2016
Those are all thoughtful articles written by experts whose credentials are relevant to the subject of machine consciousness. There are countless more articles, essays, speeches, and panel discussions about it available on the internet.
Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.
“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”
RIGHT
Critics often, and rightly, point out that the most impressive “A.I.s” owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.
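As a concrete illustration of how much those human-supplied labels matter, here is a minimal, self-contained sketch of supervised training on labeled data. It is not any real company’s pipeline, just a toy logistic-regression “image” classifier in plain NumPy; the point is that every label has to exist before the machine can learn anything from the data:

    import numpy as np

    # Toy stand-in for a human-labeled image dataset: each row is a flattened "image,"
    # and each label is assumed to have been assigned by a person (e.g., 1 = "cat", 0 = "not a cat").
    rng = np.random.default_rng(0)
    images = rng.normal(size=(200, 64))                              # 200 tiny fake images, 64 "pixels" each
    human_labels = (images[:, 0] + images[:, 1] > 0).astype(float)   # labels a human annotator might have supplied

    # Logistic regression trained by gradient descent on those human-supplied labels.
    weights = np.zeros(64)
    for _ in range(500):
        predictions = 1 / (1 + np.exp(-(images @ weights)))          # sigmoid
        gradient = images.T @ (predictions - human_labels) / len(human_labels)
        weights -= 0.5 * gradient                                    # gradient-descent step

    accuracy = ((predictions > 0.5) == human_labels).mean()
    print(f"Training accuracy on the human-labeled set: {accuracy:.0%}")

Without the labels, the training loop has nothing to minimize; the “intelligence” of the finished model is inseparable from the human effort that produced the dataset.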
Finally, even the smartest and most capable machines can’t turn themselves on without human help, and they still have very “brittle,” task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars: in 2019, the computers were smart enough to drive safely by themselves in almost all road conditions, but laws required the vehicles to monitor the human in the driver’s seat and stop if he or she wasn’t paying attention to the road and touching the controls.
Links:
- Ray Kurzweil’s self-analysis of how accurate his 2009 predictions were: (https://kurzweilai.net/images/How-My-Predictions-Are-Faring.pdf)
- The inventor of the first augmented reality contact lenses predicted in 2015 that commercially viable versions of the devices wouldn’t exist for at least 20 more years.
(https://www.inverse.com/article/31034-augmented-reality-contact-lenses)
- In late 2019, a Magic Leap One cost $2,300 – $3,300 and a Hololens was $3,000.
(https://www.cnn.com/2019/12/10/tech/magic-leap-ar-for-companies/index.html)
- In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.
(https://www.theverge.com/2019/5/16/18625238/vr-virtual-reality-headsets-oculus-quest-valve-index-htc-vive-nintendo-labo-vr-2019)
- In 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs. Keyboards aren’t dead.
(https://venturebeat.com/2019/01/10/gartner-and-idc-hp-and-lenovo-shipped-the-most-pcs-in-2018-but-total-numbers-fell/)
- Survey data from 2018 about the global usage of “digital personal assistants.” Users speak to their smartphones or smart speakers, mostly to obtain simple information (like weather forecasts) or to have their computers do simple tasks.
(https://www.business2community.com/infographics/the-growth-in-usage-of-virtual-digital-assistants-infographic-02056086)
- 2019 Pew Survey showing that the overwhelming majority of American adults owned a smartphone or traditional PC. People over age 64 were the least likely to own smartphones.
(https://www.pewresearch.org/internet/fact-sheet/mobile/)
- A 2015 American Community Survey revealed that households headed by people over 64 were the least likely to have smartphones, PCs, or internet access.
(https://www.census.gov/content/dam/Census/library/publications/2017/acs/acs-37.pdf)
- In 2000, 34% of Americans accessed the internet through dial-up modems, and only 3% did so through “broadband” (a catch-all for cable, DSL, and satellite access). Most U.S. internet users were still using dial-up modems that were at most 56k. The remaining 63% didn’t access it at all.
(http://thetechnews.com/2016/01/03/usa-getting-faster-internet-speeds-but-not-at-the-pace-others-are/)
- In 2019, a mid-tier internet service plan in the U.S. granted users download speeds of 30 – 60 Mbps.
(https://www.pcmag.com/news/state-by-state-the-fastest-and-slowest-us-internet)
- 2019 U.S. mobile phone network average speeds were 33.88 Mbps for downloads and 9.75 Mbps for uploads.
(https://www.speedtest.net/reports/united-states/)
- The Black Friday 2019 circular for Newegg.com featured five models of printers for sale. Only one of them, the Brother HL-L2300D, wasn’t WiFi-capable.
(https://bestblackfriday.com/ads/newegg-black-friday/page-12#ad_view)
- Gartner figures for global computer sales in 2015, 2016, 2017, 2018 and 2019.
(https://www.gartner.com/en/newsroom/press-releases/2017-01-11-gartner-says-2016-marked-fifth-consecutive-year-of-worldwide-pc-shipment-decline)
(https://venturebeat.com/2018/01/11/gartner-and-idc-agree-hp-shipped-the-most-pcs-in-2017/)
(https://www.gartner.com/en/newsroom/press-releases/2020-01-13-gartner-says-worldwide-pc-shipments-grew-2-point-3-percent-in-4q19-and-point-6-percent-for-the-year)
- Intel’s i7 Generation 8 processor is capable of 361.3 gigaflop speeds.
(https://www.pugetsystems.com/labs/hpc/Skylake-X-7800X-vs-Coffee-Lake-8700K-for-compute-AVX512-vs-AVX2-Linpack-benchmark-1068/)
- 3.2 billion people owned a smartphone in 2019.
(https://newzoo.com/insights/trend-reports/newzoo-global-mobile-market-report-2019-light-version/)
- In 2019, 3D chips were common in memory storage devices, like MicroSD cards. 3D NAND chips had up to 64 layers.
(https://semiengineering.com/what-happened-to-nanoimprint-litho/)
- In 2019, Intel was still working the kinks out of its first 3D computer processor, called “Lakefield,” and it wasn’t ready for commercial sales.
(https://www.overclock3d.net/news/cpu_mainboard/intel_details_their_lakefield_processor_design_and_foveros_3d_packaging_tech/1)
- In 2019, computer circuits made of carbon nanotubules were still stuck in research labs, and held back from commercialization by many unsolved problems relating to cost of manufacture and reliability. Silicon was still the dominant computing substrate.
(https://www.sciencenews.org/article/chip-carbon-nanotubes-not-silicon-marks-computing-milestone)
- “Compute cycle” has three meanings: #1 (https://www.zdnet.com/article/how-much-is-a-unit-of-cloud-computing/), #2 (https://www.quora.com/What-is-a-Compute-cycle) and #3 (https://www.computerhope.com/jargon/c/compute.htm)
- In a 2019 experiment, researchers were able to decode the words a person was speaking by studying their brain activity.
(https://www.biorxiv.org/content/10.1101/350124v2)
- “The current ways of trying to represent the nervous system…[are little better than] what we had 50 years ago.” –Marvin Minsky, 2013
(https://youtu.be/3PdxQbOvAlI)
- “Today’s neural nets use algorithms that were essentially developed in the early 1980s.”
(https://futurism.com/cmu-brain-research-grant)
- The inventor of “back-propagation,” which spawned many computer algorithms central to AI research, now believes it will never lead to true intelligence, and that the human brain doesn’t use it.
(https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html)
- Henry Markram’s project to create a human brain simulation by 2019 failed.
(https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/)
- “Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat.” –Yann LeCun, 2017
(https://www.theverge.com/2017/10/26/16552056/a-intelligence-terminator-facebook-yann-lecun-interview)
- Machine neural networks are similar to human brains in key ways.
(https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414)
- Some machine neural nets use genetic algorithms.
(https://blog.coast.ai/lets-evolve-a-neural-network-with-a-genetic-algorithm-code-included-8809bece164)
- Quantum imaging is a real thing. However, devices that can make use of it are still experimental.
(https://onlinelibrary.wiley.com/doi/full/10.1002/lpor.201900097)
- The Samsung Galaxy S10 is an upper-end smartphone released in 2019. It has three digital cameras, all of which operate on the same technology principles as the digital cameras of 1999.
(https://www.digitalcameraworld.com/reviews/samsung-galaxy-s10-camera-review)
- The 2016 Nobel Prize in Chemistry was given to three scientists who had done pioneering work on nanomachines.
(https://www.extremetech.com/extreme/237575-2016-nobel-prize-in-chemistry-awarded-for-nanomachines)
- Dr. Marc Miskin’s micromachines from 2019 are interesting, but a far cry from what Kurzweil thought we’d have by then.
(https://www.inquirer.com/health/micro-robots-upenn-cornell-20190307.html)
- There were less than 1 million augmented reality glasses in the world at the end of 2019.
(https://arinsider.co/2019/09/11/5-million-ar-headsets-by-2023/)
- Sales of print books in 2017 were not much different from what they probably were in 1999, when the Age of Spiritual Machines was published.
(https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/75735-sales-of-print-books-increased-slightly-in-2017.html)
- Sales figures for “graphic paper” prove that, while paper books, newspapers, and office documents are declining, they aren’t “dead” or even “uncommon” yet.
(https://www.mckinsey.com/industries/paper-forest-products-and-packaging/our-insights/graphic-paper-producers-boosting-resilience-amid-the-covid-19-crisis)
- The “Internet Archive” has scans of 3.8 million books, and is growing.
(https://www.pcmag.com/news/the-internet-archive-is-linking-digital-books-to-wikipedia-citations)
- By late 2019, the U.S. National Archives had put 92 million pages of government documents on its website, free for anyone to view.
(https://narations.blogs.archives.gov/2019/10/02/naras-record-group-explorer-a-new-path-into-naras-holdings/)
- The 2020 report COVID-19 on Campus found that most U.S. college students found online instruction an inferior way to learn compared to traditional classroom instruction.
(https://marketplace.collegepulse.com/img/covid19oncampus_ckf_cp_final.pdf)
- Another 2020 survey of U.S. teenagers found that most of them considered online learning to be less effective than in-person classes.
(https://www.surveymonkey.com/curiosity/common-sense-media-school-reopening/)
- A 2020 survey of U.S. teachers and school administrators found that student absenteeism rates climbed thanks to the introduction of online classes.
(https://www.edweek.org/ew/articles/2020/10/15/in-person-learning-expands-student-absences-up-teachers.html)
- A U.S. Census survey found in 2019 that 17% of students didn’t have computers in their homes and 18% had no internet access or very slow service.
(https://apnews.com/article/7f263b8f7d3a43d6be014f860d5e4132)
- The “Seeing AI” smartphone app uses the device’s camera to recognize text, objects and people and to read, describe, or name them out loud. Blind users have highly reviewed it.
(https://apps.apple.com/us/app/seeing-ai/id999062298#see-all/reviews)
- The “BlindSquare” smartphone app provides voice-based GPS navigation to users, and is also highly reviewed by blind people.
(https://apps.apple.com/us/app/blindsquare/id500557255#see-all/reviews)
- The FDA approves the “Argus II” retinal implant system for the blind in 2013.
(https://www.nature.com/news/fda-approves-first-retinal-implant-1.12439)
- In 2019, an app called “Zoi Meet” was developed for the Vuzix Blade AR glasses. The app produces real-time subtitles of spoken words, displayed across the wearer’s field of vision.
(https://www.vuzix.com/Blog/vuzix-blade-real-time-language-transcription-zoi-meet)
- In 2019, there were many smartphone apps that helped deaf people to communicate with hearing people.
(https://www.meriahnichols.com/best-deaf-apps/)
(https://abilitynet.org.uk/news-blogs/9-useful-apps-people-who-are-deaf-or-have-hearing-loss)
- “Glide” is a popular video phone app among deaf people.
(https://www.fastcompany.com/3054050/how-video-chat-app-glide-got-deaf-people-talking)
- “BW Dance” is an app that converts songs into patterns of vibrations and flashing lights that deaf people can experience.
(https://www.producthunt.com/posts/bw-dance)
- “Not Impossible Labs” makes body suits that allow deaf people to experience music in the form of complex patterns of vibrations.
(https://www.billboard.com/articles/news/8476553/not-impossible-labs-live-music-deaf)
- Cochlear implants have gotten better and more common among deaf people as time has passed.
(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111484/)
- U.S. sales growth of cochlear implants is projected to continue.
(https://www.grandviewresearch.com/industry-analysis/cochlear-implants-industry)
- Aside from cochlear implants, middle ear implants, auditory brainstem implants, and bone-anchored hearing aids can amplify or restore hearing.
(https://www.bcig.org.uk/cochlear-implant-devices/implantable-devices/)
- People who are blind, or deaf, or who have serious spinal cord damage are less likely to have jobs and also make less money than people who don’t have those conditions.
(https://www.afb.org/research-and-initiatives/employment/reviewing-disability-employment-research-people-blind-visually)
(https://www.nationaldeafcenter.org/news/employment-report-shows-strong-labor-market-passing-deaf-americans)
(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2792457/)
- A 2018 survey found that most American adults spent an average of 24-41 minutes per day on phone calls. The survey didn’t break that number out into traditional voice-only calls and video calls.
(https://www.zdnet.com/article/americans-spend-far-more-time-on-their-smartphones-than-they-think/)
- Another 2018 survey commissioned by the telecom company Vonage found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which strongly aligns with my everyday observations.
(https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/)
- A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible.
(https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf)
- The “Varjo VR-1” virtual reality goggles cost $6,000 and can display lifelike images at the centers of their screens.
(https://www.cnet.com/news/the-best-vr-display-ive-ever-seen-varjo-vr-1-costs-6000/)
- A roundup of the top ten speech-to-speech language translation apps of 2019.
(https://www.daytranslations.com/blog/top-10-free-language-translation-apps/)
- A 2018 study found that the best English-Mandarin machine translation programs were inferior to professional human translators.
(https://www.technologyreview.com/2018/09/05/140487/human-translators-are-still-on-top-for-now/)
- The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device.
(https://www.cnet.com/reviews/oculus-go-review/)
- As this 2019 article makes clear, virtual haptic technology is far less advanced than Kurzweil predicted it would be.
(https://www.scientificamerican.com/article/new-virtual-reality-interface-enables-touch-across-long-distances/)
- An account of a firsthand experience with cutting-edge (no pun intended) teledildonics in 2018:
(https://www.engadget.com/2018-07-02-flirt4free-teledildonics-long-distance-sex.html)
- A 2019 analysis shows that the vast majority of transactions in the U.S. are still done face-to-face between humans, but e-commerce’s share is steadily growing.
(https://www.digitalcommerce360.com/article/us-ecommerce-sales/)
- A roundup of the highest-rated robot vacuum cleaners of 2019:
(https://www.techhive.com/article/3388038/best-robot-vacuums-on-amazon.html)
- A list of advanced car safety features from 2019:
(https://www.caranddriver.com/features/g27612164/car-safety-features/)
- Tesla Autopilot is capable of Level 3 autonomous driving. However, out of an abundance of caution (e.g. – just one accident generates enormous bad publicity), the company has installed features that cap it at Level 2.
(https://electrek.co/2019/09/19/tesla-autopilot-v10-commute-without-driver-intervention/)
- French inventor Franky Zapata designed a flying skateboard called the “Flyboard Air,” and used it to cross the English Channel and wow crowds during the 2019 Bastille Day military parade.
(https://www.theverge.com/2019/8/4/20753648/jet-powered-hoverboard-english-channel-crossing-franky-zapata-success)
- These World Health Organization reports show that deadly road accidents were about as common in 2016 as they were in 2000. It’s still a leading cause of death.
(https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death)
(https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf?ua=1)
- The CDC reported that 43,024 people died in the U.S. in 2017 of “Transport accidents.” Only 1,718 of those did not involve road vehicles.
(https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_09_tables-508.pdf)
- Advances in AI during the 2010s forced humans to examine the specialness of human thinking, whether machines could also be intelligent and creative and what it would mean for humans if they could.
(https://www.bbc.com/news/business-47700701)
- Andrew Yang made technological unemployment and universal basic income (UBI) major components of his 2020 U.S. Presidential campaign platform.
(https://en.wikipedia.org/wiki/Andrew_Yang#2020_presidential_campaign)
- An article explaining “acoustic gunshot detection”:
(https://www.eff.org/pages/gunshot-detection)
- The “ShotSpotter” gunshot detection system was emplaced in over 100 cities in 2019.
(https://www.startribune.com/as-gunfire-continues-in-st-paul-so-does-shotspotter-debate/565382652/)
- This 2019 article from Dayton shows a correlation between the presence of license plate readers and a decrease in violent crime.
(https://www.daytondailynews.com/news/area-police-look-to-license-plates-readers-as-crime-fighting-tool/ESQLILHQP5HJTCIVJL6IJ6T7VU/)
- In 2018, a wanted criminal was arrested in China after facial recognition cameras identified him at a concert, out of a crowd of 60,000 people.
(https://www.bbc.com/news/world-asia-china-43751276)
- Edward Snowden’s key revelations about electronic spying.
(https://mashable.com/2014/06/05/edward-snowden-revelations/)
- An incomplete list of data hacks that happened in the 2010s. Hundreds of millions of people had important personal data compromised.
(https://www.cnn.com/2019/07/30/tech/biggest-hacks-in-history/index.html)
- A list of commonly used encrypted messaging apps in 2019.
(https://heimdalsecurity.com/blog/the-best-encrypted-messaging-apps/)
- In 2018, VPNs were widely used on every continent. Forty-four percent of Indonesian internet users had them.
(https://blog.globalwebindex.com/chart-of-the-day/vpn-usage-2018/)
- If obesity rates are any indication, people in the 2010s were not too poor to feed themselves.
(https://academic.oup.com/eurpub/article/23/3/464/536242)
- In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve.
(https://www.factcheck.org/2013/03/bloombergs-obesity-claim/)
- Several studies done during the 2010s indicated that governments would save money if they gave the homeless free apartments.
(https://www.vox.com/2014/5/30/5764096/homeless-shelter-housing-help-solutions)
- A 2016 article about Google’s “Deep Dream” program, which can make surreal, artistic images.
(https://www.theguardian.com/artanddesign/2016/mar/28/google-deep-dream-art)
- A computer-generated painting, “Portrait of Edmond de Belamy,” sold for $423,500 in 2018. Have YOU ever made a painting worth that much money?
(https://edition.cnn.com/style/article/obvious-ai-art-christies-auction-smart-creativity/index.html)
- “Obvious” is a “collective” of humans and computers that produce acclaimed art.
(https://obvious-art.com/page-about-obvious/)
- “EMMY” is a machine that can write decent instrumental songs.
(https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/)
- OpenAI’s “Jukebox” could even write songs that had simulated human voices singing.
(https://openai.com/blog/jukebox/)
- Samples of GPT-2’s poetry.
(https://www.gwern.net/GPT-2)
- Samples of GPT-2’s short news articles and written responses to prompts.
(https://openai.com/blog/better-language-models/)
- “Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”?
(https://en.wikipedia.org/wiki/Auto-Tune)
- The virtual reality gaming industry had about $1.2 billion in revenues in 2019.
(https://www.juniperresearch.com/press/press-releases/virtual-reality-games-revenues-reach-8-bn-2023)
- In 2017, terrorists killed 14,300 people globally.
(https://www.jewishvirtuallibrary.org/statistics-on-incidents-of-terrorism-worldwide)
- The U.S. spent $16.6 billion on cybersecurity in FY2019.
(https://www.fedscoop.com/cybersecurity-budget-2020-trump-white-house/)
- The U.S. military’s “base” defense budget was $726.2 billion in FY2019.
(https://fas.org/sgp/crs/natsec/R44519.pdf)
- The U.S. spent $33.6 billion on its nuclear forces in FY2019.
(https://www.cbo.gov/system/files/2019-01/54914-NuclearForces.pdf)
- The “Phantom X1” ultralight plane.
(https://en.wikipedia.org/wiki/Phantom_X1)
- Data for several “tiny” flying drones in use with the U.S. Navy in 2019.
(https://www.navy.mil/DesktopModules/ArticleCS/Print.aspx?PortalId=1&ModuleId=724&Article=2159299)
- Data on the U.S. Army’s unmanned drones, including “tiny” ones, from the same period.
(https://fas.org/irp/program/collect/uas-army.pdf)
- In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles.
(https://www.csis.org/analysis/us-military-forces-fy-2020-air-force)
- We recently discovered how a mutation in the “SF3B1” gene changes intracellular activity in ways that raise cancer risk.
(https://www.fredhutch.org/en/news/center-news/2019/10/sf3b1-cancer-mutation.html)
- The Human Genome Project led to major cost improvements to gene sequencing technology, and to the discovery of many disease-associated genes.
(https://unlockinglifescode.org/learn/human-genome-project)
- We have a better understanding of how cell-level molecular machinery contributes to aging.
(https://pure.au.dk/ws/files/52135662/DemirovicRattanExpGer13.pdf)
- Official 2018 life expectancy figures for the U.S. and Japan:
(https://www.cdc.gov/nchs/products/databriefs/db355.htm)
(https://www.nippon.com/en/features/h00250/life-expectancy-for-japanese-men-and-women-at-new-record-high.html)
- The 2019 Worldwide Threat Assessment barely mentions biological weapons.
(https://www.dni.gov/files/ODNI/documents/2019-ATA-SFR—SSCI.pdf)
- Pfizer’s COVID-19 vaccine is the first to incorporate mRNA. The new technology could lead to other vaccines that save millions of lives.
(https://www.wfaa.com/article/news/health/coronavirus/vaccine/what-is-an-mrna-covid-19-vaccine-and-how-does-it-differ-from-other-vaccines/287-240b8181-f13f-47a4-9514-9b6b30988d32)
(http://www.rationaloptimist.com/blog/mrna-vaccines-could-revolutionise-medicine/)
- Several smart watches available in 2019 had ECG monitors.
(https://www.reviewsbreak.com/best-ecg-smartwatch/)
(https://www.theverge.com/2018/9/13/17855006/apple-watch-series-4-ekg-fda-approved-vs-cleared-meaning-safe)
- In 2019, Apple Watches with ECG monitors detected atrial fibrillation events in almost 2,000 people.
(https://news.trust.org/item/20190316134851-5cktc/)
- The Apple Watch’s “hard fall” detection feature might have already saved the lives of several injured people.
(https://www.nbcnews.com/news/us-news/apple-watch-s-hard-fall-feature-automatically-calls-911-hiker-n1070471)
- The “HeartGuide” smart watch can monitor blood pressure.
(https://www.medtechdive.com/news/fda-cleared-wearable-blood-pressure-device-hits-market/544908/)
- The media wrongly declared in 2014 that the “Eugene Goostman” chatbot had passed the Turing Test.
(https://www.bbc.com/news/technology-27762088)
(https://www.kurzweilai.net/mt-notes-on-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test)
- Google’s “Duplex” AI could masquerade as human for short conversations.
(https://digital.hbs.edu/platform-rctom/submission/google-duplex-does-it-pass-the-turing-test/)
- The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks.
(https://www.ersj.eu/journal/1245)
- Facebook’s image recognition feature relied on a massive training set of data prepared by humans.
(https://engineering.fb.com/2018/05/02/ml-applications/advancing-state-of-the-art-image-recognition-with-deep-learning-on-hashtags/)
- Another setback for Kurzweil’s prediction about the rise of 3D computer processors is Intel’s recent decision to cancel sales of its Lakefield processors.
(https://www.extremetech.com/computing/324435-intel-eols-lakefield-its-first-x86-hybrid-cpu)