This is the third entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My previous entries on this subject can be found here:
“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”
PARTLY RIGHT
While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”
“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”
WRONG
As stated in previous installments of this analysis, the computerized glasses, goggles, and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” needed to show users simulated 3D images of people and objects near them. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally make 2D video calls using cameras and traditional screen displays.
Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by people on the other end of a call, and the seemingly small extra effort a video call requires compared to a mere voice call has proven a larger barrier than futurists 20 years ago probably expected.
“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”
MOSTLY WRONG
As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.
Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.
A human being’s field of vision is a front-facing, flared-out “cone” covering a 210-degree horizontal arc and a 150-degree vertical arc. This means that if you put a concave display in front of a person’s face big enough to fill those horizontal and vertical arcs, it would fill his entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.
If this concave screen’s pixels were squares measuring one degree of arc to a side, then the screen would be a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than normal sight. However, lab tests show that if we shrink the pixels to 1/60th of a degree on a side, making the concave screen a grid of 12,600 x 9,000 pixels, the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality is said to be “lifelike.”
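The arithmetic behind those numbers is simple enough to check directly; the sketch below just multiplies the figures given above (the 210° x 150° field of view and the 60-pixels-per-degree threshold):

```python
# Figures from the text: human field of view, and the pixel density
# at which 20/20 eyes can no longer distinguish individual pixels.
H_FOV_DEG = 210     # horizontal field of view, degrees
V_FOV_DEG = 150     # vertical field of view, degrees
PX_PER_DEG = 60     # "lifelike" threshold, pixels per degree

width_px = H_FOV_DEG * PX_PER_DEG     # 12,600
height_px = V_FOV_DEG * PX_PER_DEG    # 9,000
total_px = width_px * height_px       # 113,400,000 (~113.4 megapixels)

print(f"{width_px} x {height_px} = {total_px:,} pixels")
```

That ~113.4-megapixel total is the benchmark a wraparound "lifelike" display would have to hit.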
No commercially available VR goggles have anything close to lifelike displays, either in field of view or in 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they achieve 60 pixels per degree, but only in the central portion of their display screens, where the user’s eyes usually look. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves with a very high-quality 4K camera and you viewed the live footage through Varjo VR-1 goggles while keeping your eyes on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.
Problematically, a pair of Varjo VR-1’s costs $6,000. Also, in 2019, it is very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you could see most of the other person’s face.
VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.
“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”
MOSTLY RIGHT
In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.
The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.
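To make the moving parts concrete, a speech-to-speech translator chains three stages: speech recognition, text translation, and speech synthesis. The sketch below shows that structure only; the stage functions are stand-in stubs (real apps plug in cloud recognition, translation, and synthesis services), and the sample phrase and lookup table are hypothetical:

```python
def recognize_speech(audio):
    """Stub: convert source-language audio to text."""
    return "where is the train station"

def translate_text(text, src, dst):
    """Stub: translate text between languages via a toy lookup table."""
    lookup = {("en", "es", "where is the train station"):
              "donde esta la estacion de tren"}
    return lookup.get((src, dst, text), text)

def synthesize_speech(text):
    """Stub: render target-language text as audio (here, raw bytes)."""
    return text.encode("utf-8")

def speech_to_speech(audio, src="en", dst="es"):
    # The whole pipeline is just the three stages composed in order.
    text = recognize_speech(audio)
    translated = translate_text(text, src, dst)
    return synthesize_speech(translated)

print(speech_to_speech(b"...raw audio..."))
```

Each stage introduces its own errors, which is one reason the end-to-end output is "fair-to-good" rather than professional-translator quality.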
“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”
MOSTLY RIGHT
Reading text is easily and commonly done off of smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, and not three as the prediction specified.
As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).
“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”
WRONG
Aside from a few, expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is video game control pads that can vibrate to crudely approximate the feeling of shooting a gun or being next to an explosion.
“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”
WRONG
Though video phone technology has made remote doctor appointments more common, technology has not yet made it possible for doctors to remotely “touch” patients for physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices that let people remotely send and receive physical force to one another (called “teledildonics” when designed for sexual uses) exist, but they are too expensive and technically limited to have found more than marginal use.
“Rapid economic expansion and prosperity has continued.”
PARTLY RIGHT
Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?
The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.
Above is a chart I made using data from the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.
And this final chart shows global GDP growth over the same period.
Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it–even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. The fact that something as bad as the Great Recession happened at all means the prediction is wrong in an important sense, as it implied that economic growth would be continuous, but it wasn’t since it went negative for most of 2009, in the worst downturn since the 1930s.
At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.
While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods almost look like mirror images of each other. (Additionally, if we assume that adjustment happened because wiser financial regulators kept the lending bubbles and fake investments from forming in the first place, then we can also assume the Great Recession never happened, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, economic growth from 1998 – December 2008 was roughly the same as growth from January 2009 – December 2019.
“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”
WRONG
“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.
As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involving to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.
“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”
MOSTLY WRONG
The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
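The “stop” logic described above is simple threshold-checking against the live share price. Here is a minimal sketch of that idea, with all price levels and quantities hypothetical:

```python
def check_stop_orders(price, orders):
    """Return the preset orders triggered at the given market price.

    Each order is a dict with a 'type' ('stop_loss' sells when the
    price falls to or below the trigger; 'buy_stop' buys when it rises
    to or above the trigger), a 'trigger' price, and a 'quantity'.
    """
    triggered = []
    for order in orders:
        if order["type"] == "stop_loss" and price <= order["trigger"]:
            triggered.append(order)
        elif order["type"] == "buy_stop" and price >= order["trigger"]:
            triggered.append(order)
    return triggered

# Hypothetical example: sell 100 shares if the price drops to $45,
# buy 50 more if it climbs to $60.
orders = [
    {"type": "stop_loss", "trigger": 45.0, "quantity": 100},
    {"type": "buy_stop", "trigger": 60.0, "quantity": 50},
]
print(check_stop_orders(44.5, orders))   # the stop-loss fires
```

In a live system the broker’s matching engine runs this kind of check continuously, with the buyer’s and seller’s programs transacting with no human in the loop, which is the narrow sense in which the prediction came true.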
“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”
PARTLY RIGHT
Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.
No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.
“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”
WRONG*
The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.
Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but will become mandatory in the U.S. starting in 2022.
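The core of an automated emergency braking system can be sketched as a time-to-collision check: brake automatically when the gap to the object ahead will close too soon. The threshold and numbers below are illustrative, not taken from any real product:

```python
def should_auto_brake(gap_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Decide whether to apply the brakes without driver input.

    gap_m: distance to the object ahead, in meters
    closing_speed_mps: rate at which the gap is shrinking, in m/s
    ttc_threshold_s: brake if time-to-collision falls below this
    """
    if closing_speed_mps <= 0:   # holding distance or pulling away
        return False
    time_to_collision = gap_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

print(should_auto_brake(30.0, 10.0))   # TTC = 3.0 s: no intervention
print(should_auto_brake(10.0, 10.0))   # TTC = 1.0 s: brake
```

Real systems fuse camera and radar data to estimate the gap and closing speed, but the decision rule they feed is conceptually this simple, which is why AEB is the one feature that genuinely “takes control” as the prediction described.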
*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring the human drivers to clutch the steering wheels and to keep their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.
In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.
“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”
UNCLEAR (but probably WRONG)
The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?
What is a “personal flying vehicle”? A flying car, which is capable of both flight through the air and horizontal movement over roads, or a vehicle capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?
But even if we had answers to those questions, it wouldn’t matter much, since “have been demonstrated” is an escape hatch: the prediction counts as true if just two prototypes of personal flying vehicles were ever built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.
At least the prediction contains one, well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. They are fixed in one position, and in that configuration are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.
“There are very few transportation accidents.”
WRONG
Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.
In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.
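For context, the global figures above imply a per-capita death rate that a couple of lines can compute. The 2016 world population of roughly 7.4 billion is an assumption added here for illustration:

```python
# Rough per-capita road-death rate implied by the figures in the text.
road_deaths_2016 = 1_350_000           # worldwide deaths, from the text
world_population_2016 = 7_400_000_000  # assumed ~7.4 billion people

deaths_per_100k = road_deaths_2016 / world_population_2016 * 100_000
print(round(deaths_per_100k, 1))       # ~18.2 deaths per 100,000 people
```

A rate on the order of 18 deaths per 100,000 people per year is hard to square with “very few transportation accidents” under any reading.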
“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”
WRONG
As I noted in part 1 of this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, and it was coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.
Aaaaaand that’s it for now. I originally estimated this project to analyze all of Ray Kurzweil’s 2019 predictions could be spread out over three blog entries, but it has taken even more time and effort than I anticipated, and I need one more. Stay tuned, the fourth AND FINAL installment is coming soon!
Another 2018 survey commissioned by the telecom company Vonage found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which strongly aligns with my everyday observations. https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/
A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible. https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf
The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device. https://www.cnet.com/reviews/oculus-go-review/
This is the second entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My first entry on this subject can be found here.
“Hand-held displays are extremely thin, very high resolution, and weigh only ounces.”
RIGHT
The tablet computers and smartphones of 2019 meet these criteria. For example, the Samsung Galaxy Tab S5 is only 0.22″ thick, has a resolution high enough (3840 x 2160 pixels) that the human eye can’t discern individual pixels at normal viewing distances, and weighs 14 ounces (less than a pound, so its weight is properly expressed in ounces). Tablets like this are of course meant to be held in the hands during use.
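The “can’t discern individual pixels” claim can be sanity-checked against the common 60-pixels-per-degree rule of thumb for 20/20 vision. The resolution below is the figure quoted above; the 10.5-inch screen diagonal and 15-inch viewing distance are assumptions added for illustration:

```python
import math

# Does a display look "pixel-free" at a normal viewing distance?
width_px, height_px = 3840, 2160   # resolution quoted in the text
diagonal_in = 10.5                 # assumed screen diagonal, inches
viewing_distance_in = 15.0         # assumed hand-held viewing distance

diagonal_px = math.hypot(width_px, height_px)
ppi = diagonal_px / diagonal_in    # pixels per inch of screen
# One degree of visual angle spans this many inches at that distance:
inch_per_deg = viewing_distance_in * math.tan(math.radians(1))
px_per_deg = ppi * inch_per_deg

print(round(ppi), round(px_per_deg), px_per_deg >= 60)
```

Under those assumptions the screen packs roughly 110 pixels into each degree of visual angle, comfortably past the 60-pixels-per-degree threshold, so the claim holds.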
The smartphones of 2019 also meet Kurzweil’s criteria.
“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.”
MOSTLY WRONG
A careful reading of this prediction makes it clear that Kurzweil believed AR glasses would be commonest way people would read text documents by late 2019. The second most common method would be to read the documents off of smartphones and tablet computers. A distant last place would be to read old-fashioned books with paper pages. (Presumably, reading text off of a laptop or desktop PC monitor was somewhere between the last two.)
The first part of the prediction is badly wrong. At the end of 2019, there were fewer than 1 million sets of AR glasses in use around the world. Even if all of their owners were bibliophiles who spent all their waking hours using their glasses to read documents that were projected in front of them, it would be mathematically impossible for that to constitute the #1 means by which the human race, in aggregate, read written words.
Certainly, it is now much more common for people to read documents on handheld displays like smartphones and tablets than at any time in the past, and paper’s dominance of the written medium is declining. Additionally, there are surely millions of Americans who, like me, do the vast majority of their reading (whether for leisure or work) off of electronic devices and computer screens. However, old-fashioned print books, newspapers, magazines, and packets of workplace documents are far from extinct, and it is inaccurate to claim they “are rarely used or accessed,” in both the relative and absolute senses of the statement. As the bar chart above shows, sales of print books were actually slightly higher in 2019 than they were in 2004, which was near the time when The Age of Spiritual Machines was published.
Finally, sales of “graphic paper”–which is an industry term for paper used in newsprint, magazines, office printer paper, and other common applications–were still high in 2019, even if they were trending down. If 110 million metric tons of graphic paper were sold in 2019, then it can’t be said that “Paper books and documents are rarely used or accessed.” Anecdotally, I will say that, though my office primarily uses all-digital documents, it is still common to use paper documents, and in fact it is sometimes preferable to do so.
“Most twentieth-century paper documents of interest have been scanned and are available through the wireless network.”
RIGHT
The wording again makes it impossible to gauge the prediction’s accuracy. What counts as a “paper document”? For sure, we can say it includes bestselling books, newspapers of record, and leading science journals, but what about books that only sold a few thousand copies, small-town newspapers, and third-tier science journals? Are we also counting the mountains of government reports produced and published worldwide in the last century, mostly by obscure agencies and about narrow, bland topics? Equally defensible answers could result in document numbers that are orders of magnitude different.
Also, the term “of interest” provides Kurzweil with an escape hatch because its meaning is subjective. If it were the case that electronic scans of 99% of the books published in the twentieth century were NOT available on the internet in 2019, he could just say “Well, that’s because those books aren’t of interest to modern people” and he could then claim he was right.
It would have been much better if the prediction included a specific metric, like: “By the end of 2019, electronic versions of at least 1 million full-length books written in the twentieth century will be available through the wireless network.” Alas, it doesn’t, and Kurzweil gets this one right on a technicality.
For what it’s worth, I think the prediction was also right in spirit. Millions of books are now available to read online, and that number includes most of the 20th century books that people in 2019 consider important or interesting. One of the biggest repositories of e-books, the “Internet Archive,” has 3.8 million scanned books, and they’re free to view. (Google actually scanned 25 million books with the intent to create something like its own virtual library, but lawsuits from book publishers have put the project into abeyance.)
The New York Times, America’s newspaper of record, has made scans of every one of its issues since its founding in 1851 available online, as have other major newspapers such as the Washington Post. The cursory research I’ve done suggests that all or almost all issues of the biggest American newspapers are now available online, either through company websites or third party sites like newspapers.com.
The U.S. National Archives has scanned over 92 million pages of government documents, and made them available online. Primacy was given to scanning documents that were most requested by researchers and members of the public, so it could easily be the case that most twentieth-century U.S. government paper documents of interest have been scanned. Additionally, in two years the Archives will start requiring all U.S. agencies to submit ONLY digital records, eliminating the very cumbersome middle step of scanning paper, and thenceforth ensuring that government records become available to and easily searchable by the public right away.
The New England Journal of Medicine, the journal Science, and the journal Nature all offer scans of past issues dating back to their foundings in the 1800s. I lack the time to check whether this is also true for other prestigious academic journals, but I strongly suspect it is. All of the seminal papers documenting the significant scientific discoveries of the 20th century are now available online.
Without a doubt, the internet and a lot of diligent people scanning old books and papers have improved the public’s access to written documents and information by orders of magnitude compared to 1998. It truly is a different world.
“Most learning is accomplished using intelligent software-based simulated teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The teachers are viewed more as mentors and counselors than as sources of learning and knowledge.”
WRONG*
The technology behind and popularity of online learning and AI teachers didn’t advance as fast as Kurzweil predicted. At the end of 2019, traditional in-person instruction was far more common than and was widely considered to be superior to online learning, though the latter had niche advantages.
However, shortly after 2019 ended, the COVID-19 pandemic forced most of the world into quarantine in an effort to slow the virus’ spread. Schools, workplaces, and most other places where people usually gathered were shut down, and people the world over were forced to do everyday activities remotely. American schools and universities switched to online classrooms in what might be looked at as the greatest social experiment of the decade. For better or worse, most human teachers were no longer in the local vicinity of their students.
Thus, part of Kurzweil’s prediction came true, a few months late and as an unwelcome emergency measure rather than as a voluntary embrace of a new educational paradigm. Unfortunately, student reactions to online learning have been mostly negative. A 2020 survey found that most college students believed it was harder to absorb knowledge and to learn new skills through online classrooms than it was through in-person instruction. Almost all of them unsurprisingly said that traditional classroom environments were more useful for developing social skills. The survey data I found on the attitudes of high school students showed that most of them considered distance learning to be of inferior quality. Public school teachers and administrators across the country reported higher rates of student absenteeism when schools switched to 100% online instruction, and their support for it measurably dropped as time passed.
The COVID-19 lockdowns have made us confront hard truths about virtual learning. It hasn’t been the unalloyed good that Kurzweil seems to have expected, though technological improvements that make the experience more immersive (e.g., faster internet to reduce lag, virtual reality headsets) will surely solve some of the problems that have come to light.
“Students continue to gather together to exchange ideas and to socialize, although even this gathering is often physically and geographically remote.”
RIGHT
As I described at length, traditional in-person classroom instruction remained the dominant educational paradigm in late 2019, which of course means that students routinely gathered together for learning and socializing. The second part of the prediction is also right, since social media, cheaper and better computing devices and internet service, and videophone apps have made it much more common for students of all ages to study, work, and socialize together virtually than they did in 1998.
“All students use computation. Computation in general is everywhere, so a student’s not having a computer is rarely an issue.”
MOSTLY RIGHT
First, Kurzweil’s use of “all” was clearly figurative and not literal. If pressed on this back in 1998, surely he would have conceded that even in 2019, students living in Amish communities, living under strict parents who were paranoid technophobes, or living in the poorest slums of the poorest or most war-wrecked country would not have access to computing devices that had any relevance to their schooling.
Second, note the use of “computation” and “computer,” which are very broad in meaning. As I wrote in the first part of this analysis, “A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is…something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer.”
With these two caveats in mind, it’s clear that “all students use computation” by default since all people except those in the most deprived environments routinely interact with computing devices. It is also true that “computation in general is everywhere,” and the prediction merely restates this earlier prediction: “Computers are now largely invisible. They are embedded everywhere…” In the most literal sense, most of the prediction is correct.
However, a judgement is harder to make if we consider whether the spirit of the prediction has been fulfilled. In context, the prediction’s use of “computation” and “computer” surely refers to devices that let students efficiently study materials, watch instructional videos, and do complex school assignments like writing essays and completing math equations. These devices would have also required internet access to perform some of those key functions. At least in the U.S., virtually all schools in late 2019 had computer terminals with speedy internet access that students could use for free. A school without either of those would have been considered very unusual. Likewise, almost all of the country’s public libraries have public computer terminals and internet service (and, of course, books), which people can use for their studies and coursework if they don’t have computers or internet in their homes.
At the same time, 17% of students in the U.S. still don’t have computers in their homes and 18% have no internet access or very slow service (there’s probably large overlap between people in those two groups). Mostly this is because they live in remote areas where it isn’t profitable for telecom companies to install high-speed internet lines, or because they belong to extremely poor or disorganized households. This lack of access to computers and internet service results in measurably worse academic performance, a phenomenon called the “homework gap” or the “digital gap.” With this in mind, it’s questionable whether the prediction’s last claim, that “a student’s not having a computer is rarely an issue” has come true.
“Most adult human workers spend the majority of their time acquiring new skills and knowledge.”
WRONG
This is so obviously wrong that I don’t need to present any data or studies to support my judgement. With a tiny number of exceptions, employed adults spend most of their time at work using the same skills over and over to do the same set of tasks. Yes, today’s jobs are more knowledge-based and technology-based than ever before, and a greater share of jobs require formal degrees and training certificates than ever, but few professions are so complex or fast-changing that workers need to spend most of their time learning new skills and knowledge to keep up.
In fact, since The Age of Spiritual Machines was published, a backlash against the high costs and necessity of postsecondary education–at least as it is in America–has arisen. Sentiment is growing that the four-year college degree model is wasteful, obsolete for most purposes, and leaves young adults saddled with debts that take years to repay. Sadly, I doubt these critics will succeed in bringing about serious reforms to the system.
If and when we reach the point where a postsecondary degree is needed just to get a respectable entry-level job, and then merely keeping that job or moving up to the next rung on the career ladder requires workers to spend more than half their time learning new skills and knowledge–whether due to competition from machines that keep getting better and taking over jobs or due to the frequent introduction of new technologies that human workers must learn to use–then I predict a large share of humans will become chronically demoralized and will drop out of the workforce. This is a phenomenon I call “job automation escape velocity,” and I intend to discuss it at length in a future blog post.
“Blind persons routinely use eyeglass-mounted reading-navigation systems, which incorporate the new, digitally controlled, high-resolution optical sensors. These systems can read text in the real world, although since most print is now electronic, print-to-speech reading is less of a requirement. The navigation function of these systems, which emerged about ten years ago, is now perfected. These automated reading-navigation assistants communicate to blind users through both speech and tactile indicators. These systems are also widely used by sighted persons since they provide a high-resolution interpretation of the visual world.”
PARTLY RIGHT
As stated previously, AR glasses have not yet been successful on the commercial market and are used by almost no one, blind or sighted. However, there are smartphone apps meant for blind people that use the phone’s camera to scan what is in front of the person, and they have the range of functions Kurzweil described. For example, the “Seeing AI” app can recognize text and read it out loud to the user, and can recognize common objects and familiar people and verbally describe or name them.
Additionally, there are other smartphone apps, such as “BlindSquare,” which uses GPS and detailed verbal instructions to guide blind people to destinations. It also describes nearby businesses and points of interest, and can warn users of nearby curbs and stairs.
Apps that are made specifically for blind people are not in wide usage among sighted people.
“Retinal and vision neural implants have emerged but have limitations and are used by only a small percentage of blind persons.”
MOSTLY RIGHT
Retinal implants exist and can restore limited vision to people with certain types of blindness. However, they provide only a very coarse level of sight, are expensive, and require the use of body-worn accessories to collect, process, and transmit visual data to the eye implant itself. The “Argus II” device is the only retinal implant system available in the U.S., and the FDA approved it in 2013. As of this writing, the manufacturer’s website claimed that only 350 blind people worldwide used the systems, which indeed counts as “only a small percentage of blind persons.”
The meaning of “vision neural implants” is unclear, but could only refer to devices that connect directly to a blind person’s optic nerve or brain vision cortex. While some human medical trials are underway, none of the implants have been approved for general use, nor does that look poised to change.
“Deaf persons routinely read what other people are saying through the deaf persons’ lens displays.”
MOSTLY WRONG
“Lens displays” is clearly referring to those inside augmented reality glasses and AR contact lenses, so the prediction says that a person wearing such eyewear would be able to see speech subtitles across his or her field of vision. While there is at least one model of AR glasses–the Vuzix Blade–that has this capability, almost no one uses them because, as I explored in part 1 of this review, AR glasses failed on the commercial market. By extension, this means the prediction also failed to come true since it specified that deaf people would “routinely” wear AR glasses by 2019.
However, in the prediction’s defense, deaf people commonly use real-time speech-to-text apps on their smartphones. While not as convenient as having captions displayed across one’s field of view, it still makes communication with non-deaf people who don’t know sign language much easier. Google, Apple, and many other tech companies have fielded high-quality apps of this nature, some of which are free to download. Deaf people can also type words into their smartphones and show them to people who can’t understand sign language, which is easier than the old-fashioned method of writing things down on notepad pages and slips of paper.
Additionally, video chat / video phone technology is widespread and has been a boon to deaf people. By allowing callers to see each other, video calls let deaf people remotely communicate with each other through sign language, facial expressions and body movements, letting them experience levels of nuanced dialog that older text-based messaging systems couldn’t convey. Video chat apps are free or low-cost, and can deliver high-quality streaming video, and the apps can be used even on small devices like smartphones thanks to their forward-facing cameras.
In conclusion, while the specifics of the prediction were wrong, the general sentiment that new technologies, specifically portable devices, would greatly benefit deaf people was right. Smartphones, high-speed internet, and cheap webcams have made deaf people far more empowered in 2019 than they were in 1998.
“There are systems that provide visual and tactile interpretations of other auditory experiences such as music, but there is debate regarding the extent to which these systems provide an experience comparable to that of a hearing person.”
RIGHT
There is an Apple phone app called “BW Dance” meant for the deaf that converts songs into flashing lights and vibrations that are said to approximate the notes of the music. However, there is little information about the app and it isn’t popular, which makes me think deaf people have not found it worthy of buying or talking about. Though apparently unsuccessful, the existence of the BW Dance app meets all the prediction’s criteria. The prediction says nothing about whether the “systems” will be popular among deaf people by 2019–it just says the systems will exist.
That’s probably an unsatisfying answer, so let me mention some additional research findings. A company called “Not Impossible Labs” sells body suits designed for deaf people that convert songs into complex patterns of vibrations transmitted into the wearer’s body through 24 different touch points. The suits are well-reviewed, and it’s easy to believe that they’d provide a much richer sensory experience than a buzzing smartphone with the BW Dance app would. However, the suits lack any sort of displays, meaning they don’t meet the criterion of providing users a visual interpretation of songs.
There are many “music visualization” apps that create patterns of shapes, colors, and lines to convey the musical structures of songs, and some deaf people report they are useful in that role. It would probably be easy to combine a vibrating body suit with AR glasses to provide wearers with immersive “visual and tactile interpretations” of music. The technology exists, but the commercial demand does not.
“Cochlear and other implants for improving hearing are very effective and are widely used.”
RIGHT
Since receiving FDA approval in 1984, cochlear implants have significantly improved in quality and have become much more common among deaf people. While the level of benefit widely varies from one user to another, the average user ends up hearing well enough to carry on a phone conversation in a quiet room. That means cochlear implants are “very effective” for most people who use them, since the alternative is usually having no sense of hearing at all. Cochlear implants are in fact so effective that they’ve spurred fears among deaf people that they will eradicate Deaf culture and end the use of sign language, leading some deaf people to reject the devices even though their senses would benefit.
Other types of implants for improving hearing also exist, including middle ear implants, bone-anchored hearing aids, and auditory brainstem implants. While some of these alternatives are more optimal for people with certain hearing impairments, they haven’t had the same impact on the Deaf community as cochlear implants.
“Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.”
WRONG
Paraplegics and quadriplegics use the same wheelchairs they did in 1998, and they can only traverse stairs that have electronic lift systems. As noted in my Prometheus review, powered exoskeletons exist today, but almost no one uses them, probably due to very high costs and practical problems. Some rehabilitation clinics for people with spinal cord and leg injuries use therapeutic techniques in which the disabled person’s legs and spine are connected to electrodes that activate in sequences that assist them to walk, but these nerve and muscle stimulation devices aren’t used outside of those controlled settings. To my knowledge, no one has built the sort of prosthesis that Kurzweil envisioned, which was a powered exoskeleton that also had electrodes connected to the wearer’s body to stimulate leg muscle movements.
“Generally, disabilities such as blindness, deafness, and paraplegia are not noticeable and are not regarded as significant.”
WRONG (sadly)
As noted, technology has not improved the lives of disabled people as much as Kurzweil predicted it would between 1998 and 2019. Blind people still need to use walking canes, most deaf people don’t have hearing implants of any sort (and if they do, their hearing is still much worse than average), and paraplegics still use wheelchairs. Their disabilities are often noticeable at a glance, and always after a few moments of face-to-face interaction.
Blindness, deafness, and paraplegia still have many significant negative impacts on people afflicted with them. As just one example, employment rates and average incomes for working-age people with those infirmities are all lower than they are for people without. In 2019, the U.S. Social Security program still viewed those conditions as disabilities and paid welfare benefits to people with them.
Bird brains are radically different from mammalian brains, but produce similar levels of intelligent thought. Bird brains might actually be superior since they are made of smaller, more densely-packed neurons, meaning a bird would be smarter than a mammal whose brain had the same volume. Hundreds of years from now, “humans” might have denser brains and smarter minds thanks to radical genetic engineering that takes inspiration from other organisms. https://science.sciencemag.org/content/369/6511/1567
In 1991, Joe Biden predicted that “[By the year 2020] I’ll be dead and gone in all probability.” Three months remain in this year so… https://youtu.be/i4TuxvhoMs4
Using genetic engineering, scientists were able to transplant sperm from one male farm animal to a sterile male of the same species so that the recipient male produced the same sperm as the donor male. This could make it cheaper and easier to breed prized farm animals by using genetically inferior males as “surrogate fathers” for their offspring, and it could let us resurrect extinct species for which we have frozen sperm samples. https://www.pnas.org/content/117/39/24195
World-renowned scientist Stephen Wolfram gave a wide-ranging, four-hour interview. I set this up to play at what seemed like a particularly interesting moment, but you should watch it from the beginning. https://www.youtube.com/watch?v=-t1_ffaFXao&t=2862s
A recent experiment with an underwater server farm went well. Cooling costs were much lower because the capsule was immersed in cold seawater, and few of the servers failed because the atmospheric content in the capsule could be controlled better (a pure nitrogen atmosphere helped because oxygen corrodes computer circuits and cables). For this and other reasons, I think intelligent machines might live in the oceans. https://www.bbc.com/news/technology-54146718
Many common, manmade objects could be made more durable and longer-lasting, for relatively small up-front cost. However, this is rarely done since it goes against the interests of manufacturers, who want consumers to buy replacement goods often. Planned obsolescence is real and pervasive. It’s disturbing to think about how big a share of global economic activity is people buying replacements for things that shouldn’t have needed to be thrown out. https://www.youtube.com/watch?v=zdh7_PA8GZU
The human backup driver was found criminally responsible for the infamous 2018 crash of a self-driving car that killed a homeless woman. https://www.bbc.com/news/technology-54175359
‘“Inertial navigation was perhaps the pinnacle of mechanical engineering and among the most complicated objects ever manufactured”…But in the 1990s these were superseded by micro-electromechanical systems (MEMS)—chips with vibrating mechanical structures that detect angular motion. MEMS technology is cheap and ubiquitous (it is used in car airbags and toy drones). That makes it hard to restrict by way of military-export controls.’ https://www.economist.com/science-and-technology/2020/01/16/irans-attack-on-iraq-shows-how-precise-missiles-have-become
And the worst “aircraft carriers” ever were the CAM Ships of WWII. The planes were violently catapulted/rocketed into the air, did their thing, and were then expected to crash land in the water next to a friendly ship, whereupon the pilot would be rescued. https://en.wikipedia.org/w/index.php?title=CAM_ship&oldid=961354276
The Congressional Budget Office predicts the pandemic’s human and economic impact will be felt for decades. Declining birthrates and higher mortality will lead to the U.S. population being 11 million people smaller in 2050 than it otherwise would have been. https://www.cbo.gov/publication/56598
In 1999, Ray Kurzweil, one of the world’s greatest futurists, published a book called The Age of Spiritual Machines. In it, he made the case that artificial intelligence, nanomachines, virtual reality, brain implants, and other technologies would greatly improve during the 21st century, radically altering the world and the human experience. In the final four chapters, titled “2009,” “2019,” “2029,” and “2099,” he made detailed predictions about what the state of key technologies would be in each of those years, and how they would impact everyday life, politics and culture.
Towards the end of 2009, a number of news columnists, bloggers and even Kurzweil himself weighed in on how accurate his predictions from the eponymous chapter turned out to be. By contrast, no such analysis was done over the past year regarding his 2019 predictions. As such, I’m taking it upon myself to do it.
I started analyzing the accuracy of Kurzweil’s predictions in late 2019 and wanted to publish my full results before the end of that year. However, the task required me to do much more research than I had expected, so I missed that deadline. Really digging into the text of The Age of Spiritual Machines and parsing each sentence made it clear that the number and complexity of the 2019 predictions were greater than a casual reading would suggest. Once I realized how big of a task it would be, I became kind of demoralized and switched to working on easier projects for this blog.
With the end of 2020 on the horizon, I think time is running out to finish this, and I’ve decided to tackle the problem by breaking it into smaller, manageable chunks: My analysis of Kurzweil’s 2019 predictions from The Age of Spiritual Machines will be spread out over three blog entries, the first of which you’re now reading. Except where noted, I will only use sources published before January 1, 2020 to support my conclusions.
“Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”
RIGHT
A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is (also, it doesn’t even need to run on electricity). This means something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer. These kinds of items were ubiquitous in developed countries in 1998 when Ray Kurzweil wrote the book, so his “futuristic” prediction for 2019 could have just as easily applied to the reality of 1998. This is an excellent example of Kurzweil making a prediction that leaves a certain impression on the casual reader (“Kurzweil says computers will be inside EVERY object in 2019!”) that is unsupported by a careful reading of the prediction.
“People routinely use three-dimensional displays built into their glasses or contact lenses. These ‘direct eye’ displays create highly realistic, virtual visual environments overlaying the ‘real’ environment.”
MOSTLY WRONG
The first attempt to introduce augmented reality glasses in the form of Google Glass was probably the most notorious consumer tech failure of the 2010s. To be fair, I think this was because the technology wasn’t ready yet (e.g., small visual display, low-res images, short battery life, high price), and not because the device concept is fundamentally unsound. The technological hangups that killed Google Glass will of course vanish in the future thanks to factors like Moore’s Law. Newer AR glasses, like Microsoft’s Hololens, are already superior to Google Glass, and given the pace of improvement, I think AR glasses will be ready for another shot at widespread commercialization by the end of the 2020s. Even then, they will not replace smartphones, for a variety of reasons: the unwillingness of many people to wear glasses, widespread discomfort with the possibility that anyone wearing AR glasses might be filming the people around them, and the durability and battery life advantages of smartphones.
Kurzweil’s prediction that contact lenses would have augmented reality capabilities completely failed. A handful of prototypes were made, but never left the lab, and there’s no indication that any tech company is on the cusp of commercializing them. I doubt it will happen until the 2030s.
However, people DO routinely access augmented reality, but through their smartphones and not through eyewear. Pokemon Go was a worldwide hit among video gamers in 2016, and is an augmented reality game where the player uses his smartphone screen to see virtual monsters overlaid across live footage of the real world. Apps that let people change their appearances during live video calls (often called “face filters”), such as by making themselves appear to have cartoon rabbit ears, are also very popular among young people.
So while Kurzweil got augmented reality technology’s form factor wrong, and overestimated how quickly AR eyewear would improve, he was right that ordinary people would routinely use augmented reality.
The augmented reality glasses will also let you experience virtual reality.
WRONG
Augmented reality glasses and virtual reality goggles remain two separate device categories. I think we will someday see eyewear that merges both functions, but it will take decades to invent glasses that are thin and light enough to be worn all day, untethered, but that also have enough processing power and battery life to provide a respectable virtual reality experience. The best we can hope for by the end of the 2020s will be augmented reality glasses that are good enough to achieve ~10% of the market penetration of smartphones, and virtual reality goggles that have shrunk to the size of ski goggles.
Of note is that Kurzweil’s general sentiment that VR would be widespread by 2019 is close to being right. VR gaming made a resurgence in the 2010s thanks to better technology, and looks poised to go mainstream in the 2020s.
The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.
PARTLY RIGHT
The most popular AR glasses of the 2010s, Google Glass, worked by projecting images onto their wearer’s retinas. The more advanced AR glass models that existed at the end of the decade used a mix of methods to display images, none of which has established dominance.
The “Magic Leap One” AR glasses use the retinal projection technology Kurzweil favored. They are superior to Google Glass since images are displayed to both eyes (Glass only had a projector for the right eye), in higher resolution, and covering a larger fraction of the wearer’s field of view (FOV). Magic Leap One also has advanced sensors that let it map its physical surroundings and movements of its wearer, letting it display images of virtual objects that seem to stay fixed at specific points in space (Kurzweil called this feature “Virtual-reality overlay display”).
Microsoft’s “Hololens” uses a different technology to produce images: the lenses are in fact transparent LCD screens. They display images just like a TV screen or computer monitor would. However, unlike those devices, the Hololens’ LCDs are clear, allowing the wearer to also see the real world in front of them.
The “Vuzix Blade” AR glasses have a small projector that beams images onto the lens in front of the viewer’s right eye. Nothing is directly beamed onto his retina.
It must be emphasized again that, at the end of 2019, none of these or any other AR glasses were in widespread or common use, even in rich countries. They were confined to small numbers of hobbyists, technophiles, and software developers. A Magic Leap One headset cost $2,300 – $3,300 depending on options, and a Hololens was $3,000.
And as stated, AR glasses and VR goggles remained two different categories of consumer devices in 2019, with very little crossover in capabilities and uses. The top-selling VR goggles were the Oculus Rift and the HTC Vive. Both devices use tiny OLED screens positioned a few inches in front of the wearer’s eyes to display images, and as a result, are much bulkier than any of the aforementioned AR glasses. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.
“[There] are auditory ‘lenses,’ which place high resolution-sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.”
MOSTLY RIGHT
Humans have the natural ability to tell where sounds are coming from in 3D space because we have “binaural hearing”: our brains can calculate the spatial origin of the sound by analyzing the time delay between that sound reaching each of our ears, as well as the difference in volume. For example, if someone standing to your left is speaking, then the sounds of their words will reach your left ear a split second sooner than they reach your right ear, and their voice will also sound louder in your left ear.
By carefully controlling the timing and loudness of sounds that a person hears through their headphones or through a single speaker in front of them, we can take advantage of the binaural hearing process to trick people into thinking that a recording of a voice or some other sound is coming from a certain direction even though nothing is there. Devices that do this are said to be capable of “binaural audio” or “3D audio.” Kurzweil’s invented term “audio lenses” means the same thing.
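To make the timing effect concrete, here is a minimal Python sketch of the interaural time difference. The ~21 cm ear spacing and the simple far-field model are my illustrative assumptions, not figures from the book:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second in air at room temperature
EAR_SPACING = 0.21      # meters between an adult's ears (rough assumption)

def interaural_time_difference(azimuth_deg):
    """Approximate delay (in seconds) between a sound reaching the near ear
    and the far ear, for a distant source at the given azimuth
    (0 degrees = straight ahead, 90 degrees = directly to one side)."""
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A voice directly to your left reaches your left ear ~0.6 ms before your right
print(round(interaural_time_difference(90) * 1000, 2))  # 0.61 (milliseconds)
```

Binaural audio software runs this logic in reverse: given a desired virtual direction, it delays and attenuates one stereo channel relative to the other so the listener’s brain infers that direction.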
Yes, there are eyeglasses with built-in speakers that play binaural audio. The Bose Frames “smart sunglasses” are the best example. Even though the devices are not common, they are commercially available, priced low enough for most people to afford them ($200), and have gotten good user reviews. Kurzweil gets this one right, and not by an eyerolling technicality as would be the case if only a handful of million-dollar prototype devices existed in a tech lab and barely worked.
Wireless earbuds are much more popular, and upper-end devices like the SoundPEATS Truengine 2 have impressive binaural audio capabilities. It’s a stretch, but you could argue that branding, and sleek, aesthetically pleasing design qualifies some higher-end wireless earbud models as “jewelry.”
Sound bars have also improved and have respectable binaural surround sound capabilities, though they’re still inferior to traditional TV entertainment system setups where the sound speakers are placed at different points in the room. Sound bars are examples of single-point devices that can trick people into thinking sounds are originating from different points in space, and in spirit, I think they are a type of technology Kurzweil would cite as proof that his prediction was right.
The last part of Kurzweil’s prediction is wrong, since audio implants into the inner ears are still found only in people with hearing problems, which is the same as it was in 1998. More generally, people have shown themselves more reluctant to surgically implant technology in their bodies than Kurzweil seems to have predicted, but they’re happy to externally wear it or to carry it in a pocket.
“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication.”
MOSTLY WRONG
Rumors of the keyboard’s demise have been greatly exaggerated. Consider that, in 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs.
The research I’ve done suggests that the typical desktop, laptop, and ultramobile computer has a lifespan of four years. If we accept this, and also assume that the worldwide computer sales figures for 2015, 2016, and 2017 were the same as 2018’s, then it means there are 1.036 billion fully functional desktops, laptops, and ultramobile computers on the planet (about one for every seven people). By extension, that means there are at least 1.036 billion keyboards. No one could reasonably say that Kurzweil’s prediction that keyboards would be “rare” by 2019 is correct.
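The installed-base estimate above boils down to one multiplication, which can be checked in a couple of lines of Python (the four-year lifespan and the flat annual sales figure are, as stated, assumptions):

```python
annual_sales = 259_000_000  # desktops, laptops, and ultramobiles sold per year
lifespan_years = 4          # assumed average lifespan of a machine

installed_base = annual_sales * lifespan_years
print(f"{installed_base / 1e9:.3f} billion machines")  # 1.036 billion machines
```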
The second sentence in Kurzweil’s prediction is harder to analyze since the meaning of “interaction with computing” is vague and hence subjective. As I wrote before, a Casio digital watch counts as a computer, so if it’s nighttime and I press one of its buttons to illuminate the display so I can see the time, does that count as an “interaction with computing”? Maybe.
If I swipe my thumb across my smartphone’s screen to unlock the device, does that count as an “interaction with computing” accomplished via a finger gesture? It could be argued so. If I then use my index finger to touch the Facebook icon on my smartphone screen to open the app, and then use a flicking motion of my thumb to scroll down over my News Feed, does that count as two discrete operations in which I used finger gestures to interact with computing?
You see where this is going…
Being able to set the bar that low makes it possible that this part of Kurzweil’s prediction is right, as unsatisfying as that conclusion may be.
Virtual reality gaming makes use of hand-held and hand-worn controllers that monitor the player’s hand positions and finger movements so he can grasp and use objects in the virtual environment, like weapons and steering wheels. Such actions count as interactions with computing. The technology will only get more refined, and I can see them replacing older types of handheld game controllers.
Hand gestures, along with speech, are also the natural means to interface with augmented reality glasses since the devices have tiny surfaces available for physical contact, meaning you can’t fit a keyboard on a sunglass frame. Future AR glasses will have front-facing cameras that watch the wearer’s hands and fingers, allowing them to interact with virtual objects like buttons and computer menus floating in midair, and to issue direct commands to the glasses through specific hand motions. Thus, as AR glasses get more popular in the 2020s, so will the prevalence of this mode of interface with computers.
“Two-way natural-language spoken communication” is now a common and reliable means of interacting with computers, as anyone with a smart speaker like an Amazon Echo can attest. In fact, virtual assistants like Alexa, Siri, and Cortana can be accessed via any modern smartphone, putting this within reach of billions of people.
The last part of Kurzweil’s prediction, that people would be using “facial expressions” to communicate with their personal devices, is wrong. For what it’s worth, machines are gaining the ability to read human emotions through our facial expressions (including “microexpressions”) and speech. This area of research, called “affective computing,” is still stuck in the lab, but it will doubtless improve and find future commercial applications. Someday, you will be able to convey important information to machines through your facial expressions, tone of voice, and word choice just as you do to other humans now, enlarging your mode of interacting with “computing” to encompass those domains.
“Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Users can model the personality of their intelligent assistants on actual persons, including themselves…”
WRONG
The most widely used computer-based personal assistants–Alexa, Siri, and Cortana–don’t have “personalities” or simulated emotions. They always speak in neutral or slightly upbeat tones. Users can customize some aspects of their speech and responses (e.g. – talking speed, gender, regional accent, language), and Alexa has limited “skill personalization” abilities that allow it to tailor some of its responses to the known preferences of the user interacting with it, but this is too primitive to count as a “personality adjustment” feature.
My research didn’t find any commercially available AI personal assistant that has something resembling a “human personality,” or that is capable of changing that personality. However, given current trends in AI research and natural language understanding, and growing consumer pressure on Silicon Valley to make products that better cater to the needs of nonwhite people, it is likely this will change by the end of this decade.
“Typically, people do not own just one specific ‘personal computer’…”
RIGHT
A 2019 Pew survey showed that 75% of American adults owned at least one desktop or laptop PC. Additionally, 81% of them owned a smartphone and 52% had tablets, and both types of devices have all the key attributes of personal computers (advanced data storing and processing capabilities, audiovisual outputs, accepts user inputs and commands).
The data from that and other late-2010s surveys strongly suggest that most of the Americans who don’t own personal computers are people over age 65, and that the 25% of Americans who don’t own traditional PCs are very likely to be part of the 19% that also lack smartphones, and also part of the 48% without tablets. The statistical evidence plus consistent anecdotal observations of mine lead me to conclude that the “typical person” in the U.S. owned at least two personal computers in late 2019, and that it was atypical to own fewer than that.
“Computing and extremely high-bandwidth communication are embedded everywhere.”
MOSTLY RIGHT
This is another prediction whose wording must be carefully parsed. What does it mean for computing and telecommunications to be “embedded” in an object or location? What counts as “extremely high-bandwidth”? Did Kurzweil mean “everywhere” in the literal sense, including the bottom of the Marianas Trench?
First, thinking about my example, it’s clear that “everywhere” was not meant to be taken literally. The term was a shorthand for “at almost all places that people typically visit” or “inside of enough common objects that the average person is almost always near one.”
Second, as discussed in my analysis of Kurzweil’s first 2019 prediction, a machine that is capable of doing “computing” is of course called a “computer,” and they are much more ubiquitous than most people realize. Pocket calculators, programmable thermostats, and even a Casio digital watch count as computers. Even 30-year-old cars have computers inside of them. So yes, “computing” is “embedded ‘everywhere'” because computers are inside of many manmade objects we have in our homes and workplaces, and that we encounter in public spaces.
Of course, scoring that part of Kurzweil’s prediction as being correct leaves us feeling hollow since those devices can’t do the full range of useful things we associate with “computing.” However, as I noted in the previous prediction, 81% of American adults own smartphones, they keep them in their pockets or near their bodies most of the time, and smartphones have all the capabilities of general-purpose PCs. Smartphones are not “embedded” in our bodies or inside of other objects, but given their ubiquity, they might as well be. Kurzweil was right in spirit.
Third, the Wifi and mobile phone networks we use in 2019 are vastly faster at data transmission than the modems that were in use in 1999, when The Age of Spiritual Machines was published. At that time, the commonest way to access the internet was through a 33.6k dial-up modem, which could upload and download data at a maximum speed of 33,600 bits per second (bps), though upload speeds never got as close to that limit as download speeds. 56k modems had been introduced in 1998, but they were still expensive and less common, as were broadband alternatives like cable TV internet.
In 2019, standard internet service packages in the U.S. typically offered WiFi download speeds of 30,000,000 – 70,000,000 bps (my home WiFi speed is 30-40 Mbps, and I don’t have an expensive service plan). Mean U.S. mobile phone internet speeds were 33,880,000 bps for downloads and 9,750,000 bps for uploads. That’s a 1,000 to 2,000-fold speed increase over 1999, and is all the more remarkable since today’s devices can traffic that much data without having to be physically plugged in to anything, whereas the PCs of 1999 had to be plugged into modems. And thanks to the wireless nature of internet data transmissions, “high-bandwidth communication” is available in all but the remotest places in 2019, whereas it was only accessible at fixed-place computer terminals in 1999.
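The claimed speed-up is easy to verify. In the snippet below, the 35 Mbps WiFi figure is a mid-range assumption drawn from the service packages described above:

```python
dialup_bps = 33_600           # 33.6k modem, the common 1999 connection
wifi_2019_bps = 35_000_000    # assumed mid-range 2019 home WiFi download speed
mobile_2019_bps = 33_880_000  # mean 2019 U.S. mobile download speed cited above

print(round(wifi_2019_bps / dialup_bps))    # 1042
print(round(mobile_2019_bps / dialup_bps))  # 1008
```

Both ratios land in the 1,000 to 2,000-fold range claimed above.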
Again, Kurzweil’s use of the term “embedded” is troublesome, since it’s unclear how “high-bandwidth communication” could be embedded in anything. It emanates from and is received by things, and it is accessible in specific places, but it can’t be “embedded.” Given this and the other considerations, I think every part of Kurzweil’s prediction was correct in spirit, but that he was careless with how he worded it, and that it would have been better written as: “Computing and extremely high-bandwidth communication are available and accessible almost everywhere.”
“Cables have largely disappeared.”
MOSTLY RIGHT
Assessing the prediction requires us to deduce which kinds of “cables” Kurzweil was talking about. To my knowledge, he has never been an exponent of wireless power transfer and has never forecast that technology becoming dominant, so it’s safe to say his prediction didn’t pertain to electric cables. Indeed, larger computers like desktop PCs and servers still need to be physically plugged into electrical outlets all the time, and smaller computing devices like smartphones and tablets need to be physically plugged in to routinely recharge their batteries.
That leaves internet cables and data/power cables for peripheral devices like keyboards, mice, joysticks, and printers. On the first count, Kurzweil was clearly right. In 1999, WiFi was a new invention that almost no one had access to, and logging into the internet always meant sitting down at a computer that had some type of data plug connecting it to a wall outlet. Cell phones weren’t able to connect to and exchange data with the internet, except maybe for very limited kinds of data transfers, and it was a pain to use the devices for that. Today, most people access the internet wirelessly.
On the second count, Kurzweil’s prediction is only partly right. Wireless keyboards and mice are widespread, affordable, and are mature technologies, and even lower-cost printers meant for people to use at home usually come with integrated wireless networking capabilities, allowing people in the house to remotely send document files to the devices to be printed. However, wireless keyboards and mice don’t seem about to displace their wired predecessors, nor would it even be fair to say that the older devices are obsolete. Wired keyboards and mice are cheaper (they are still included in the box whenever you buy a new PC), easier to use since users don’t have to change their batteries, and far less vulnerable to hacking. Also, though they’re “lower tech,” wired keyboards and mice impose no handicaps on users when they are part of a traditional desktop PC setup. Wireless keyboards and mice are only helpful when the user is trying to control a display that is relatively far from them, as would be the case if the person were using their living room television as a computer monitor, or if a group of office workers were viewing content on a large screen in a conference room, and one of them was needed to control it or make complex inputs.
No one has found this subject interesting enough to compile statistics on the percentages of computer users who own wired vs. wireless keyboards and mice, but my own observation is that the older devices are still dominant.
And though average computer printers in 2019 have WiFi capabilities, the small “complexity bar” to setting up and using the WiFi capability makes me suspect that most people are still using a computer that is physically plugged into their printer to control the latter. These data cables could disappear if we wanted them to, but I don’t think they have.
This means that Kurzweil’s prediction that cables for peripheral computer devices would have “largely disappeared” by the end of 2019 was wrong. For what it’s worth, the part that he got right vastly outweighs the part he got wrong: The rise of wireless internet access has revolutionized the world by giving ordinary people access to information, services and communication at all but the remotest places. Unshackling people from computer terminals and letting them access the internet from almost anywhere has been extremely empowering, and has spawned wholly new business models and types of games. On the other hand, the world’s failure to fully or even mostly dispense with wired computer peripheral devices has been almost inconsequential. I’m typing this on a wired keyboard and don’t see any way that a more advanced, wireless keyboard would help me.
“The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” [Or 20 petaflops]
WRONG
Graphics cards provide the most calculations per second at the lowest cost of any type of computer processor. The NVIDIA GeForce RTX 2080 Ti Graphics Card is one of the fastest computers available to ordinary people in 2019. In “overclocked” mode, where it is operating as fast as possible, it does 16,487 billion calculations per second (called “flops”).
A GeForce RTX 2080 Ti retails for $1,100 and up, but let’s be a little generous to Kurzweil and assume we’re able to get them for $1,000.
$4,000 in 1999 dollars equals $6,164 in 2019 dollars. That means today, we can buy 6.164 GeForce RTX 2080 Ti graphics cards for the amount of money Kurzweil specified.
6.164 cards x 16,487 billion calculations per second per card = 101,625 billion calculations per second for the whole rig.
This computational cost-performance level is two orders of magnitude worse than Kurzweil predicted.
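For readers who want to check the arithmetic, the whole comparison fits in a few lines of Python (the $1,000 card price is the deliberately generous figure assumed above):

```python
flops_per_card = 16_487e9  # RTX 2080 Ti in overclocked mode, from above
card_price = 1_000         # generous assumed price per card, in dollars
budget = 6_164             # $4,000 in 1999 dollars, adjusted to 2019 dollars

rig_flops = (budget / card_price) * flops_per_card
brain_flops = 20e15        # Kurzweil's 20 petaflop estimate for one brain

print(f"{rig_flops:.3e}")              # 1.016e+14
print(round(brain_flops / rig_flops))  # 197 -- about two orders of magnitude
```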
Additionally, according to Top500.org, a website that keeps a running list of the world’s best supercomputers and their performance levels, the “Leibniz Rechenzentrum SuperMUC-NG” is the ninth fastest computer in the world and the fastest in Germany, and straddles Kurzweil’s line since it runs at 19.4 petaflops or 26.8 petaflops depending on method of measurement (“Rmax” or “Rpeak”). A press release said: “The total cost of the project sums up to 96 Million Euro [about $105 million] for 6 years including electricity, maintenance and personnel.” That’s about four orders of magnitude worse than Kurzweil predicted.
I guess the good news is that at least we finally do have computers that have the same (or slightly more) processing power as a single, average, human brain, even if the computers cost tens of millions of dollars apiece.
“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”
WRONG
Kurzweil explains his calculations in the “Notes” section in the back of the book. He first multiplies the computation performed by one human brain by the estimated number of humans who will be alive in 2019 to get the “total computing capacity of the human species.” Confusingly, his math assumes one human brain does 10 petaflops, whereas in his preceding prediction he estimates it is 20 petaflops. He also assumed 10 billion people would be alive in 2019, but the figure fell mercifully short and was ONLY 7.7 billion by the end of the year.
Plugging in the correct figure, we get (7.7 x 10^9 humans) x 10^16 flops = 7.7 x 10^25 flops = the actual total computing capacity of all human brains in 2019.
Determining the total computing capacity of all computers in existence in 2019 can only really be guessed at. Kurzweil estimated that at least 1 billion machines would exist in 2019, and he was right. Gartner estimated that 261 million PCs (which includes desktop PCs, notebook computers [seems to include laptops], and “ultramobile premiums”) were sold globally in 2019. The figures for the preceding three years were 260 million (2018), 263 million (2017), and 270 million (2016). Assuming that a newly purchased personal computer survives for four years before being fatally damaged or thrown out, we can estimate that there were 1.05 billion of the machines in the world at the end of 2019.
However, Kurzweil also assumed that the average computer in 2019 would be as powerful as a human brain, and thus capable of 10 petaflops, but reality fell far short of the mark. As I revealed in my analysis of the preceding prediction, a 10 petaflop computer setup would cost somewhere between $606,543 (in GeForce RTX 2080 Ti graphics cards) and $52.5 million (for half a Leibniz Rechenzentrum SuperMUC-NG supercomputer). None of the people who own the 1.05 billion personal computers in the world spent anywhere near that much money, and their machines are far less powerful than human brains.
Let’s generously assume that all of the world’s 1.05 billion PCs are higher-end (for 2019) desktop computers that cost $900 – $1,200. Everyone’s machine has an Intel Core i7, 8th Generation processor, which offers speeds of a measly 361.3 gigaflops (3.613 x 10^11 flops). A 10 petaflop human brain is 27,678 times faster!
Plugging in the computer figures, we get (1.05 x 10^9 personal computers) x 3.613 x 10^11 flops = 3.794 x 10^20 flops = the total computing capacity of all personal computers in 2019. That’s five orders of magnitude short. The reality of 2019 computing definitely fell wide of Kurzweil’s expectations.
What if we add the computing power of all the world’s smartphones to the picture? Approximately 3.2 billion people owned a smartphone in 2019. Let’s assume all the devices are higher-end (for 2019) iPhone XR’s, which everyone bought new for at least $500. The iPhone XR’s have A12 Bionic processors, and my research indicates they are capable of 700 – 1,000 gigaflop maximum speeds. Let’s take the higher-end estimate and do the math.
3.2 billion smartphones x 10^12 flops = 3.2 x 10^21 flops = the total computing capacity of all smartphones in 2019.
Adding things up, pretty much all of the world’s personal computing devices (desktops, laptops, smartphones, netbooks) only produce 3.5794 x 10^21 flops of computation. That’s still four orders of magnitude short of what Kurzweil predicted. Even if we assume that my calculations were too conservative, and we add in commercial computers (e.g. – servers, supercomputers), and find that the real amount of artificial computation is ten times higher than I thought, at 3.5794 x 10^22 flops, this would still only be equivalent to 1/2000th, or 0.05% of the total computing capacity of all human brains (7.7 x 10^25 flops). Thus, Kurzweil’s prediction that it would be 10% by 2019 was very wrong.
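The whole estimate can be reproduced in a short Python sketch; every input below is one of the assumed figures stated above:

```python
humans = 7.7e9
brain_flops = 1e16   # Kurzweil's 10 petaflop per-brain figure for this calc
human_total = humans * brain_flops

pcs = 1.05e9
pc_flops = 3.613e11  # Intel Core i7, 8th Generation
phones = 3.2e9
phone_flops = 1e12   # high-end estimate for an A12 Bionic

machine_total = pcs * pc_flops + phones * phone_flops
print(f"{machine_total:.4e}")   # 3.5794e+21

nonhuman_share = machine_total / (human_total + machine_total)
print(f"{nonhuman_share:.4%}")  # well under 0.01%, versus the predicted 10%
```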
“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”
WRONG
For those who don’t know much about computers, the prediction says that rotating disk hard drives will be replaced with solid-state drives that have no moving parts. A thumbdrive uses solid-state memory, as do all smartphones and tablet computers.
I gauged the accuracy of this prediction through a highly sophisticated and ingenious method: I went to the nearest Wal-Mart and looked at the computers they had for sale. Two of the mid-priced desktop PCs had rotating disk hard drives, and they also had DVD disc drives, which was surprising, and which probably makes the “other electromechanical computing devices” part of the prediction false.
If the world’s biggest brick-and-mortar retailer is still selling brand new computers with rotating hard disk drives and rotating DVD disc drives, then it can’t be said that solid state memory storage has “fully replaced” the older technology.
“Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.”
MOSTLY WRONG
Many solid-state computer memory chips, such as common thumbdrives and MicroSD cards, have 3D circuitry, and it is accurate to call them “prevalent.” However, 3D circuitry has not found routine use in computer processors thanks to unsolved problems with high manufacturing costs, unacceptably high defect rates, and overheating.
In late 2018, Intel claimed it had overcome those problems thanks to a proprietary chip manufacturing process, and that it would start selling the resulting “Lakefield” line of processors soon. These processors have four vertically stacked layers, so they meet the requirement for being “3D.” Intel hasn’t sold any yet, and it remains to be seen whether they will be commercially successful.
Silicon is still the dominant computer chip substrate, and carbon-based nanotubes haven’t been incorporated into chips because Intel and AMD couldn’t figure out how to cheaply and reliably fashion them into chip features. Nanotube computers are still experimental devices confined to labs, and they are grossly inferior to traditional silicon-based computers when it comes to doing useful tasks. Nanotube computer chips that are also 3D will not be practical anytime soon.
It’s clear that, in 1999, Kurzweil simply overestimated how much computer hardware would improve over the next 20 years.
“The majority of ‘computes’ of computers are now devoted to massively parallel neural nets and genetic algorithms.”
UNCLEAR
Assessing this prediction is hard because it’s unclear what the term “computes” means. It is probably shorthand for “compute cycles,” which is a term that describes the sequence of steps to fetch a CPU instruction, decode it, access any operands, perform the operation, and write back any result. It is a process that is more complex than doing a calculation, but that is still very basic. (I imagine that computer scientists are the only people who know, offhand, what “compute cycle” means.)
Assuming “computes” means “compute cycles,” I have no idea how to quantify the number of compute cycles that happened, worldwide, in 2019. It’s an even bigger mystery to me how to determine which of those compute cycles were “devoted to massively parallel neural nets and genetic algorithms.” Kurzweil doesn’t describe a methodology that I can copy.
Also, what counts as a “massively parallel neural net”? How many processor cores does a neural net need to have to be “massively parallel”? What are some examples of non-massively parallel neural nets? Again, an ambiguity with the wording of the prediction frustrates an analysis. I’d love to see Kurzweil assess the accuracy of this prediction himself and to explain his answer.
“Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets.”
PARTLY RIGHT
The use of the ambiguous adjective “significant” gives Kurzweil an escape hatch for the first part of this prediction. Since 1999, brain scanning technology has improved, and the body of scientific literature about how brain activity correlates with brain function has grown. Additionally, much has been learned by studying the brain at a macro-level rather than at a cellular level. For example, in a 2019 experiment, scientists were able to accurately reconstruct the words a person was speaking by analyzing data from the person’s brain implant, which was positioned over their auditory cortex. Earlier experiments showed that brain-computer-interface “hats” could do the same, albeit with less accuracy. It’s fair to say that these and other brain-scanning studies represent “significant progress” in understanding how parts of the human brain work, and that the machines were gathering data at the level of “brain regions” rather than at the finer level of individual brain cells.
Yet in spite of many tantalizing experimental results like those, an understanding of how the brain produces cognition has remained frustratingly elusive, and we have not extracted any new algorithms for intelligence from the human brain in the last 20 years that we’ve been able to incorporate into software to make machines smarter. The recent advances in deep learning and neural network computers–exemplified by machines like AlphaZero–use algorithms invented in the 1980s or earlier, just running on much faster computer hardware (specifically, on graphics processing units originally developed for video games).
If anything, since 1999, researchers who studied the human brain to gain insights that would let them build artificial intelligences have come to realize how much more complicated the brain was than they first suspected, and how much harder of a problem it would be to solve. We might have to accurately model the brain down to the intracellular level (e.g. – not just neurons simulated, but their surface receptors and ion channels simulated) to finally grasp how it works and produces intelligent thought. Considering that the best we have done up to this point is mapping the connections of a fruit fly brain and that a human brain is 600,000 times bigger, we won’t have a detailed human brain simulation for many decades.
“It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.”
RIGHT
This prediction is right, but it’s not noteworthy since it merely re-states things that were widely accepted and understood to be true when the book was published in 1999. It’s akin to predicting that “A thing we think is true today will still be considered true in 20 years.”
The prediction’s first statement is an odd one to make since it implies that there was ever serious debate among brain scientists and geneticists over whether the human genome encoded every detail of how the human brain is wired. As Kurzweil points out earlier in the book, the human genome is only about 3 billion base-pairs long, and the genetic information it contains could be as low as 23 megabytes, but a developed human brain has 100 billion neurons and 1015 connections (synapses) between those neurons. Even if Kurzweil is underestimating the amount of information the human genome stores by several orders of magnitude, it clearly isn’t big enough to contain instructions for every aspect of brain wiring, and therefore, it must merely lay down more general rules for brain development.
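A quick back-of-the-envelope calculation shows why the genome cannot encode the wiring directly; the 23-megabyte figure is Kurzweil’s own estimate of the genome’s information content, cited above:

```python
genome_bits = 23e6 * 8  # ~23 megabytes of genetic information, in bits
synapses = 1e15         # connections in a developed human brain

bits_per_synapse = genome_bits / synapses
print(f"{bits_per_synapse:.1e}")  # 1.8e-07
```

At less than a millionth of a bit per synapse, the genome can only specify general developmental rules, not individual connections.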
I also don’t understand why Kurzweil wrote the second part of the statement. It’s commonly recognized that part of childhood brain development involves the rapid pruning of interneuronal connections that, based on interactions with the child’s environment, prove less useful, and the strengthening of connections that prove more useful. It would be apt to describe this as “a rapid evolutionary process” since the child’s brain is rewiring to adapt the child to its surroundings. This mechanism of strengthening brain connection pathways that are rewarded or frequently used, and weakening pathways that result in some kind of misfortune or that are seldom used, continues until the end of a person’s life (though it gets less effective as they age). This paradigm was “recognized” in 1999 and has never been challenged.
Machine-based neural nets are, in a very general way, structured like the human brain; they also rewire themselves in response to stimuli, and some of them use genetic algorithms to guide the rewiring process (see this article for more info: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414). However, all of this was also true in 1999.
“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”
WRONG
Devices that harness the principle of quantum entanglement to create images of distant objects do exist and are better than devices from 1999, but they aren’t good enough to exit the R&D labs. They also have not been shrunk to pinhead sizes. Kurzweil overestimated how fast this technology would develop.
Virtually all cameras still have lenses, and still operate by the old method of focusing incoming light onto a physical medium that captures the patterns and colors of that light to form a stored image. The physical medium used to be film, but now it is a digital image sensor.
Digital cameras were expensive, clunky, and could only take low-quality images in 1999, so most people didn’t think they were worth buying. Today, all of those deficiencies have been corrected, and a typical digital camera sensor plus its integrated lens is the size of a small coin. As a result, the devices are very widespread: 3.2 billion people owned a smartphone in 2019, and virtually all of those phones had built-in digital cameras. Laptops and tablet computers also typically have built-in cameras. Small standalone devices, like pocket cameras, webcams, car dashcams, and home security doorbell cameras, are also cheap and very common. And as any perusal of YouTube.com will attest, people are using their cameras to record events of all kinds, all the time, and are sharing them with the world.
This prediction stands out as one that was wrong in specifics, but kind of right in spirit. Yes, since 1999, cameras have gotten much smaller, cheaper, and higher-quality, and as a result, they are “everywhere” in the figurative sense, with major consequences (good and bad) for the world. Unfortunately, Kurzweil needlessly stuck his neck out by saying that the cameras would use an exotic new technology, and that they would be “pinhead-sized” (he hurt himself the same way by saying that the augmented reality glasses of 2019 would specifically use retinal projection). For those reasons, his prediction must be judged as “wrong.”
“Autonomous nanoengineered machines can control their own mobility and include significant computational engines. These microscopic machines are beginning to be applied to commercial applications, particularly in manufacturing and process control, but are not yet in the mainstream.”
WRONG
While there has been significant progress in nano- and micromachine technology since 1999 (the 2016 Nobel Prize in Chemistry was awarded to scientists who had invented nanomachines), the devices have not gotten nearly as advanced as Kurzweil predicted. Some microscopic machines can move around, but the movement is guided externally rather than autonomously. For example, turtle-like micromachines invented by Dr. Marc Miskin in 2019 can move by twirling their tiny “flippers,” but the motion is powered by shining laser beams on them to expand and contract the metal in the flippers. The micromachines lack their own power packs, lack computers that tell the flippers to move, and therefore aren’t autonomous.
In 2003, UCLA scientists invented “nano-elevators,” which were also capable of movement and still stand as some of the most sophisticated types of nanomachines. However, they also lacked onboard computers and power packs, and were entirely dependent on external control (the addition of acidic or basic liquids to make their molecules change shape, resulting in motion). The nano-elevators were not autonomous.
Similarly, a “nano-car” was built in 2005, and it can drive around a flat plate made of gold. However, the movement is uncontrolled and only happens when an external stimulus–an input of high heat into the system–is applied. The nano-car isn’t autonomous or capable of doing useful work. This and all the other microscopic machines created up to 2019 are just “proof of concept” machines that demonstrate mechanical principles that will someday be incorporated into much more advanced machines.
Significant progress has been made since 1999 building working “molecular motors,” which are an important class of nanomachine, and building other nanomachine subcomponents. However, this work is still in the R&D phase, and we are many years (probably decades) from being able to put it all together to make a microscopic machine that can move around under its own power and will, and perform other operations. The kinds of microscopic machines Kurzweil envisioned don’t exist in 2019, and by extension are not being used for any “commercial applications.”
Whew! That’s it for now. I’ll try to publish PART 2 of this analysis next month. Until then, please share this blog entry with any friends who might be interested. And if you have any comments or recommendations about how I’ve done my analysis, feel free to comment.
2019 Pew Survey showing that the overwhelming majority of American adults owned a smartphone or traditional PC. People over age 64 were the least likely to own smartphones. (https://www.pewresearch.org/internet/fact-sheet/mobile/)
“The current ways of trying to represent the nervous system…[are little better than] what we had 50 years ago.” –Marvin Minsky, 2013 (https://youtu.be/3PdxQbOvAlI)
The 2016 Nobel Prize in Chemistry was given to three scientists who had done pioneering work on nanomachines. (https://www.extremetech.com/extreme/237575-2016-nobel-prize-in-chemistry-awarded-for-nanomachines)
Recently, I read an internet article titled “Driverless Hotel Rooms: The End of Uber, Airbnb and Human Landlords.” In it, the author describes a vacation scenario from the year 2025, in which it is possible to live in luxurious “driverless hotel rooms” that can be stored in “modular skyscrapers.” Once a person goes there, autonomous drone food deliveries and laundry pick-up services can meet their every need. And best of all, through the magic of the “peerism economy,” everything is dirt cheap.
I read the article about three times, and each time, I peeled back another layer of “jargon” and eventually saw that the author’s future scenario was actually conventional in some respects and totally unrealistic in others. Analyzing it was an interesting exercise that actually yielded a few new insights about the future for me. But before I go into that, read the article for yourself:
First, let’s consider the description of the “driverless hotel room” and think about what it really is:
‘[After your plane lands in Sydney and you exit the airport] You giggle, then follow the augmented directions leading to a sleek driverless hotel room. It’s about the size of a mini bus but without the seats, steering wheel and engine. ‘
This futuristic “driverless hotel room” sounds remarkably similar to…a recreational vehicle. Specifically, since it’s described as being the size of a ‘mini bus,’ it sounds like a “B-class” RV. An example is this 2018 Winnebago Travato:
It goes on: ‘Inside is everything you’d expected. On the left, a couch seat that folds into a queen-sized bed with the push of a button. To the right, a small kitchenette with electric stove, running water, sink, microwave and bar fridge. Behind that is the detachable bathroom module with toilet, shower and wash basin.’
If the placement of the furniture and appliances is rearranged, that describes a Winnebago Travato:
The Travato’s bathroom is not ‘detachable,’ but aside from a small fuel efficiency penalty, I can’t see how having a fixed bathroom makes it any worse. If anything, the inconvenience of not always having a permanently attached bathroom that you can use without delay would probably make the article’s “driverless hotel room” worse than the 2018 Travato, but we’ll get to the “detachable” stuff in a minute.
For now, let’s stop and think about what the article has discussed so far. The futuristic-sounding “driverless hotel room” is actually just an RV, but with the self-driving capabilities we will surely have in 10 – 20 years. The concept is not exotic, nor does it take any ingenious insight to envision it: driverless cars are talked about in the news every day, and all the author of this article has done is taken the next step to say that RVs will someday have the technology, too. Hence, his use of “driverless hotel rooms” irks me because it makes the idea sound more abstruse than it really is. This sort of thing continues:
‘You look up at a lego-like modular skyscraper reaching high above the moonlit clouds. Your room docks with an electric skate and is elevated thirty stories up before slotting into a window-facing position.’
Translation: Your self-driving RV takes you to a multi-story parking garage and parks itself in one of the upper levels. Glimpse the future!
Anyway, the story continues. Since this is only the year 2025, you’re not a real Posthuman yet, so your feeble human body still has to ingest biomatter to survive.
‘”Hi there, welcome home. Hungry?” “I could go some pad thai and a beer thanks”, you respond. “That’ll be here in 6 minutes.” …Exactly 6 minutes later, a drone lands on the roof and lowers your order through a compartment in the ceiling. If you need to order any package you simply ask the room and a drone arrives; it even does laundry!’
Translation: Amazon quadcopter drones will transport small amounts of stuff to and from your temporary residence. While I’m sure that automated delivery of light cargo to your doorstep will be even faster, cheaper and more common in 2025 than now, I think we’ll be using self-driving cars for it, with small vehicles handling local deliveries of light loads like dinner and your laundry. Again, this is nothing exotic: we’d just have to replace pizza delivery guys with autonomous vehicles, and in one step, we’d be there. By describing this delivery service as being done by autonomous quadcopters instead of autonomous cars, the author again makes something conventional sound exotic.
However, this brings up an interesting side thought: If mobile delivery gets cheap enough thanks to autonomous vehicles (no wages for human drivers), then it might make financial sense for people to buy smaller and hence cheaper houses that lacked laundry rooms and kitchens, and to instead have their laundering and cooking needs handled by outside vendors. It’s already common for people in apartment buildings to not have private laundry machines, and they get by fine, so I think widespread use of automated laundry services is possible. However, I think it’s far less likely people will delete their kitchens (unless we’re discussing a distant future scenario where no one eats food since they’re Posthumans or robots), as it would actually be inconvenient to get every one of your meals delivered, even if doing so were faster and cheaper. Kitchens might get smaller, and only having a kitchenette will get more common, but people will not totally forsake their ability to refrigerate and cook food for themselves anytime soon.
‘One of the side panels opens smoothly to reveal a large adjoining living room module.
Extra modules are optional and can be requested ondemand: an extra bed, private gym, spa, snackbar, office and more. On various levels of the tower are cafes, restaurants, retail stores, entertainment areas, communal kitchens, laundromats, a gym and even a cinema.’
So far, this is the only truly original aspect of the hypothetical experience. It sounds like the author is saying that smaller, single-purpose autonomous RVs will, at your request, pull up next to your primary RV so that their sliding cargo doors align. Both of the doors will open, letting you step from one vehicle to another as if you were crossing a door threshold (would something like an accordion canopy extend around the two doors to keep out the cold air and bugs?). The idea of being able to customize a vacation home based on your needs is interesting, but why would this “modular” solution be cheaper or less of a hassle than renting a single RV that already had exactly what you needed inside? RVs already vary considerably in overall size, layout and features, so if you were on a working vacation, wouldn’t you just order an RV in the beginning that had an office desk and equipment? And why would you want to order a detachable private gym module when the scenario says the parking garage–er, lego-like modular skyscraper–has a built-in gym you can use?
In considering the value of this concept, let’s not make the mistake of giving it bonus points simply because it includes a bunch of technology. Instead, let’s ask a more essential question: How is this “future working vacation scenario” any better than a much lower-key scenario where you take a taxi to a normal hotel, and then pick among rooms of different sizes and luxury levels according to your needs and budget? Won’t the normal hotel also have all the same stuff–gyms, spas, snackbars, office space, cafes, restaurants, etc.–either inside of it or within walking distance of it? In touristy places and cities like Sydney, this is almost always the case. I don’t see how staying in the high-tech RV would be more pleasant or useful.
‘Luxury living at $30 per night.’
The author never explains why the price would be so low. On RVshare.com, I found an ad for a 2018 Travato rental, and it was $245 per night! Older, more beaten-up RVs in the same size range didn’t go for less than $139. The glorified parking garage would also surely charge RVs parked there a daily fee, just like today’s run-of-the-mill RV campgrounds, raising the overall cost even more. I don’t see how better technology or some kind of “peer economy” innovation could lower the cost point to anywhere near $30. The RVs themselves would still have to charge money for fuel, cleaning and sterilization services, insurance (even machines will get into accidents, and every 1,000th human tenant will somehow manage to burn the RV down), and taxes. I assure you that local and state governments will DEFINITELY cash in on this type of business if it ever gets common.
The parking garage might have low overhead if it uses robots instead of expensive human workers, but that still doesn’t get us to the $30 price point. Also, note that normal hotel buildings could also take advantage of the same automation technology to get rid of their human staff, which would keep them price-competitive with the autonomous RV/parking garage hotel setup. I’m not convinced the latter would be any cheaper than renting a nice hotel room that came with an office desk and chair.
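To make the point concrete, here’s a rough nightly cost floor. Every figure below is my own illustrative assumption, not data from the article or from any rental company:

```python
# Back-of-envelope nightly cost floor for an autonomous RV rental.
# All dollar figures are illustrative assumptions, not real pricing data.
costs_per_night = {
    "vehicle depreciation": 25.0,   # e.g. a ~$90k RV spread over ~10 years of heavy use
    "fuel or charging": 10.0,
    "cleaning and maintenance": 15.0,
    "insurance": 8.0,
    "parking / 'skyscraper' fee": 20.0,
    "taxes and overhead": 12.0,
}
floor = sum(costs_per_night.values())
print(f"Estimated cost floor: ${floor:.0f}/night")
```

Under these (if anything, charitable) assumptions, the operator’s costs alone land around $90 per night before any profit margin, which is why a $30 price point looks implausible no matter how much automation is in the mix.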
‘Your six week experience will be personalized to your precise ondemand preferences including invites to local communities, events and interest networks.’
This is much more plausible, and might be the one aspect of this hypothetical future vacation scenario that yields the clearest benefit. It stands to reason that as AIs gain better understandings of individual human tastes, they’ll be able to design vacation itineraries tailored to each person. This would benefit humans by saving them time doing vacation planning, and it would save them time and money during the vacations by steering them clear of attractions they probably won’t like.
Also, the article makes me realize that self-driving technology will have some real benefits for the RV industry and for the hobby’s popularity since it will eliminate the worst part of the travel experience–hours of highway driving. If people could spend that time doing something relaxing like watching videos or talking with their family members, the lifestyle would get more attractive, and most importantly, people would have more fun. Deleting the steering wheel, console and dashboard would also free up space that could be used for something else.
With the description of the vehicle and the user experience done, the author moves on to explaining how the autonomous RV/hotel garage paradigm will arise by 2025.
‘The image above is a screenshot of the thousands of new, unsold cars sitting at a dock in a town named Sheerness in the United Kingdom. This is one of hundreds of locations where new cars sit empty and unused. And while auto manufacturers typically keep a 60 day supply, US manufacturers hit a record high of over 4 million unsold vehicles in their inventory in 2016.
The issue of overproduction is a common crisis in Capitalism where more goods are produced than there are customers to consume them.’
This is a misdiagnosis of the problem: Car sales are cyclic, meaning sales spike during good economic times, then taper off as everyone who wants a new car gets one, and then get still lower during bad economic times. A textbook example of this cycle played out over the last 12 years, as car sales cratered during the Great Recession because so many people were unemployed, had their pay cut, or became temporarily cautious about spending money on luxuries. As the recovery hit its stride, the pent-up demand for new vehicles was unleashed, and car sales spiked.
By 2016, most of the Americans who wanted to buy new cars had bought them, and sales dropped. Yes, car companies overestimated demand, leading to a temporary glut of unsold cars at the time the article was written (2017), but sales projections are seldom perfect, and the inventory was eventually sold off. Photos of huge parking lots full of unsold, new cars might make good fodder for doomsday articles about the economy, but in context, they mean little. This wasn’t a serious problem or a “crisis in Capitalism.”
‘However if ondemand driverless vehicles come to fruition then your $10 Uber ride suddenly becomes a sub-$1 ride anywhere in the city.’
Again, the author provides no justification for such a massive price drop. Yes, Uber rides will get cheaper once the cars drive themselves and you don’t have to pay human drivers, but fuel costs, maintenance costs, and depreciation will not change. And even if autonomous cars are safer than human drivers, they won’t be perfectly safe, so there will always be some car insurance cost. Messes and damages caused by human passengers will need to be routinely cleaned, and that will also cost money.
Based on the actual cost breakdowns of Uber and regular taxi cab fares, “ondemand driverless vehicles” should be about 50% cheaper, not 90%.
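A rough decomposition shows why. The cost shares below are my own assumptions for illustration, not Uber’s actual figures, but they capture the basic point that the driver’s pay is only about half the fare:

```python
# Illustrative Uber fare decomposition (assumed shares, not official figures).
fare = 10.00
shares = {
    "driver pay": 0.50,                    # the only cost automation removes
    "fuel": 0.08,
    "maintenance and depreciation": 0.12,
    "insurance": 0.05,
    "platform fee and overhead": 0.25,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9

driverless_fare = fare * (1 - shares["driver pay"])
print(f"Driverless fare: ${driverless_fare:.2f}")
```

Removing the driver cuts the hypothetical $10 ride to about $5, a 50% drop. Getting to sub-$1 would require the fuel, maintenance, insurance, and overhead costs to nearly vanish too, and there’s no reason to think they will.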
‘At that point the appeal of owning a car will diminish for most of the population, thus creating a massive oversupply of unwanted human-driven vehicles.’
This is another fake problem. In reality, the transition to autonomous cars will take over 30 years, during which time the fraction of the vehicle fleet that is autonomous will grow while the fraction that is human-driven will shrink. Since cars typically last 10-15 years until they’re totaled (by a road accident, other accident like flood or vandalism, or by a mechanical problem that is too expensive to fix), obsolete, human-driven cars will steadily attrit out of the fleet during that long transition period, so there will be no “massive oversupply of human-driven vehicles.” Any excess of human-driven vehicles that builds up in rich countries like the U.S. could also be dealt with by exporting them to poorer countries where they’re still in demand. The international secondhand clothing trade is an instructive example of what will happen.
‘Given the forecasts of 2 billion vehicles on the roads by 2040, and considering driverless vehicles need only be idle while recharging, we can roughly calculate that only 100 million ondemand driverless vehicles will be required to replace all 2 billion human-driven vehicles.’
This is another prediction of a massive (95%) reduction in something that the author inadequately explains, and that is probably wrong. But I’ll bet the author’s reasoning is this:
1) There will be 2 billion vehicles in the year 2040.
2) The typical vehicle is idle 95% of the time, meaning it is parked in a driveway 95% of the time and is only being driven 5% of the time.
3) Therefore, doing some simple multiplication, I calculate that if everyone shared vehicles and used them in “shifts,” we could cut the number of vehicles by 95%, and 100 million autonomous taxis could provide the same level of mobility as 2 billion privately owned cars! You can check my math!
It seems simple, but is fatally flawed by the fact that demand for cars isn’t evenly spread out over each 24 hour day: there are peaks in the mornings and evenings when 50% of the population is moving to or from work or school, all at once. Unless you want to paralyze your economy and put your population into a state of daily aggravation, you won’t allow your country’s vehicle fleet to shrink below whatever level is needed to satisfy that peak daily demand. I guarantee a 95% reduction to the U.S. vehicle fleet would cause massive, daily disruption, even if the vehicles routed themselves with maximum efficiency.
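The flaw is easy to see if you size the fleet by peak concurrent demand instead of average utilization. The numbers below are illustrative assumptions, not forecasts:

```python
# Fleet sizing is driven by peak concurrent demand, not average utilization.
# Both percentages below are illustrative assumptions.
TOTAL_VEHICLES_TODAY = 2_000_000_000
AVG_UTILIZATION = 0.05        # a private car is being driven ~5% of the day
PEAK_SHARE_ON_ROAD = 0.25     # assumed fraction of car users on the road at once at rush hour

naive_fleet = TOTAL_VEHICLES_TODAY * AVG_UTILIZATION    # the article's ~100 million
peak_fleet = TOTAL_VEHICLES_TODAY * PEAK_SHARE_ON_ROAD  # what rush hour actually requires

print(f"Naive (average-utilization) estimate: {naive_fleet:,.0f}")
print(f"Peak-constrained estimate: {peak_fleet:,.0f}")
```

Even with a modest assumption about rush-hour concurrency, the required fleet comes out several times larger than the article’s 100 million figure. Averaging over the whole day smuggles in the assumption that trips can be rescheduled to 3 AM, which they can’t.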
That said, I can imagine a significant shrinkage of the global vehicle fleet happening thanks to greater use of telework. If people don’t have to drive to work each day, then there is obviously less need for them to own cars. In the more distant future, if machines render human workers obsolete, then the need would decline further. And in the REALLY distant future, when we’re all brains floating in jars connected to The Matrix, no one will need to physically travel anywhere for anything. You’ll just use virtual reality to indulge in whatever experience or vacation you want, without leaving your jar.
In my Terminator review and my analysis of what a fully-automated tank would look like, I mentioned that human-sized, general-purpose robots that can do the same physical tasks as humans will not necessarily look like humans, or even have humanoid body layouts (i.e. – head, large torso, two arms, two legs). I’d like to explore that idea in greater depth, and to offer educated guesses about what such robots would look like.
First, bear in mind that there are already countless numbers of robots in the world–overwhelmingly in factories and controlled work settings–and almost none of them are humanoid. Instead, their body shapes are entirely dictated by their narrow functions. For example, a robot that welds the seams between two sheets of metal comprising part of a car’s frame will resemble a giant arm and will have a welding torch for a hand. Since it is meant for use in a car factory assembly line where unfinished car frames will be delivered to it via conveyor belt, the robot won’t need to move from that spot, and hence won’t need legs or wheels. And since the act of welding a seam isn’t that complicated, it won’t need a giant computer brain, meaning it won’t have a head. Likewise, a robot designed to move supplies like medicine and linens throughout a hospital will take the form of a large, hollow box with wheels.
Even as robots get cheaper and more advanced in the coming decades and take over more jobs, the vast majority of them will continue looking nothing like humans, and will be designed for specific and not general tasks. Fully-autonomous vehicles, for example, will count as “robots,” but will not resemble humans.
That said, I think “overspecialization” of robot designs will prove inefficient, and that there will be niches for general-purpose robots in many areas of the economy and ordinary life. Some of these general-purpose robots will be about the same sizes as humans, but they won’t look exactly like us. Consider that the humanoid body layout is inherently unstable since it is top-heavy and only has two legs to balance on. If we had millions of bipedal, human-sized robots walking around and intermixing with us in many uncontrolled environments, there would be constant problems with them falling over (or being pushed over) and injuring or killing people. Something like a 250 pound Terminator made of hard metal would be a lawsuit waiting to happen.
Off the bat, it’s clear that general purpose robots can’t be so heavy that, if one fell on you, you would be seriously hurt and/or unable to push it off of your body. At the same time, they can’t be so light that they tip over when carrying everyday objects like full trashcans, or are even at risk of being toppled by wind gusts. Splitting the difference between the average weights of adult men and women gives us a figure of 180 lbs, which I think is a good upper limit to how much the robots could weigh.
Also off the bat, it’s clear that the general purpose robots should have the lowest practical centers of gravity and need to have soft exteriors to cushion humans against collisions. A low-hanging fruit helps us solve the first requirement: delete the robot’s head. This might sound very weird, but if we’re unbound by the constraints of biology and are designing a robot from metal and plastic starting from a clean slate, it makes perfect sense.
Since robots won’t eat, drink, or breathe, they won’t need mouths, noses, or any associated anatomical features found in human heads and necks. And since signals from the robot’s sensory organs would travel to its “brain” at the speed of light, there would be no advantage to clustering the eyes, ears, and brain together to reduce lag (thanks to the slowness of human nerve impulses, it takes about 1/10 of a second for an image or sound that has been detected by the eyes or ears to reach the brain), meaning the CPU could be moved into the torso. Doing that would lower the robot’s center of gravity and give the CPU more physical protection than our skulls provide our brains. (Distributing mental functions among several computer cores in different parts of the torso and even limbs would probably be an ideal setup since it would further improve survivability.)
In place of a neck and head, there might be a telescoping, flexible “stalk” or “tentacle” with sensory organs (camera lens, microphone) at its tip. It could extend and shorten, and swivel in any direction. By default, it would probably face forward and be raised to the same height as a typical human head so the robot could see the world from the same perspective as we do. The top of its torso might only be 4′ 10″ off the ground, but the stalk would rise up another foot. The sci-fi space film Saturn 3 had an evil robot named “Hector” with a crude tentacle like this in place of a head.
The last safety requirement that I mentioned, the need to have soft exteriors to cushion humans against collisions, could be satisfied by making their outer casings from a spongy material like silicone. However, I think it would probably be cheaper and just as effective to give the robots hard outer casings, but have them wear tight-fitting, padded clothes. The general-purpose robots would know how to wash their clothes in standard laundry machines and would periodically do so. Also, if the padding were made of the plastic foam found in life jackets, it would keep the robots from sinking to the bottom if they, say, fell into a swimming pool while cleaning it, or fell off the side of a fishing boat where they were part of the crew.
The need to protect people from accidental injury will also mean that general purpose robots will be made no faster or stronger than average humans. These limitations would be very helpful to us in a “robot uprising” scenario, but they’d be just as beneficial preventing many kinds of small, mundane accidents that could hurt people. For example, if your robot isn’t stronger than you, it can’t accidentally crush your hand by applying too much pressure during a handshake. If it can’t move faster than a jog, it can’t ever build up enough speed and momentum to collide with you with fatal force.
With these safety requirements in mind, it should be clear why the general-purpose “NS-5” robots in the movie I, Robot were unrealistic. There was no reason to give those robots superhuman speed, strength, agility, and explosive movement. Moreover, they all had hard exoskeletons and walked around “nude,” making them collision hazards. (On a side note, I also thought it was unrealistic that a single company–“U.S. Robotics”–would have an apparent monopoly on the humanoid robot market, and that all humans would own the same kind of robot. In reality, there will be many companies making them in the future, and there will be many different robot models and variants that will look different from one another, just as there’s great diversity in how cars look today.)
Now that I’ve covered the safety issues general-purpose robots will have to be designed to address, let’s move on to exploring the other requirements that will affect how they will look. Since they’ll have to navigate human-built environments like houses and to fit into vehicles designed for us, they will need legs instead of wheels so they can climb steps, arms and hands for opening doors and using tools, and they will need to be skinny and short enough to fit through standard-sized doorways. The requirement for them to be able to sit in chairs and climb over obstacles like low fences and fallen tree trunks will mean the size proportions of their limbs and bodies won’t be able to stray too far from those of humans. They will need fingers that are as thin as ours to type on keyboards and push standard-sized buttons, but they might not have five fingers per hand (it will be interesting to see what the optimal number turns out to be).
It wouldn’t cost much more money to make the joints in the robots’ fingers and everywhere else double-jointed, and they’d gain useful dexterity from such a feature, so I expect they will be. Pivot joints in the arms and legs would also allow for 360 degrees of rotation, further bolstering utility. At first I thought the general purpose robots would have a second set of arms–for a total of six limbs–so they could be more capable than humans, but then I realized how wasteful that would be since so few tasks require them. 99% of the time, the second set of arms would uselessly hang down off the robot’s body and be dead weight.
Then again, that 1% of the time when you do need the extra pair of hands to do something could warrant some kind of engineering compromise. The prehensile sensor stalks that stand in for heads on our general-purpose robots could elongate and grasp onto things, acting like weak third hands (our mouths do the same, and can hold small, light objects). Instead of, or in addition to that, the legs at the bottom of the robot could terminate in hands instead of feet like ours. Chimpanzees are like this, and many birds also have feet they use for grasping and walking. The setup would make it harder for the robots to run, and maybe less energy-efficient for them to walk, but we’ve already established we don’t want them to be able to run fast, and many of the tasks we’d use these robots for wouldn’t require large amounts of walking anyway (ex – robot butler in your house). Aside from giving them an extra pair of hands for those rare occasions when they need it, having hands as feet would let the robots pick things up from the ground, climb ladders more easily, and maintain better balance on uneven surfaces like roofs.
It almost goes without saying that the robots would be able to walk on all fours about as well as they could on two legs. If they weren’t carrying anything and were just going from one place to another, walking on all fours would be safest, since that would minimize the risk of them losing balance and crushing someone or breaking something. This is again reminiscent of chimps, and I think the robots might walk on their “knuckles” when on all fours to keep the palms of their hands clean and undamaged. And interestingly, in laying out this new requirement for optional quadrupedalism, the hypothetical general-purpose robot’s design has superficially converged with the real-life “Spot” robot, made by Boston Dynamics.
One thing I don’t like about Spot’s design is that its torso is a single, rigid piece. The general-purpose robots I’m envisioning–or at least the more advanced variants that will be fielded in the more distant future–will need segmented torsos that let them bend and lean a little in all directions. The flexibility of our spines lets us do this, helping us quickly make small postural adjustments to balance on two feet. The robots might not need anything as elaborate as a human back with its 33 vertebrae, and, as with the number of fingers, it will be interesting to see what the optimal (or merely sufficient) number of torso segments turns out to be.
Having a flexible torso, four hands, and four highly flexible limbs that could bend in more ways than ours can would also let the general-purpose robots comfortably touch any part of their own bodies, enabling them to self-repair, which would be an invaluable feature. The swiveling sensor stalk, plus tiny cameras built into other parts of the body like the hands and torso, would also let a robot see every part of itself (cameras built into the hands or fingers would additionally let it reach inside small, tight spaces and clearly see what is there, guiding the appendage, unlike humans, who must blindly feel around in such situations). Contrast this with us humans, who have a hard time touching and manipulating some parts of our bodies (like the spot between our shoulder blades) and who can’t see every part of our own bodies because we have only one set of eyes, set in a head with limited rotation.
On that note, having small cameras embedded throughout its body would also eliminate blind spots, which would improve safety since the robots wouldn’t be at risk of running into humans or objects they couldn’t see. Whereas human vision is confined to a forward-facing cone, the general-purpose robots would see in a 360-degree bubble. The tip of the head stalk might have the biggest and best camera, but losing it wouldn’t blind the robot.
Having “eyes” in the torso and on all four limbs, along with a distribution of its mind and power sources among multiple internal computers and batteries in each place, could enable such a robot to fix itself even if only one limb were operational and everything else were not. Again, this reminds me a bit of something I’ve seen in the animal kingdom, this time among certain insects and spiders. Because they have less-centralized nervous systems than we do, their limbs will keep moving after being severed, and, if they are cut in half across the torso, both halves will continue moving and reacting to stimuli.
Additionally, while the robots wouldn’t need to breathe, they should have an ability to suck in, retain, and expel air. This would allow them to duplicate the human abilities to blow out candles or blow dust off of things, and to make their bodies buoyant for floating in water. Of course, the engineering solutions that will let them do this could be totally different from human anatomy’s solutions. A small hole at the tip of one finger could be used to suck in and expel air, and it could be connected to a long tube leading to air sacs throughout the robot’s body, perhaps in places not analogous to where lungs sit in our bodies.
The robots would also need to be waterproof. This would save them from being expensively damaged or destroyed by something as simple as rain, and would let them periodically clean themselves off with soap and water. Even without sweat glands and shedding skin cells, robots would inevitably get dirty thanks to dust in the air, splatter from kitchen or bathroom chores, or even mold growth. Being able to use a regular shower or a bucket of water and a sponge to clean themselves would be a very important feature, in addition to their ability to clean their clothes.
Another crucial feature would be a built-in power cord that could plug into standard electrical outlets. It might be stored internally in a small, closed compartment, or might take the form of retractable prongs located in one of the hands or feet. I suspect that, rather than get in your way, general-purpose robots will be programmed to run around your house and do chores while you are away at work or school. That would also be safer, since it would eliminate any risk of the robots hurting you by accident while they were working. You would come home each day to a clean house and see your robot motionless in its designated corner or closet, plugged into an electrical outlet to recharge.
I’ve already mentioned the robots would need to have cameras and microphones to duplicate the human senses of sight and hearing, but they would also need to duplicate our sense of smell and taste to a degree. Those two senses can provide valuable information about the presence of poisonous gases, smoke, or spoiled food ingredients, and there are situations where a robot would be grossly ill-equipped to respond properly if it lacked them. Our multipurpose robots would thus need air sampling devices and some type of fluid analysis capability. The same technology found in smoke detectors, carbon monoxide detectors, and military poison gas detectors could stand in for a sense of smell. To crudely duplicate our sense of taste, the robot might have something like a litmus strip dispenser and water nozzle built into one of its hands. It could spray water on objects and then touch them with a strip to “taste” them.
The fifth human sense, touch, would need to be duplicated by pressure and temperature sensors distributed throughout the general purpose robot’s body. This feature would be simple to implement.
In conclusion, I predict there will be a future niche for “human-equivalent” robots that are general-purpose, human-sized, and can do all of the physical work tasks that we can do. That said, those robots will look very different from us, as they won’t be bound by the rules of biology or by the genetic path dependence that locks us into our human body layout. I’ve gone into depth describing one type of general-purpose robot, which could be described as a “headless humanoid.” However, I think robots with other types of body layouts could also fill the niche, perhaps including “centaurs”, “big ants”, and “dogs with one arm on their backs.” Just as there are many types of vehicles on the roads today that fulfill the same roles, I am sure there will be many types of general-purpose robots. I simply don’t have the time to envision and describe what each one could be like.
General-purpose, human-sized robots will of course not be the only kinds of robots we’ll mix with on a daily basis in the future, and in fact, I think they will be outnumbered by other, specific-purpose robots whose forms reflect their specialized functions. Self-driving cars and autonomous lawnmowers are good examples.
Finally, the general-purpose, human-sized robots must not be confused with androids, which will look identical to humans. I think the general-purpose robots will be used for jobs that don’t require anything more than superficial interaction with humans, like scrubbing toilets, restocking store shelves, and fixing appliances. Androids would be built to provide companionship, and to do service-sector jobs where warm and personable service was expected. If your beautiful android spouse broke, then your grubby, headless, weird-looking robot servant would fix it.
Interesting facts about the Space Shuttle:
-It was originally supposed to be a fully reusable, two-stage craft. That design would have been more expensive but probably better.
-The notion that the Shuttle would be a cheaper way to launch cargo into orbit than traditional rockets was never supported by data. Politicians just made it up to sell the idea to the public.
-The Soviet “Buran” craft was more advanced than the U.S. Shuttle, and fixed some of the latter’s known flaws.
https://gizmodo.com/the-space-shuttle-was-a-beautiful-but-terrible-idea-1842732042
Interesting details about the V-22:
-“Many of the challenges in developing and operating the V-22 are the result of designing a fairly large platform to operate within the confines of US Marine Corps amphibious ships. This caused several compromises, such as a smaller proprotor diameter, which increases the download and reduces the hover efficiency, and a shorter wing, which reduces the amount of lift and range.”
-“These engineering lessons and the lack of shipboard size constraints enabled Bell to reduce the downwash from the rotors, design the rotors to tilt from horizontal to vertical without rotating the engines, and improve the reliability and availability of components. The V-22’s downwash, or high velocity air from the two tilting proprotors producing 22,680 kg of thrust to keep the aircraft aloft, can damage objects or injure people below. It also means the Osprey must burn more fuel to hover.”
-“In addition, the V-22 required a rear-ramp exit to avoid hot-engine exhaust blasting onto ship decks and grassy landing zones. As the V-280’s engines do not rotate, this solves the hot engine exhaust issue, which can start brush fires, and means troops can ingress and egress via side doors.”
https://www.janes.com/article/95609/forty-years-on-from-the-v-22-s-conception-bell-applies-engineering-lessons-learned-to-the-v-280
An interesting video about the downsides of upgrading tanks. Adding weight in the form of applique armor or a bigger gun can push the tank’s engine and suspension past their design limits, increasing the odds of a breakdown. Drilling holes through tank armor to run new wires to create mounting points for gadgets can also make the armor much weaker. https://www.youtube.com/watch?v=PvSpMtulunU
Almost half of the French aircraft carrier Charles de Gaulle’s crew got infected with COVID-19. The ship’s crowded conditions proved ideal for disease transmission. https://apnews.com/fd1996b64f4cc3aeaa92b352bb7f5cce
For the first time on record, and probably for the first time since the era of Mao’s Mickey Mouse Economics, the Chinese economy shrank. The pandemic was the obvious cause. https://www.bbc.com/news/business-52319936
“The process of globalization, powerful as it is, could be substantially slowed or even stopped. Short of a major global conflict, which we regard as improbable, another large-scale development that we believe could stop globalization would be a pandemic…”
That is probably the most chillingly prescient passage from Mapping the Global Future, a report written 16 years ago by experts working for the U.S. National Intelligence Council, describing coming developments in geopolitics, culture, technology, and the economy out to 2020. With the year in question having arrived, I thought it was worthwhile to review the accuracy of its predictions, and overall, I was impressed. Mapping the Global Future correctly identified most of the megatrends that shaped the world from 2004-20 (though it was somewhat less accurate forecasting the degrees to which those factors would change things):
No significant expansion or strengthening of liberal democracy. From 2004-20, for every Myanmar there was a Turkey, and the number of “real” democracies across the world didn’t significantly change. Contrast this to the 15 years preceding the report’s publication, in which communism fell in Europe and Central Asia, along with many dictatorships in Latin America and Africa. The report’s authors correctly gauged that conditions were not ripe for another wave of international democratization.
Solid growth of global economy. The report failed to predict the Great Recession, but so did all other experts. Nevertheless, the report’s estimate that the 2020 gross world product (GWP) would be 80% larger than it was in 2000 was very close to being right: it actually rose by 74% (adjusted for inflation).
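To put that near-miss in perspective, the gap looks even smaller when the two figures are converted into implied average annual growth rates. A quick sketch, using only the percentages quoted above (the helper function name is my own):

```python
# Compare the report's predicted 2000-2020 gross world product growth (+80%)
# with the actual inflation-adjusted figure (+74%) by converting each total
# into a compound annual growth rate over the 20-year span.

def annualized_rate(total_growth: float, years: int) -> float:
    """Convert total growth over a period into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

predicted = annualized_rate(0.80, 20)  # report's forecast: +80% over 20 years
actual = annualized_rate(0.74, 20)     # realized: +74% over 20 years

print(f"Predicted: {predicted:.2%}/yr, actual: {actual:.2%}/yr")
```

The forecast works out to roughly 3.0% annual growth versus roughly 2.8% realized: an error of less than a fifth of a percentage point per year, compounded over two decades.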
Massive growth in China, and to a lesser extent, India. This was not the hardest prediction to make, though it should be noted that a minority of foreign policy experts in 2004 thought China might fall apart by 2020, probably thanks to political problems. I think the extent to which China’s growth (economic, military, technological, average living standards) ended up surpassing India’s would have surprised the authors.
Little or no weakening of Islamic extremism and terrorism. At this moment, there is a relative lull in the level of violence, but just three years ago, ISIS was at its peak, and nothing is stopping an “ISIS-level” resurgence of Islamic violence (Africa is likeliest to be the next hotbed). While the U.S. has dodged a sequel to 9/11, the total number of people killed worldwide by Muslim fanatics might actually be higher now than it was in 2004. The conditions that gave rise to Islamic terrorism in 2004 still exist in large parts of the world. Finally, the report made the frighteningly accurate predictions that al Qaeda would be replaced by new terrorist groups (ISIS and Boko Haram), and that the formation of an Islamic caliphate spanning multiple countries was even possible.
Very low likelihood of war between the great powers. Russia, China, and the U.S. didn’t even come close to fighting. A lot of ink has been spilled since 2004 about accidents–like U.S. and Russian planes shooting each other down over Syria–spiraling into all-out war, but I think cooler heads would have prevailed.
Weakening of U.S. global supremacy. The report correctly predicted that the U.S. would still be the world’s strongest country overall in 2020, but the gap between it and its nearest competitors–chiefly China–would be narrower. It was also right to forecast the weakening of the U.S.-led international banking and trade system.
Backlash against globalization and concomitant rise of populism and nationalism. From the election of Donald Trump, to Brexit, to the breakdown of the Doha Free Trade talks and the Trans-Pacific Partnership, to near-constant angst over the erosion of the middle class due to outsourcing and illegal immigrant laborers, to the rise of chauvinist strongmen across the world, we see clear proof of these trends. The struggle between liberal globalists and conservative nationalists became THE cultural and political fissure during the 2004-20 time frame.
Major impact of internet on culture, self-identity, business, and other aspects of life. As the report predicted, the expansion of the internet to most of the human race has empowered global movements like the Arab Spring, fragmented and upended the news media landscape, and facilitated the rise of more complex human identities and group loyalties that transcend national borders, making national governance and consensus-forming harder.
World vulnerability to pandemic. This isn’t explored in great detail, but the report makes it clear that the threat of a pandemic bad enough to halt globalization is real.
Of course, the report also had a few failed predictions and omissions, which are important to mention, but in my opinion, outweighed by what the report got right:
Didn’t foresee the Great Recession. I noted this before, and also how it had little effect on the report’s accuracy forecasting 2000-2020 global wealth growth. The report’s authors were also in good company, since no expert in 2004 predicted the Great Recession.
Didn’t foresee fracking. While the report doesn’t predict anything as calamitous as the world running out of oil by 2020, it says that oil prices could be significantly higher than they were in 2004 due to tighter supplies, leading to the usual fare of anxieties, political problems, and small-scale wars. Had fracking not been invented, this could well have been the case. Fracking has revolutionized the global energy landscape by boosting oil and natural gas supplies well beyond what almost all energy experts thought possible in 2004. More than anything, this failure should highlight the perils of trying to predict the future of the energy markets.
Didn’t foresee Venezuela’s near-implosion (could it still happen?). To be fair, Venezuela’s economy collapsed because its socialist government badly mismanaged the oil industry after nationalizing it, and because fracking then caused a sharp drop in world oil prices. The report’s experts couldn’t have foreseen how bad the mismanagement would get, and as noted, they also didn’t predict the rise of fracking.
Thought North Korea would “come to a head.” It’s unclear what the report’s authors were envisioning here (North Korea democratization? North Korea chaotic implosion? One Korea–possibly with the help of a superpower ally–annexing the other?), other than the status quo of a divided Korean peninsula with a hostile dictatorship in the North ending by 2020. That didn’t happen, and it’s crucial to remember that there’s a clear and now long-running pattern of “experts” making wrong predictions about this. (https://www.theatlantic.com/international/archive/2012/08/the-long-history-of-wrongly-predicting-north-koreas-collapse/260769/) It raises the possibility that North Korea could continue to endure for much longer than we expect, in spite of the reports of how brittle and strange the regime is and how desperate its citizens are.
Thought Taiwan would “come to a head.” The authors surely meant either a successful Taiwanese declaration of independence or annexation to China (probably by force). This also didn’t happen, and can also be added to the long list of wrong predictions about this issue.
Russia predictions were not great, not terrible. While the report’s authors correctly predicted that corruption, lack of foreign investment, population shrinkage, and conflicts with its neighbors would leave Russia “stuck in neutral” in terms of absolute power and declining in terms of relative global stature, they didn’t predict how badly relations would deteriorate with the West, and foresaw Central Asia as Russia’s likeliest battleground when in fact it was Ukraine and the Caucasus. My guess is that they underestimated how skillful of a leader Putin would turn out to be, and also underestimated the Russians’ resolve to not let any more of their satellite states slip away to the Western camp.
Overestimated the risks of bioterrorism and nuclear terrorism. Contrary to the report’s fears, no terrorists have used, or to our knowledge obtained, biological or nuclear weapons since 2004. Overestimating the threat is understandable given the contemporaneous problem of loose Russian nuclear weapons and widespread fear of and misinformation about bioterrorism following the 2001 Anthrax Attacks. Russia’s recovery from the chaotic 1990s allowed them to secure all of their nuclear weapons, and biological weapons are actually much harder to create and successfully use than popular fiction and biased “experts” who got most of the attention around 2004 led the public to believe. (Note: Unfortunately, I think weaponized COVID-19 could make bioterrorism much likelier)
Thinking about what the expert authors of Mapping the Global Future got right and wrong leads me to the following general conclusions about the course of world events, and about making predictions:
The status quo is strong. Slow, plodding megatrends and entrenched systems are very resistant to change, regardless of how outdated, suboptimal, or undesirable they may be. The fact that hand-wringing and doomsaying about issues like the divided Korean peninsula, contested status of Taiwan, unsustainable European welfare states, American global primacy, and nation-state model has been going on for decades without resolution should give us pause whenever we hear someone predict a shift in some paradigm. The “inevitability” of another American Civil War is a good example. The stodgy status quo is probably stronger and more resilient to shocks than you think, can ruthlessly destroy upstarts, and might be able to use little reforms to muddle its way through some problem that was widely believed to be unsolvable and fatal.
Some dictatorships are smart. Though the report was upbeat about China’s prospects, if anything, it underestimated how strong the country and its regime would become by 2020. China has of course averted collapse, and its communist government has skillfully suppressed democracy and ethnic minority discontent. In short, the dictatorship proved smarter and more competent than even most experts thought in 2004. The use of technology for mass surveillance will entrench it even more in the future. The report’s authors would also have been surprised at how nimble and strong of a leader Putin proved to be, and how well he’s played his country’s diminished hand on the world stage.
Not everyone is ready for democracy. The report correctly recognized that conditions were not right for significant expansions of liberal democracy from 2004-20. The disappointing results of the democratization experiments the U.S. ran in Afghanistan and Iraq, the failure of the Arab Spring, and the rise–with majority voter support–of populist strongmen across the world have been valuable, if painful, reminders that not every group of people is ready for or wants liberal democracy. Growing political dysfunction in the U.S. is also damaging the brand.
Rational actors are in charge and they suck the fun out of everything. The hard truth is that every major country, including the U.S., China, Russia, and even North Korea, is led by a rational actor–or, more accurately, by groups of people who cancel out each other’s worst ideas so that the resulting consensus decisions are adequately rational and informed. They all have an accurate grasp of the world and of their own interests, and base their key decisions on cost-benefit calculations, which is why North Korea doesn’t invade the South, China doesn’t invade Taiwan, the U.S. and Russia don’t start WWIII, etc.
Expert views are good, and usually better than non-experts, but never perfect. As I wrote earlier, I was impressed with the overall accuracy of the report’s predictions, and think the things they got right in aggregate outweigh the things they got wrong. The report’s accuracy probably owes mostly to the fact that it solicited views from “25 leading outside experts from a wide variety of disciplines and backgrounds to engage in a broad-gauged discussion with Intelligence Community analysts.” In other words, experts were invited to make predictions about things in their areas of expertise, which is Rule #1 in my Rules for Good Futurism.
In conclusion, I enjoyed this report and think the authors used a sound methodology for making future predictions. As a result, I’m planning to write a blog analysis of the latest sequel, the DNI’s 2017 Global Trends: Paradox of Progress, which predicts world events out to 2035.
If you’re interested in learning more about the 2020 report, read my notes on it below and key quotes I copied (which I’ve organized by country and subject), or read the report in full.
“The United States, too, will see its relative power position eroded, though it will remain in 2020 the most important single country across all the dimensions of power.” Yes, but an easy prediction to make.
“While no single country looks within striking distance of rivaling US military power by 2020…” Right.
“US dependence on foreign oil supplies also makes it more vulnerable as the competition for secure access grows and the risks of supply side disruptions increase.” Missed fracking! Also mentioned this in a non-U.S. section: “Thus sharper demand-driven competition for resources, perhaps accompanied by a major disruption of oil supplies, is among the key uncertainties.”
East and South Asia.
Right about rapid growth in China and India. Report correctly predicted that China would grow faster than India from 2005-20. Size of that gap might have surprised them. Not a good idea to constantly mention “China and India” together.
Predictions about huge growth in China’s middle class, overall purchasing power, and standards of living (like car ownership levels and frequency of overseas travel) were right.
“Meanwhile, the crisis over North Korea is likely to come to a head sometime over the next 15 years.” Another in a long history of failed predictions about its collapse.
“The possession of chemical, biological, and/or nuclear weapons by Iran and North Korea and the possible acquisition of such weapons by others by 2020 also increase the potential cost of any military action by the US against them or their allies.” North Korea did first nuclear test in October 2006. Iran has been dissuaded thanks to hardball diplomacy and direct intervention (nuclear computer virus, assassinations of leading people)–for now.
“By 2020, globalization could be equated in the popular mind with a rising Asia, replacing its current association with Americanization.” Accurate. The U.S. is retrenching under Trump, but China’s global reach is still expanding through its Belt and Road Initiative (created in 2013) and other large investments in Africa and almost everywhere else.
“What Would An Asian Face on Globalization Look Like? …Asian finance ministers have considered establishing an Asian monetary fund that would operate along different lines from IMF, attaching fewer strings on currency swaps and giving Asian decision-makers more leeway from the ‘Washington macro-economic consensus.’” China founded the Asian Infrastructure Investment Bank in 2015 as a direct rival to the IMF. “An expanded Asian-centric cultural identity may be the most profound effect of a rising Asia. Asians have already begun to reduce the percentage of students who travel to Europe and North America with Japan and—most striking—China becoming educational magnets. A new, more Asian cultural identity is likely to be rapidly packaged and distributed as incomes rise and communications networks spread. Korean pop singers are already the rage in Japan, Japanese anime have many fans in China, and Chinese kung-fu movies and Bollywood song-and-dance epics are viewed throughout Asia. Even Hollywood has begun to reflect these Asian influences—an effect that is likely to accelerate through 2020.” U.S. pop culture still reigns supreme globally, and in spite of spending huge amounts of money, China has had little success making films, music, or other cultural products that outsiders like. However, China’s influence has grown anyway, and disturbing examples include the recent, high-profile instances of China pressuring U.S. sports and entertainment companies to self-censor.
“The regional experts felt that the possibility of major inter-state conflict remains higher in Asia than in other regions. In their view, the Korean Peninsula and Taiwan Strait crises are likely to come to a head by 2020, risking conflict with global repercussions. At the same time, violence within Southeast Asian states—in the form of separatist insurgencies and terrorism—could intensify. China also could face sustained armed unrest from separatist movements along its western borders.” The crises did not come to a head! Important to pay attention to these failed predictions. Maybe they’ll continue to fail forever, and there will not be violent resolutions to Korea and Taiwan (expert predictions about inevitable U.S.-Soviet war were also wrong). The insurgency in Xinjiang did worsen, but China crushed it with martial law and reeducation camps. Russians also crushed Chechen insurgency. Sad testimony about the effectiveness of government repression? Even more effective in the future thanks to mass surveillance tech?
“Asia is particularly important as an engine for change over the next 15 years…Both the Korea and Taiwan issues are likely to come to a head, and how they are dealt with will be important factors shaping future US-Asia ties as well as the US role in the region…Japan’s position in the region is also likely to be transformed as it faces the challenge of a more independent security role.” None of that happened. Japan never transitioned from its isolationist, defensive posture to an international role that was more active and independent of the U.S. Japan’s alliance with the U.S. remains its most important and defining interstate relationship.
“China and India, which lack adequate domestic energy resources, will have to ensure continued access to outside suppliers; thus, the need for energy will be a major factor in shaping their foreign and defense policies, including expanding naval power. …Beijing’s growing energy requirements are likely to prompt China to increase its activist role in the world—in the Middle East, Africa, Latin America, and Eurasia. In trying to maximize and diversify its energy supplies, China worries about being vulnerable to pressure from the United States which Chinese officials see as having an aggressive energy policy that can be used against Beijing.” Correct. A big reason for the Belt and Road Initiative is to secure oil and gas supply lines from the Middle East and Central Asia to China. China also launched its first aircraft carrier in 2012 and has sharply expanded and improved its navy since then. While some worry the navy is being built up to take over Taiwan, its equally important purpose will be to protect the oil shipping lanes that run from the Persian Gulf to China’s coast.
China’s sex ratio imbalance has not caused major problems as the report suggested might happen. Again, China proved more stable and its government more able to deal with problems than outsiders worried.
Report’s hopes of China taking steps towards democracy were dashed. Instead, Chinese government has effectively placated its populace with economic growth, security, and propagandization. China’s success has put forth what might be a viable political / economic / social alternative to Western liberal democracy, and I believe the former’s appeal is one reason why global democratization has slowed. Dictators see there is another way.
“The so-called “third wave” of democratization may be partially reversed by 2020—particularly among the states of the former Soviet Union and in Southeast Asia, some of which never really embraced democracy.” It happened. The Baltic states remain firmly democratic, Ukraine is a dysfunctional democracy where life is bad for most people, and all the others are undemocratic. Also, in SE Asia, Thailand’s democracy failed but Myanmar’s blossomed. No overall trend.
Correctly predicted that Russia would be stuck in neutral thanks to demographic decline, corruption, lack of foreign investment, and problems with its neighbors. However, incorrectly predicted that the conflicts would be with its Central Asian neighbors and about radical Islam, when in fact Russia fought with Ukraine and Georgia over geopolitics. (Not the only set of experts from that era who worried about Central Asian stability. Were they all fundamentally wrong, or has the problem just been delayed thanks to luck or some other temporary factor?) Russia’s relations with West got much worse than the report predicted thanks to the latter not tolerating the aggression. The report seems to have underestimated how fast Russia would recover from the torpor of the 90s, and its determination to not let more satellite states slip away to the West.
“In the view of the experts, Central Asian states are weak, with considerable potential for religious and ethnic conflict over the next 15 years. Religious and ethnic movements could have a destabilizing impact across the region.” Hasn’t happened…yet. Broader trend I’m seeing is underestimation of how powerful and competent secular dictatorships are at stamping out dissent. Look at failure of Arab Spring, particularly how it was crushed in Bahrain, and at how the military restored the status quo ante in Egypt. Also note the failure of the Iranian uprisings.
“Eurasia, especially Central Asia and the Caucasus, probably will be an area of growing concern, with its large number of potentially failing states, radicalism in the form of Islamic extremism, and importance as a supplier or conveyor belt for energy supplies to both West and East. The trajectories of these Eurasian states will be affected by external powers such as Russia, Europe, China, India and the United States, which may be able to act as stabilizers. Russia is likely to be particularly active in trying to prevent spillover, even though it has enormous internal problems on its own plate. Farther to the West, Ukraine, Belarus, and Moldova could offset their vulnerabilities as relatively new states by closer association with Europe and the EU.”
“If Russia fails to diversify its economy, it could well experience the petro-state phenomenon of unbalanced economic development, huge income inequality, capital flight, and increased social problems.” It happened. Russians have rallied around Putin, however, and have endured the effects of Western sanctions admirably. Part of this owes to the effectiveness of Russian government propaganda at convincing Russians to suffer for Putin’s causes. It sounds like the report underestimated him in 2004.
Europe
“The EU, rather than NATO, will increasingly become the primary institution for Europe, and the role which Europeans shape for themselves on the world stage is most likely to be projected through it.” Right!
The report’s skepticism that an E.U. army would be created by 2020 was justified. Europeans still have serious problems with military cooperation.
“Over the next 15 years, West European economies will need to find several million workers to fill positions vacated by retiring workers. Either European countries adapt their work forces, reform their social welfare, education, and tax systems, and accommodate growing immigrant populations (chiefly from Muslim countries) or they face a period of protracted economic stasis that could threaten the huge successes made in creating a more United Europe.” They didn’t solve the problem, they have protracted economic stasis, and they have sharply slowed the creation of a more United Europe.
“The experts felt that the current welfare state is unsustainable and the lack of any economic revitalization could lead to the splintering or, at worst, disintegration of the European Union, undermining its ambitions to play a heavyweight international role.” Brexit!
Latin America
“Populist themes are likely to emerge as a potent political and social force, especially as globalization risks aggravating social divisions along economic and ethnic lines. In parts of Latin America particularly, the failure of elites to adapt to the evolving demands of free markets and democracy probably will fuel a revival in populism and drive indigenous movements, which so far have sought change through democratic means, to consider more drastic means for seeking what they consider their “fair share” of political power and wealth.” Definitely happened.
Report’s short section on Latin America failed to predict Venezuela’s near-implosion.
Muslim world and Islam
“In particular, political Islam will have a significant global impact leading to 2020, rallying disparate ethnic and national groups and perhaps even creating an authority that transcends national boundaries.” This is an eerily accurate description of ISIS. Since the group was mostly destroyed, the overall threat posed by political Islam is lower today than it was in 2004, though it’s unclear if conditions will hold.
“The key factors that spawned international terrorism show no signs of abating over the next 15 years. Facilitated by global communications, the revival of Muslim identity will create a framework for the spread of radical Islamic ideology inside and outside the Middle East, including Southeast Asia, Central Asia and Western Europe, where religious identity has traditionally not been as strong.” The problem has stayed overwhelmingly confined to the Middle East and South Asia. Islamic terrorists have staged high-profile attacks in Europe, but the resulting deaths were dwarfed by the number killed in the Middle East and South Asia.
“Democratic progress could gain ground in key Middle Eastern countries, which thus far have been excluded from the process by repressive regimes. Success in establishing a working democracy in Iraq and Afghanistan—and democratic consolidation in Indonesia—would set an example for other Muslim and Arab states, creating pressures for change.” No real success. Iraq and Afghanistan are highly corrupt democracies that would collapse without direct U.S. military support. Tunisia became democratic, but I have doubts about its long-term survival.
“Reports of growing investment by many Middle Eastern governments in developing high-speed information infrastructures, although they are not yet widely available to the population nor well-connected to the larger world, show obvious potential for the spread of democratic—and undemocratic—ideas.” This happened. The Arab Spring was the “social media revolution,” and ISIS spread its crazed ideas, snuff videos, and terrorist training materials via the internet.
“Most of the regions that will experience gains in religious “activists” also have youth bulges, which experts have correlated with high numbers of radical adherents, including Muslim extremists.
Youth bulges are expected to be especially acute in most Middle Eastern and West African countries until at least 2005-2010, and the effects will linger long after.
In the Middle East, radical Islam’s increasing hold reflects the political and economic alienation of many young Muslims from their unresponsive and unrepresentative governments and related failure of many predominantly Muslim states to reap significant economic gains from globalization.
The spread of radical Islam will have a significant global impact leading to 2020, rallying disparate ethnic and national groups and perhaps even creating an authority that transcends national boundaries. Part of the appeal of radical Islam involves its call for a return by Muslims to earlier roots when Islamic civilization was at the forefront of global change. The collective feelings of alienation and estrangement which radical Islam draws upon are unlikely to dissipate until the Muslim world again appears to be more fully integrated into the world economy.”
The report contains a hypothetical 2020 letter between Muslim fanatics discussing the recent rise of an Islamic caliphate in the Sunni regions of Iraq, and its war against Shi’ites and U.S. military forces. The fictitious letter also says the conflict spurred a million Middle Eastern refugees to flee to the Western world. This is a frighteningly accurate description of actual events in the Middle East and Europe during the 2010s.
“We expect that by 2020 al-Qa’ida will be superceded by similarly inspired Islamic extremist groups, and there is a substantial risk that broad Islamic movements akin to al-Qa’ida will merge with local separatist movements.” Excellent prediction. ISIS and Boko Haram meet the description.
Global terrorism and organized crime
“Strong terrorist interest in acquiring chemical, biological, radiological and nuclear weapons increases the risk of a major terrorist attack involving WMD. Our greatest concern is that terrorists might acquire biological agents or, less likely, a nuclear device, either of which could cause mass casualties. Bioterrorism appears particularly suited to the smaller, better-informed groups. We also expect that terrorists will attempt cyber attacks to disrupt critical information networks and, even more likely, to cause physical damage to information systems.” Terrorists have evidently made no progress on this, though the coronavirus pandemic’s damage will surely inspire terrorists to try harder.
“Over the next 10 to 20 years there is a risk that advances in biotechnology will augment not only defensive measures but also offensive biological warfare (BW) agent development and allow the creation of advanced biological agents designed to target specific systems—human, animal, or crop.” No evidence it happened, though the chaos caused by coronavirus could inspire terrorist groups and crazed individuals to focus on BW. It is possible that Russia, China and other states have used new technology to secretly create deadlier bioweapons. Such weapons programs remain beyond the means of terrorists, but could be supported and concealed by a competent government.
Thankfully, terrorists never got WMDs as the report feared. However, they still wreaked enormous havoc with conventional weapons and tactics: terrorists have killed about 200,000 people since 2004.
“If the growing problem of abject poverty and bad governance in troubled states in Sub-Saharan Africa, Eurasia, the Middle East, and Latin America persists, these areas will become more fertile grounds for terrorism, organized crime, and pandemic disease. Forced migration also is likely to be an important dimension of any downward spiral. The international community is likely to face choices about whether, how, and at what cost to intervene.” Yes, this happened: Muslim fundamentalist groups like Boko Haram plague Africa, Mexican drug cartels are worse than ever, and waves of refugees have headed to the U.S. and Europe.
“While vehicle-borne improvised explosive devices will remain popular as asymmetric weapons, terrorists are likely to move up the technology ladder to employ advanced explosives and unmanned aerial vehicles.” Terrorists have tried many times to kill people with UAVs, but have been unsuccessful. Our luck won’t hold forever. In 2018, a drone was also used in an attempted assassination of Venezuelan president Maduro.
“We expect that terrorists also will try to acquire and develop the capabilities to conduct cyber attacks to cause physical damage to computer systems and to disrupt critical information networks.” Many small-scale attacks have happened, but we’re still waiting for The Big One. The ability of computer hackers to do things like cause nuclear meltdowns or disable national electric grids has been exaggerated.
“A key cyber battlefield of the future will be the information on computer systems themselves, which is far more valuable and vulnerable than physical systems. New technologies on the horizon provide capabilities for accessing data, either through wireless intercept, intrusion into Internet-connected systems, or through direct access by insiders.” This definitely happened. Since 2004, there have been too many big hacking incidents, in which troves of sensitive data and electronic assets were stolen. Also remember the high-profile data dumps on Wikileaks, including those courtesy of Edward Snowden.
“Organized crime is likely to thrive in resource-rich states undergoing significant political and economic transformation, such as India, China, Russia, Nigeria, and Brazil as well as Cuba, if it sees the end of its one-party system.” If Boko Haram is considered a mafia, then organized crime did indeed get quite bad in Nigeria. It didn’t happen in the other countries, though. Brazil is about as bad as ever. The report missed Mexico becoming a global center of organized crime; cartel activity and the national murder rate shot up a few years after the report was published.
Globalization, nationalism and populism
“Some aspects of globalization—such as the growing global interconnectedness stemming from the information technology (IT) revolution—almost certainly will be irreversible. Yet it is also possible, although unlikely, that the process of globalization could be slowed or even stopped, just as the era of globalization in the late 19th and early 20th centuries was reversed by catastrophic war and global depression.” Globalization has definitely slowed. Consider Trump’s election, Brexit, growing resistance among Europeans to strengthening the E.U., the death of free trade deals like Doha, and Russia’s growing isolation and hostility.
“The transition will not be painless and will hit the middle classes of the developed world in particular, bringing more rapid job turnover and requiring professional retooling. Outsourcing on a large scale would strengthen the antiglobalization movement. Where these pressures lead will depend on how political leaders respond, how flexible labor markets become, and whether overall economic growth is sufficiently robust to absorb a growing number of displaced workers.” Yes, this is now a major political issue throughout the world. It’s unclear if the U.S. has permanently changed course or if Trump’s election just hit the Pause button on the U.S. outsourcing more jobs and importing more immigrant labor.
“Currently, about two-thirds of the world’s population live in countries that are connected to the global economy. Even by 2020, however, the benefits of globalization won’t be global. Over the next 15 years, gaps will widen between those countries benefiting from globalization—economically, technologically, and socially—and those underdeveloped nations or pockets within nations that are left behind. Indeed, we see the next 15 years as a period in which the perceptions of the contradictions and uncertainties of a globalized world come even more to the fore than is the case today.” Yes. Note the rise of populist, nationalist political parties and talking heads, and the new, near-constant focus on “inequality” in the press.
“What Could Derail Globalization? The process of globalization, powerful as it is, could be substantially slowed or even stopped. Short of a major global conflict, which we regard as improbable, another large-scale development that we believe could stop globalization would be a pandemic…”
World economy
The report gives figures for “GNP,” but the metric is now known as “GNI.”
“Barring such a turn of events, the world economy is likely to continue growing impressively: by 2020, it is projected to be about 80 percent larger than it was in 2000, and average per capita income will be roughly 50 percent higher. Of course, there will be cyclical ups and downs and periodic financial or other crises, but this basic growth trajectory has powerful momentum behind it.” The report missed the 2008 Great Recession, but then again, so did everybody. Regardless, the estimate was basically right: gross world product (GWP) was $50 trillion in 2000 and $87 trillion in 2019, meaning it grew 74% (note: figures are adjusted for inflation). The extra 6 percentage points of growth we failed to achieve might owe to the Great Recession.
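For transparency, here is the arithmetic behind those percentages as a quick sketch (the $50 trillion and $87 trillion GWP figures are the rounded, inflation-adjusted estimates quoted above):

```python
# Checking the report's ~80% growth projection against actual figures.
gwp_2000 = 50  # gross world product, trillions of USD (inflation-adjusted)
gwp_2019 = 87  # trillions of USD

actual_growth_pct = (gwp_2019 / gwp_2000 - 1) * 100
print(f"Actual growth, 2000-2019: {actual_growth_pct:.0f}%")  # 74%

# The report projected the world economy would be about 80% larger by 2020.
predicted_growth_pct = 80
shortfall = predicted_growth_pct - actual_growth_pct
print(f"Shortfall vs. prediction: {shortfall:.0f} percentage points")  # 6
```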
Technology
“The Internet in particular will spur the creation of even more global movements, which may emerge as a robust force in international affairs.” The Arab Spring was driven by young people with cell phones and social media. More generally, social media empowers people to organize and petition about all kinds of things, big and small, and to effectively pressure powerful people to do things.
“Moreover, future technology trends will be marked not only by accelerating advancements in individual technologies but also by a force-multiplying convergence of the technologies—information, biological, materials, and nanotechnologies—that have the potential to revolutionize all dimensions of life. Materials enabled with nanotechnology’s sensors and facilitated by information technology will produce myriad devices that will enhance health and alter business practices and models. Such materials will provide new knowledge about environment, improve security, and reduce privacy. Such interactions of these technology trends—coupled with agile manufacturing methods and equipment as well as energy, water, and transportation technologies—will help China’s and India’s prospects for joining the “First World.” Both countries are investing in basic research in these fields and are well placed to be leaders in a number of key fields. Europe risks slipping behind Asia in creating some of these technologies. The United States is still in a position to retain its overall lead, although it must increasingly compete with Asia and may lose significant ground in some sectors.” What are “nanotechnology’s sensors”? I can’t really assess the prediction without knowing what that means. The smartphone revolution happened after this was written, and the devices contain many sensors that “have nanotechnology.” Neither China nor India is in the First World yet, but the former has made major strides improving its technology and has even taken the lead in some niches.
“New technology applications will foster dramatic improvements in human knowledge and individual well-being. Such benefits include medical breakthroughs that begin to cure or mitigate some common diseases and stretch lifespans, applications that improve food and potable water production, and expansion of wireless communications and language translation technologies that will facilitate transnational business, commercial, and even social and political relationships.” The predicted computer-related advances happened, but progress in medical technology has been disappointing. Over the last 16 years, we’ve discovered that biology is messier, more complex, and less amenable to manipulation than software.
“The media explosion cuts both ways: on the one hand, it makes it potentially harder to build a consensus because the media tends to magnify differences; on the other hand, the media can also facilitate discussions and consensus-building.” The first effect has outweighed the second, and misinformation, disagreement, and social fragmentation have probably never been worse. The authors couldn’t have known.
“Growing connectivity also will be accompanied by the proliferation of transnational virtual communities of interest, a trend which may complicate the ability of state and global institutions to generate internal consensus and enforce decisions and could even challenge their authority and legitimacy. Groups based on common religious, cultural, ethnic or other affiliations may be torn between their national loyalties and other identities. The potential is considerable for such groups to drive national and even global political decisionmaking on a wide range of issues normally the purview of governments.” Accurate. It has made people more tribal and fragmented.
Misc.
“The likelihood of great power conflict escalating into total war in the next 15 years is lower than at any time in the past century, unlike during previous centuries when local conflicts sparked world wars.” Quite true. I think it will get slightly higher over the next 15 years as China closes some of the military power gap with the U.S.
“Countries without nuclear weapons—especially in the Middle East and Northeast Asia—might decide to seek them as it becomes clear that their neighbors and regional rivals are doing so.” There have been no concrete steps in that direction. The U.S. has successfully assured Japan and South Korea they are under its nuclear umbrella, so they haven’t started their own nuclear programs in response to North Korea getting the bomb. Also, since Iran has been dissuaded/blocked from building nukes (this counter-effort was probably more successful than the report authors would have predicted), its neighbors haven’t tried building their own.
“Both North Korea and Iran probably will have an ICBM capability well before 2020” North Korea does; Iran does not.
“By 2020, China and Nigeria will have some of the largest Christian communities in the world, a shift that will reshape the traditionally Western-based Christian institutions, giving them more of an African or Asian or, more broadly, a developing world face.” I don’t think this happened.
“Over the next 15 years, democratic reform will remain slow and imperfect in many countries due to a host of social and economic problems, but it is highly unlikely that democracy will be challenged as the norm in Africa.” Was right.
“We foresee a more pervasive sense of insecurity, which may be as much based on psychological perceptions as physical threats, by 2020. The psychological aspects, which we have addressed earlier in this paper, include concerns over job security as well as fears revolving around migration among both host populations and migrants.” I wholeheartedly agree that a large share of today’s popular anxiety is psychological and not tangible in nature. Threats are commonly being exaggerated and even manufactured to keep average people fearful, tragically distracting them from the fact that this is the best time to be alive in human history for most types of people. The cause is a toxic nexus between the darker aspects of human nature and the profit-driven incentives of news media outlets.
The concept of an intelligent machine uprising dates to 1872, when English writer Samuel Butler published the book Erewhon. In it, the main character visits a futuristic, closed society that banned machines because they were improving too fast and people feared they would become smarter than humans and take over. Butler was inspired by Darwin’s theory of evolution and by the rapid industrialization he saw in England over his lifetime. https://www.marxists.org/reference/archive/butler-samuel/1872/erewhon/ch23.htm
“People occlusion” is an awesome new phrase. This technique, coupled with better object recognition algorithms, will lead to a revolution in augmented reality. https://www.youtube.com/watch?v=vkS-VqAss4s
A new machine can pump oxygenated blood into donor hearts and lungs, keeping them viable for transplant several hours longer than the current maximum. Technologies like this will someday benefit human cryonics. https://www.bbc.com/news/uk-england-cambridgeshire-51975351
In the U.S., black people might have higher blood pressure than whites because the former have more skin pigment, which blocks UV light from entering skin cells. When light enters those cells, it triggers the release of nitric oxide into the bloodstream, which lowers blood pressure. The blood pressure disparity partly explains why whites live longer than blacks. https://www.outsideonline.com/2411055/free-fitness-apps-online-classes-programs
More on the project to map all the world’s seafloor by 2030: 75% of it will only be mapped to a measly fidelity of 1 depth measurement per 400 x 400 meter grid square. https://www.mdpi.com/2076-3263/8/2/63/html
“There are physical limits to how small we can make [information] storage particles…Once we conquer the ultimate small storage particle, we will be able to set standards – both standards for information and standards for storage.” https://futuristspeaker.com/future-scenarios/the-future-of-libraries/
Self-replicating Bracewell probes might be ideal for exploring and monitoring the galaxy. They would have limited AI and downgraded technology, and would only be able to make copies of themselves, transmit data back to the home planet, and talk to other intelligent species if certain conditions were met. Such probes would be too handicapped to start thinking for themselves and turn against the home planet, and if one were captured or destroyed, it wouldn’t be much of a loss since it would only contain second-rate technology and no information about the home planet. https://en.wikipedia.org/w/index.php?title=Bracewell_probe&oldid=908951238
The ongoing coronavirus quarantine reveals how autonomous, electric cars will improve things: in many cities, air pollution and traffic jams have nearly disappeared because people aren’t driving. The skies are bluer in Los Angeles than many residents can remember. https://www.nytimes.com/interactive/2020/03/22/climate/coronavirus-usa-traffic.html
The White House announced at a press conference that coronavirus will probably kill 100,000 – 240,000 Americans. That’s actually not the worst-case scenario, as it is built on the assumption that the strict quarantine measures stay in place. https://www.politico.com/news/2020/03/31/trump-briefing-coronavirus-158079
In early 2015, Bill Gates gave a TED Talk about the world’s unreadiness for a pandemic. The scenario he described was almost a dead ringer for today’s coronavirus outbreak. https://www.youtube.com/watch?v=6Af6b_wyiwI
Gates was probably citing this statement Elon Musk made three months earlier: “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.” https://bigthink.com/ideafeed/elon-musk-we-should-be-very-careful-about-artificial-intelligence
The superstructure jutting up from an aircraft carrier’s deck is called its “island,” and it is full of human crewmen whose jobs require them to see the vessel’s surroundings. One specialized compartment, called the “island camera room,” exists so a person can video record aircraft takeoffs and landings for safety and training reasons. The latest U.S. carriers have deleted the room and replaced it with CCTV cameras that a person monitors from an office below decks. Would a fully automated aircraft carrier need anything more than a skeletal tower with cameras and other sensors mounted on it as its island? https://www.thedrive.com/the-war-zone/32614/heres-what-this-panoramic-windowed-room-does-on-american-aircraft-carriers
North Korean fighter plane squadrons secretly fought U.S. planes during the Vietnam War. ‘Vietnamese pilot Dinh said of the Koreans: “They kept everything secret, so we didn’t know their loss ratio, but the North Korean pilots claimed 26 American aircraft destroyed. Although they fought very bravely in the aerial battles, they were generally too slow and too mechanical in their reactions when engaged, which is why so many of them were shot down by the Americans. They never followed flight instructions and regulations either.”‘ https://nationalinterest.org/blog/buzz/yes-north-korea-sent-jets-and-pilots-fight-america-vietnam-134227
Smart bombs keep getting smarter. The “BLU-129” is a standard-sized bomb (500 lbs and 7 ft. long), but the size of its explosion can be dialed up or down by the bomber crew, even after they’ve dropped it. This lets them minimize collateral damage if, say, a little kid walks into the target area a few seconds before the bomb hits. https://nationalinterest.org/blog/buzz/blu-12-bomb-air-forces-new-aerial-sniper-129187
Video of a low-flying, supersonic jet shattering the windows of buildings. Sonic booms are one of the main reasons supersonic passenger jets never became popular. https://youtu.be/2eoTqLnL0WI
‘A former F-16 pilot, Lee also has 1,500 hours in the [F-4] Phantom. He still recalls the first time he took to the air in one. “I was shocked at how much more difficult it was to fly than I thought it would be,” he told me. “When I got home, I told my wife, ‘I think I just traded in a Porsche for a ’72 Cadillac.”‘ https://www.airspacemag.com/military-aviation/where-have-all-the-phantoms-gone-96320627/
Here’s a fascinating article on “rarefaction wave” (RAVEN) guns, which are tank cannons that vent gas out of their backs, somewhat like recoilless rifles (e.g., bazookas). If RAVEN weapons are fully developed, they could let small, light tanks fire powerful shells that only today’s heavy tanks can shoot. ‘A general rule of thumb, according to Technology of Tanks, from Jane’s, is that a vehicle needs to weigh about one ton for every nine hundred newtons of force exerted on it. This means for the current 120-millimeter M256 cannon shooting a M829A3 Anti-Tank Shell, a vehicle would have to weigh twenty-five tons to withstand the recoil force.’ Interestingly, that means a tank as small as a T-55 (36 tons) could be retrofitted with the same powerful cannon as the U.S. M1 Abrams. https://nationalinterest.org/blog/the-buzz/the-us-army-wants-put-big-guns-small-tanks-23041
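The quoted rule of thumb is easy to turn into a quick calculation. A minimal sketch (note that the ~22,500 N recoil figure below is back-derived from the twenty-five-ton number in the quote, not taken from a primary source):

```python
# Jane's rule of thumb from Technology of Tanks: a vehicle needs to
# weigh about one ton for every 900 newtons of recoil force exerted on it.
NEWTONS_PER_TON = 900.0

def min_vehicle_weight_tons(recoil_force_newtons: float) -> float:
    """Minimum vehicle weight (tons) needed to withstand a given recoil force."""
    return recoil_force_newtons / NEWTONS_PER_TON

# Back-derived from the article: a 25-ton minimum implies the M256 cannon
# firing an M829A3 shell exerts roughly 25 * 900 = 22,500 N on the vehicle.
m256_recoil_n = 25 * NEWTONS_PER_TON

print(min_vehicle_weight_tons(m256_recoil_n))  # 25.0

# A 36-ton T-55 clears that threshold, which is the point about retrofitting
# it with the Abrams' gun even without RAVEN technology.
print(36 >= min_vehicle_weight_tons(m256_recoil_n))  # True
```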