Chinese scientists have figured out how to differentiate Uyghurs, Tibetans, and Koreans from their facial features. https://doi.org/10.1002/widm.1278
Elon Musk says he will launch humans to Mars in 2026, and all new cars produced in 2030 will have fully autonomous driving capabilities. https://youtu.be/fjLa834mv8Q
“Over the longer term, perhaps in another 15 or 20 years, you will see a complete transformation in therapeutic medicine, because every pharmaceutical company is investing, and every biotech company is also contributing to the development of new targets for drug therapy, based upon the genome. And the therapies that we use 15 or 20 years from now will be directed much more precisely towards the molecular problem in things like cancer, or mental illness, than anything that we currently have available.” –Francis Collins during a 2000 speech about the future of genetic medicine https://web.ornl.gov/sci/techresources/Human_Genome/project/clinton3.shtml
“Ecobots” are robots that can eat organic matter and turn the energy into electricity. The most advanced model can even poop. In the distant future, I think some robots and posthumans will be able to derive energy from the full range of organic and synthetic sources, as this will be the most versatile setup (very hard to “starve” if you can eat plants AND plug in to an electrical outlet). https://www.treehugger.com/scientists-invent-robot-that-eats-organic-matter-then-poops-4860413
Manmade objects now outweigh all the plants and animals on Earth. By weight, most of our manufactured goods come in the form of concrete, bricks and other building materials comprising structures and infrastructure. https://www.bbc.com/news/science-environment-55239668
It’s hard to remember now, but early in the pandemic, there was some hope that it would be over by the end of 2020, thanks to a vaccine being invented with surprising speed, or to the virus somehow turning out to not be as bad as expected. On March 31, Dr. Fauci flatly rejected those hopes by predicting that there would be a serious second wave in the fall, which happened. https://www.dailymail.co.uk/news/article-8171657/Fauci-expects-America-suffer-coronavirus-outbreak-fall.html
Tyler Cowen’s prediction from April was basically right. The U.S. never had a national lockdown strategy and muddled through the year with a mix of state-level strategies that “yo-yoed” based on infection levels. It’s now the fall, and individual COVID-19 survival rates are higher because we’ve learned better ways to treat it, but overall deaths are higher than ever because so many more people are getting infected. We have vaccines (and it sounds like their invention might have come earlier than Cowen predicted), but distribution has just started, and his point that it will take a long time to inoculate the American population will hold true. https://marginalrevolution.com/marginalrevolution/2020/04/where-we-stand.html
Bill Gates doesn’t think the COVID-19 related restrictions in the U.S. will completely go away for 12 to 18 months. Everything hinges on how many people get vaccinated, and how quickly. https://youtu.be/dCt23D8VXpc?t=473
This is the fourth…and LAST…entry in my series of blog posts analyzing the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. You can view the previous installments of this series here:
“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”
MOSTLY RIGHT
Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.
While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.
If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly, conspiracy thinking and closed-mindedness), spending decisions, and mental health.
Human transactions and decisions still require a “human agent of responsibility”: Autonomous cars aren’t allowed to drive unless a human is in the driver’s seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Probably most decisions that humans make are done without consulting a “machine-based intelligence.” Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with the benefit of little or no research. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.
“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”
MOSTLY RIGHT
Gunfire detection systems, which consist of networks of microphones distributed across an area and which use machine intelligence to recognize the sounds of gunshots and triangulate their origins, were installed in over 100 cities at the end of 2019. The dominant company in this niche industry, “ShotSpotter,” used human analysts to review its systems’ results before forwarding alerts to local police departments, so the systems were not truly automated, but they nonetheless made heavy use of machine intelligence.
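ShotSpotter’s exact methods are proprietary, but the underlying time-difference-of-arrival (TDOA) idea can be sketched in a few lines. Everything in this sketch – the microphone positions, the simulated shot, the brute-force grid search – is illustrative, not the company’s actual algorithm:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second in air at roughly room temperature

def arrival_time(mic, source):
    """Seconds for sound to travel from the source to a microphone."""
    return math.dist(mic, source) / SPEED_OF_SOUND

def locate_shot(mics, times, size=1000, step=5):
    """Brute-force TDOA: find the grid point whose predicted arrival-time
    differences (relative to the first microphone) best match the observed ones."""
    observed = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    for x in range(0, size + 1, step):
        for y in range(0, size + 1, step):
            point = (x, y)
            predicted = [arrival_time(m, point) - arrival_time(mics[0], point)
                         for m in mics]
            err = sum((o - p) ** 2 for o, p in zip(observed, predicted))
            if err < best_err:
                best, best_err = point, err
    return best

# Hypothetical shot at (400, 650) meters, heard by four corner microphones
mics = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
times = [arrival_time(m, (400, 650)) for m in mics]
print(locate_shot(mics, times))  # → (400, 650)
```

Real deployments replace the grid search with a closed-form or least-squares solver and must first classify which sounds are gunshots at all, which is where the machine intelligence (and ShotSpotter’s human analysts) come in.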
Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible.
In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.
At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.
“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”
RIGHT
In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.
Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.
During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.
With these events in mind, it’s quite true that concerns over digital privacy and the confidentiality of personal data have become “major political and social issues,” and that there’s growing displeasure at the fact that “each individual’s practically every move [is] stored in a database somewhere.” The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the “General Data Protection Regulation” (GDPR).
Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being made public by hacks has also bolstered interest in commercial encryption. “WhatsApp” is a popular text messaging app with built-in end-to-end encryption; the encryption was fully implemented in 2016, and the app had 1.5 billion users by 2019. “Tor” is a web browser with built-in encryption that became relatively common during the 2010s after it was learned that even the NSA couldn’t spy on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection for little expense and hassle, are in common use.
“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”
RIGHT
It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.
In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”
The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.
There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.
“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”
PARTLY RIGHT
As I said in part 2 of this review, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training are very common, but most workers devote only small shares of their time to them, and the fraction of time spent on workplace training doesn’t seem significantly different from what it was when the book was published.
From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.
This means the prediction was partly right, but also of questionable significance in the first place.
“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”
MOSTLY RIGHT
In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.
In 2018, a different computer program produced a painting–“Portrait of Edmond de Belamy”–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as “a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence.” That seems to fulfill the second part of the prediction (“These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.”)
Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called “EMMY” (an acronym for “Experiments in Musical Intelligence”) is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Recordings of its songs are available online; listen to a few and judge their quality for yourself.
Computer scientists at OpenAI have built a neural network called “Jukebox” that is even more advanced than EMMY, and which can produce songs complete with simulated human vocals and lyrics. While the words don’t always make sense and there’s much room for improvement, most humans have no creative musical talent at all and couldn’t do any better, and the quality, sophistication, and coherence of the entirely machine-generated songs are very impressive (audio samples are available online).
OpenAI also invented an artificial intelligence program called the “Generative Pre-trained Transformer” (GPT) to understand and write text. In 2019, the second version of the program, “GPT-2,” made its debut, and showed impressive skill at writing poetry, short news articles, and other content with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the “human range” of writing ability, since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well written in its original language.
Much of GPT-2’s poetry is also as good–or as bad–as that written by its human counterparts:
And they have seen the last light fail; By day they kneel and pray; But, still they turn and gaze upon The face of God to-day.
And God is touched and weeps anew For the lost souls around; And sorrow turns their pale and blue, And comfort is not found.
They have not mourned in the world of men, But their hearts beat fast and sore, And their eyes are filled with grief again, And they cease to shed no tear.
And the old men stand at the bridge in tears, And the old men stand and groan, And the gaunt grey keepers by the cross And the spent men hold the crown.
And their eyes are filled with tears, And their staves are full of woe. And no light brings them any cheer, For the Lord of all is dead
In conclusion, the prediction is right that there were “virtual artists” in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and “humanness” to be of interest for reasons other than the novelty of their origins. They’ve raised serious questions among humans about the nature of creative thinking, and about whether machines are capable of it or soon will be. Finally, the virtual artists were “affiliated with”–or, more accurately, owned and controlled by–groups of humans.
“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”
UNCLEAR
It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?
Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.
“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”
WRONG
Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.
That said, the video game industry’s revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry’s 2018 figure was $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of “E-Sports”).
Revenues from virtual reality games totaled $1.2 billion in 2019, meaning over 99% of the video game industry’s revenues that year DID NOT come from “virtual-experience software.” The overwhelming majority of video games were played on flat TV screens and monitors that display only 2D images. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you’re actually there in the simulated environment. While they don’t meet the technical definition of “virtual reality” games, some of them are so engrossing that they might as well be.
“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”
MOSTLY WRONG
Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.
In other words, the world’s greatest military power allocates only about 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that hacking is not the primary threat to international security, either.
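The arithmetic behind that figure, using the budget numbers above:

```python
cyber = 16.6   # FY2019 U.S. cybersecurity spending, in billions of dollars
base = 726.2   # FY2019 "base budget" for defense
oco = 77.0     # overseas contingency operations
share = cyber / (base + oco)
print(f"{share:.1%}")  # → 2.1%
```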
Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big a threat as cyberattacks. In fact, most military spending and training in the U.S. and every other country is still devoted to preparing for traditional warfare between nation-states, as evidenced by the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and the time spent practicing for large battles between organized foes.
“Small groups” of terrorists inflict disproportionate amounts of damage against society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.
Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which only killed five people, but nonetheless bore some testament to Kurzweil’s assessment of bioterrorism’s potential threat), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made it more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.
Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.
“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”
UNCLEAR
What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.
What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.
“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it to be terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: The Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).
With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.
If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.
If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.
The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, indicating that no one is actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.
Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.
“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”
PARTLY RIGHT
The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.
The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that number has climbed further, to 5,000 genetic disorders.
However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, just understanding what genes themselves do is not enough to understand human development or disease susceptibility.
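The “summed, small effects” model can be made concrete with a toy polygenic score calculation. All of the effect sizes and genotypes below are invented for illustration and are not taken from any real GWAS:

```python
# Toy polygenic score: a trait shaped by many genes, each with a small effect.
# All numbers are made up for illustration.

# Hypothetical per-variant effect sizes (e.g., cm of height per risk-allele copy)
effect_sizes = [0.12, -0.08, 0.30, 0.05, -0.15]

# One person's genotype: copies of the effect allele (0, 1, or 2) at each variant
genotypes = [2, 1, 0, 2, 1]

# The polygenic score is just the weighted sum across all the variants
score = sum(e * g for e, g in zip(effect_sizes, genotypes))
print(f"{score:+.2f}")  # +0.11
```

No single variant dominates the total; flip any one genotype and the score shifts only slightly, which is why single-gene explanations of traits like height rarely hold up.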
Returning to the text of the prediction, the meaning of “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.
The aging process is better understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.
“The expected life span…[is now] over one hundred.”
WRONG
The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.
“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”
WRONG
Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.
Top national security experts in the U.S. also assigned biological weapons low priority, as evidenced in the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report only mentions “biological weapons/warfare” twice. By contrast, “migration/migrants/immigration” appears 11 times, “nuclear weapon” eight times, and “ISIS” 29 times.
As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.
Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it’s unclear if the “bar” has gotten low enough for a graduate student to do it. Making a pathogen in a lab that has the qualities necessary for a biological weapon, verifying its effects, purifying it, creating a delivery system for it, and disseminating it, all without being caught before completion or inadvertently infecting yourself before the final step, is much harder than hysterical news articles and self-interested talking-head “experts” suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds with levels of expertise exceeding those of a typical graduate student, years of work, and millions of dollars.
“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”
RIGHT
The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.
Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.
“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”
MOSTLY RIGHT
Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren’t considered accurate enough to “diagnose” health conditions. Rather, their role is to detect and alert wearers to signs of potential health problems, whereupon the wearers consult medical professionals, who have more advanced machinery, to receive a diagnosis.
By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2,” and the “Apple Watch Series 4 and 5” had FDA-approved electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, 2,000 received alerts in 2018 from their devices of possible heartbeat problems. Fifty-seven percent of people in that subset sought medical help upon getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately, 84% of people in the subset were confirmed to have atrial fibrillation.
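Working through those figures (my assumption: the 57% and 84% percentages both refer to the 2,000 alerted wearers):

```python
owners = 400_000   # Apple Watch owners under irregular-heartbeat monitoring
alerted = 2_000    # received alerts of possible heartbeat problems in 2018

share_alerted = alerted / owners        # fraction of owners who got an alert
sought_help = round(0.57 * alerted)     # 57% consulted a medical professional
confirmed = round(0.84 * alerted)       # 84% confirmed to have atrial fibrillation

print(f"{share_alerted:.1%} alerted; {sought_help} sought help; {confirmed} confirmed")
# 0.5% alerted; 1140 sought help; 1680 confirmed
```

In other words, only about one wearer in 200 was ever alerted, but for those who were, the alert was very likely pointing at a real problem.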
The Apple Watches also have “hard fall” detection features, which use accelerometers to recognize when their wearers suddenly fall down and then don’t move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury from a fall count as an “acute health condition” per the prediction’s text?).
A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by a small company called “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.
Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. The devices are reasonably accurate, and are typically strapped to the wearer’s upper arm or waist during a jog, or kept in a pocket during other types of activity. Having a smartphone in your pocket isn’t literally the same as having it “built into [your] clothing” as the prediction says, but it’s close enough to satisfy the spirit of the prediction. In fact, being able to easily insert a device into (and remove it from) any article of clothing with a pocket is better than having a device integrated into the clothing, since it allows for much more flexibility of attire: if you want to try out a new jogging route and measure how long it is, you don’t have to remember to wear your one and only T-shirt with a built-in activity monitor.
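For the technically curious, the core of a step-counting app can be sketched as peak detection on the accelerometer’s magnitude signal. This is a deliberately minimal sketch with synthetic readings; real pedometer algorithms add low-pass filtering, adaptive thresholds, and minimum spacing between steps:

```python
import math

def count_steps(samples, threshold=11.0):
    """Count steps as local peaks in acceleration magnitude above a threshold.

    samples: list of (x, y, z) accelerometer readings in m/s^2.
    The threshold of 11.0 is an arbitrary value just above gravity (~9.8).
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    steps = 0
    for i in range(1, len(mags) - 1):
        # A step registers as a local maximum above the threshold
        if mags[i] > threshold and mags[i] >= mags[i - 1] and mags[i] > mags[i + 1]:
            steps += 1
    return steps

# Synthetic signal: gravity (~9.8) plus periodic bumps from four footfalls
walk = [(0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.8),
        (0, 0, 9.8), (0, 0, 12.1), (0, 0, 9.8),
        (0, 0, 9.8), (0, 0, 12.8), (0, 0, 9.8),
        (0, 0, 9.8), (0, 0, 12.3), (0, 0, 9.8)]
print(count_steps(walk))  # 4
```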
Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, no items of jewelry with embedded health tracking capabilities were available in 2019, so that part of the prediction is wrong.
“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”
RIGHT
In 2014, a chatbot named “Eugene Goostman” made global headlines by “passing the Turing Test” at an event held at the University of Reading. However, AI experts soon determined it was invalid because the dialog between the human judges and the machine only lasted five minutes. Ray Kurzweil himself even interacted with Eugene Goostman and posted the conversation on his website to show the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:
Kurzweil: How old are you?
Eugene: I’m a little boy. 13 years old.

Kurzweil: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!

Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

Kurzweil: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!
In 2018, a Google AI program called “Duplex” also made headlines for “passing the Turing Test” in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While an impressive technological feat, experts again disagreed with the media’s portrayal of its capabilities, pointing out that the human-machine interactions weren’t valid Turing Tests because they were too short and focused on a narrow subject of conversation.
“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”
RIGHT
The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness. A few recent examples of this include:
Those are all thoughtful articles written by experts whose credentials are relevant to the subject of machine consciousness. There are countless more articles, essays, speeches, and panel discussions about it available on the internet.
Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.
“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”
RIGHT
Critics often, and rightly, point out that the most impressive “A.I.s” owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.
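The point about human labels doing the heavy lifting holds even for the simplest possible learner. In this toy nearest-centroid classifier, the 2-D “features” are made-up stand-ins for whatever a real system extracts from an image, and every prediction traces straight back to the human-supplied tags:

```python
# Every "intelligent" prediction below depends entirely on the human-supplied
# labels; remove the "cat"/"dog" tags and there is nothing to learn from.
labeled_data = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],   # humans tagged these "cat"
    "dog": [(3.0, 3.1), (2.8, 3.3), (3.2, 2.9)],   # humans tagged these "dog"
}

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# One "prototype" point per human-defined category
centroids = {label: centroid(pts) for label, pts in labeled_data.items()}

def classify(point):
    # Assign the label of the nearest class centroid (squared Euclidean distance)
    return min(centroids, key=lambda lab: (point[0] - centroids[lab][0]) ** 2
                                        + (point[1] - centroids[lab][1]) ** 2)

print(classify((1.0, 1.1)))  # cat
print(classify((3.1, 3.0)))  # dog
```

Industrial-scale image recognition replaces the centroids with deep neural networks, but the dependence on mountains of human-organized examples is the same.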
Finally, even the smartest and most capable machines can’t turn themselves on without human help, and still have very “brittle” and task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars, where the computers were smart enough to drive safely by themselves in almost all road conditions, but laws required the vehicles to watch the human in the driver’s seat and stop if he or she wasn’t paying attention to the road and touching the controls.
Well, well, well…that’s it. I have finally come to the end of my project to review Ray Kurzweil’s predictions for 2019. This has been the longest single effort in the history of my blog, and I’m glad the next round of his predictions pertains to 2029, so I can have time to catch my breath. I would say the experience has been great, but like the whole year of 2020, I’m relieved to be able to turn the page and move on.
Happy New Year!
Links:
Advances in AI during the 2010s forced humans to examine the specialness of human thinking, whether machines could also be intelligent and creative and what it would mean for humans if they could. https://www.bbc.com/news/business-47700701
In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve. https://www.factcheck.org/2013/03/bloombergs-obesity-claim/
“Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”? https://en.wikipedia.org/wiki/Auto-Tune
The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks. https://www.ersj.eu/journal/1245