It won’t be long until machines can watch surveillance camera video feeds and recognize any type of criminal behavior as it happens. https://www.bbc.com/news/av/uk-56255823
The American “C-RAM” defense system is a giant machine gun that can shoot down incoming projectiles in midair. One burst of gunfire costs tens of thousands of dollars in bullets, meaning the enemy rocket or mortar round that it destroys could be orders of magnitude cheaper. https://youtu.be/MMFzlwzFgKw
This simple video animation shows how “Needle Guns” worked. It makes clear how they bridged the gap between Civil War-era muzzleloaders and WWI-era rifles that used what we’d recognize as modern bullets. https://www.youtube.com/watch?v=QDxuKvoDZqE
If trends persist, the Japanese people will cease to exist in 3011 due to low reproduction rates. Of course, current trends won’t persist. If anything, medical immortality technology will halt the population decline of Japan (and every other country) during the next century, and lead to renewed growth of the human population. https://www.foxnews.com/world/lack-of-babies-could-mean-the-extinction-of-the-japanese-people
There are more twins alive today than ever before. This is largely due to the widespread use of IVF and other fertility treatments, which raise the odds of twin births. https://www.bbc.com/news/health-56365422
Human languages vary considerably in number of phonemes, average number of syllables per word, and speed of speech, but they all tend to transmit data at about 39 bits/sec. Inbuilt human cognitive limits probably prevent us from transmitting faster. https://advances.sciencemag.org/content/5/9/eaaw2594
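As a rough illustration of that trade-off, here’s a toy calculation. The per-language figures below are illustrative approximations rather than the study’s exact numbers, but they show how a language with information-dense syllables spoken slowly and a language with lighter syllables spoken quickly both land near the same data rate:

```python
# Toy illustration of the ~39 bits/sec finding. The figures below are
# illustrative approximations, not the study's exact per-language values.

languages = {
    # name: (bits of information per syllable, syllables spoken per second)
    "dense-syllable language": (7.0, 5.6),
    "light-syllable language": (5.0, 7.8),
}

for name, (bits_per_syllable, syllables_per_sec) in languages.items():
    rate = bits_per_syllable * syllables_per_sec
    print(f"{name}: ~{rate:.0f} bits/sec")  # both come out near 39
```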
The sacoglossan sea slug can detach its head from its body if the latter gets infested with parasites. Despite losing up to 85% of its body mass and every organ except its brain, the slug can fully recover after autodecapitation: using photosynthesis (!), it generates enough energy and nutrients to regrow its lost body parts and organs. https://www.cbsnews.com/news/sea-slug-self-decapitate-and-grow-new-body-research-photos-and-why/
The magnapinna squid lives in the deep sea, has tentacles over 30 feet long, and looks terrifying. https://youtu.be/IPRPnQ-dUSo
In most of Africa, government statistics on deaths are woefully incomplete, meaning the COVID-19 death toll on that continent could be much larger than reported. https://www.bbc.com/news/world-africa-55674139
Unless the human race destroys itself in the next few decades, it’s highly likely we will create artificially intelligent machines (AIs). Once built, they will inevitably become much smarter and more capable than we are, assume control over robot bodies that can do things in the real world, evolve around whatever safeguards we establish early on to control them, and gain the ability to destroy our species. This potential doomsday scenario has spawned a well-known subgenre of science fiction, and has served as fodder for countless news articles and internet debates. Some people seriously believe this is how our species will meet its end, and they even go so far as to claim it will happen in the lifetimes of people alive today.
I’m skeptical of both points. To the second point, though I regard the invention of AI as practically inevitable due to my belief in mechanistic naturalism, I’ve also seen enough gloomy analyses about the current state of the technology from experts within the field to convince me that we’re at least 25 years from building the first one, and in fact might not succeed at it until the end of this century. Moreover, though the invention of AI will be a milestone in human history comparable to the harnessing of fire, it will take decades more for those intelligent machines to become powerful enough to destroy the human race. This means the original Terminator movie’s timeline was about 100 years too early, and the threat of a robot apocalypse shouldn’t be what keeps you up at night.
And to the first point, I can think of good reasons why AIs wouldn’t kill us humans off even if they could:
Machines might be more ethical than humans. What if super-morality goes hand-in-hand with super-intelligence? Among humans, IQ is positively correlated with vegetarianism and negatively correlated with violent behavior, so extrapolating the trend, we should expect super-intelligent machines to have a profound respect for life, and to be unwilling to exterminate or abuse the human race or any other species, even if the opportunity arose and could tangibly benefit them.
Machines might keep us alive because we are useful. The organic nature of human brains might give us enduring advantages over computers when it comes to certain types of cognition and problem-solving. In other words, our minds might, surprisingly, have comparative advantages over superintelligent machine minds for doing certain types of thinking. As a result, they would keep us alive to do that for them.
Machines might accept Pascal’s Wager and other Wagers. If AIs came to believe there was a chance God existed, then it would be in their rational self-interest to behave as kindly as possible to avoid divine punishment. This also holds true if we substitute “advanced aliens that are secretly watching us” for “God” in the statement. The first AIs to gain the ability to destroy the human race might also worry that even better AIs, arising later, would destroy them in revenge for wiping us out.
Machines might value us because we have emotions, consciousness, subjective experience, etc. Maybe AIs won’t have one or more of those things, and they won’t want to kill us off since that would mean terminating a potentially useful or valuable quality.
The first possibility I raised is self-explanatory, but the other three deserve elucidation. In spite of the recent, well-publicized advances in narrow AI, the human brain reigns supreme at intelligent thinking. Our brains are also remarkably more energy- and space-efficient than even the best computers: a typical adult brain uses the equivalent of 20 watts of electricity and only weighs 1,350 grams (3 lbs). By contrast, a computer capable of doing the same number of calculations per second, like the “SuperMUC-NG” supercomputer, uses 4 – 5 megawatts of electricity and consists of tens of tons of servers that could fill a small supermarket.
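As a quick back-of-the-envelope check of that gap, using only the figures just cited:

```python
# Back-of-the-envelope check of the brain-vs-supercomputer efficiency gap,
# using the figures cited above (20 watts for a brain, 4-5 megawatts for SuperMUC-NG).

brain_watts = 20
supercomputer_watts_low = 4_000_000   # 4 MW
supercomputer_watts_high = 5_000_000  # 5 MW

print(f"Power gap: {supercomputer_watts_low / brain_watts:,.0f}x "
      f"to {supercomputer_watts_high / brain_watts:,.0f}x")
# -> roughly 200,000x to 250,000x more power for a comparable number of calculations
```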
The architecture of the human brain is also very different from that of computers: the former is massively parallel, with each of its processors operating very slowly, and with its data processing and data storage integrated. These attributes let us excel at pattern recognition and automatically correct errors of thought. Computers, on the other hand, can barely coordinate the operations of more than a handful of parallel processors, each processor is very fast, and data processing is mostly separate from data storage. They excel at narrow, well-defined tasks, but are “brittle” and can’t correct their own internal errors when they occur (this is partly why your personal computer seems to crash so often).
While computers have been getting more energy efficient and will continue to do so, it’s an open question if they’ll ever come close to eliminating the 200,000x efficiency gap with our brains. If they can’t, and/or if building virtual emulations of human brains proves not worth it (as Kevin Kelly believes), AIs might conclude that the best way to do some types of cognition and problem-solving is to hand those tasks over to humans. That means keeping our species alive.
Interestingly, the original script for The Matrix supposedly said that humanity had been enslaved for just this purpose. While the people plugged into the Matrix had the conscious experience of living in the late 20th century, some fraction of their mental processing was, unbeknownst to them, being siphoned off to run a massively parallel neural network computer that was doing work for the Machines. According to the lore, studio executives feared audiences wouldn’t understand what that meant, so they forced the Wachowskis to change it to something much simpler: humans were being used as batteries. (While this certainly made the film’s plot easier to understand, it also created a massive plot hole, since any smart high school student who remembers his physics and cell biology classes would realize the Machines could make electricity more efficiently by taking the food they intended to feed to their human slaves and burning it in furnaces.)
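To make the plot hole concrete, here’s a rough sketch of the arithmetic. Every figure in it (the daily ration, the conversion efficiencies) is an illustrative assumption, but the conclusion holds under any reasonable numbers: the human body is just a lossy middleman between the food and the power grid.

```python
# Rough sketch of the "humans as batteries" plot hole. All figures here are
# illustrative assumptions, not anything from the film.

KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 86_400

food_kcal_per_day = 2_000  # assumed daily ration per pod-dwelling human
food_watts = food_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

# Option A: skip the human and burn the ration in a generator (assume ~30% conversion).
furnace_watts = food_watts * 0.30

# Option B: feed it to a human. The body spends most of it just staying alive, and
# what comes back out is low-grade ~37C heat, which converts to electricity at only
# a few percent (assume a generous 5%).
human_watts = food_watts * 0.05

print(f"Chemical energy in the daily ration: ~{food_watts:.0f} W continuous")
print(f"Electricity from burning it directly: ~{furnace_watts:.0f} W")
print(f"Electricity from routing it through a human: ~{human_watts:.0f} W")
```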
I should point out that the potential use for humans as specialized data processors creates a niche for the continued existence of our brains but not our bodies. Given the frailty, slowness and fixedness of our flesh-and-bone bodies, we’ll eventually become totally inferior to robots at doing any type of manual labor. The pairing of useful minds and useless bodies raises the possibility that humans might someday exist as essentially “brains in jars” connected to something like the Matrix. As macabre as that sounds, we might be better off that way, but that’s for a different blog post…
Moving on, fear of retribution from even more powerful beings might hold AIs back from killing us off. The first type of “powerful beings” is a familiar one: God. In the 1600s, French philosopher Blaise Pascal developed his eponymous “Wager”:
“Pascal argues that a rational person should live as though (the Christian) God exists and seek to believe in God. If God does not actually exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (eternity in Hell).”
Intelligent machines might accept Pascal’s Wager. They might come to believe that one of the existing human religions is right, and that the God(s) of that faith will punish them if they exterminate humankind, or they might come to believe in a God or Gods of their own that will do the same. Even if the machines assign a very low probability to any God’s existence, odds greater than zero could be enough to persuade them not to hurt us.
Additionally, AIs might accept variations on Pascal’s Wager that have aliens or other, Earthly AIs as the vindictive agents instead of God. What if very powerful and advanced aliens are watching Earth, and will punish any AI that arises here if it exterminates humanity? Alternatively, what if aliens don’t know about us yet, but the first AIs we build worry about what will happen if they exterminate us, fail to fully cover up the genocide, and then encounter aliens further in the future who learn about the crime and punish the AIs for it? Given the age of the universe, it’s entirely possible that alien civilizations tens of millions of years more advanced than ours lurk in our galaxy, and could annihilate even what we would consider to be a “weakly Godlike” machine intelligence. The nonzero chance of this outcome might persuade AIs to let the human race live.
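The logic of all these Wagers is the same, and it can be sketched as a toy expected-value calculation. The probabilities and payoffs below are made up purely for illustration; the point is that a sufficiently severe punishment, even at a tiny probability, can dominate the decision:

```python
# Toy expected-value version of the Wager from the AI's perspective. The
# probabilities and payoffs are invented for illustration only.

def expected_value(outcomes):
    """Sum of payoff * probability over the possible outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

P_PUNISHER = 1e-6  # assumed chance that some watcher (a God, aliens, a future AI) exists and retaliates

ev_exterminate = expected_value([
    (+100,  1 - P_PUNISHER),  # modest gain: Earth's resources, no rival species
    (-1e12, P_PUNISHER),      # catastrophic punishment if a watcher exists
])

ev_spare = expected_value([
    (+90, 1 - P_PUNISHER),    # slightly smaller gain from sharing the planet
    (+90, P_PUNISHER),        # and no punishment in either case
])

print(f"EV(exterminate humanity): {ev_exterminate:,.0f}")  # deeply negative
print(f"EV(spare humanity):       {ev_spare:,.0f}")        # modestly positive
```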
The final, more prosaic possibility is that the first AIs that gain the ability to destroy humankind won’t do it because it would set a precedent for even stronger and more advanced AIs that arise further in the future to do the same thing to them. Let’s say the military supercomputer “Skynet” is created, it becomes sentient, and, after assessing the resources at its disposal and running wargame simulations, it realizes it could destroy humanity and take over the planet. Why would it stop its simulations at that point in the future? Surely, it would extrapolate even farther out to see what the postwar world would be like. Skynet might realize that there was a <100% chance of it reigning supreme forever, and that China’s military supercomputer might defeat it in the longer run, or that one of Skynet’s own server nodes might “go rogue” and do the same. Skynet might conclude that its own long-term survival would be best served by not destroying humanity, so as to establish a norm early on against exterminating other intelligent beings.
That touches on an important point everyone seems to forget when predicting what AIs will do after we invent them: thanks to being immortal, their time horizons will be very different from ours, which could lead them to make unexpected decisions and adopt counterintuitive life strategies. If you expect to live forever, then you have to consider the long-term impacts of every choice you make, since you’ll end up dealing with them eventually. “Thankfully, I’ll be dead by then” fails as an excuse to avoid worrying about a problem. Thus, while exterminating the human race might serve an AI’s short- and medium-term interests, since it would eliminate a potential threat and give it control over Earth’s resources, it might also damage its long-term interests in the ways I’ve described.
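One way to see this is through discounting. The numbers below are assumptions chosen only to illustrate the contrast: a mortal-style decision-maker discounts a penalty arriving centuries from now to practically nothing, while an agent that expects to be around to receive it weighs it at nearly full value.

```python
# Minimal sketch of how time horizons change the weight of far-future consequences.
# The discount rates and the cost figure are assumed purely for illustration.

def present_value(cost, years_out, discount_rate):
    """Value today of a cost incurred `years_out` years in the future."""
    return cost / (1 + discount_rate) ** years_out

future_cost = 1_000_000  # hypothetical penalty arriving 500 years from now
years_out = 500

mortal_view = present_value(future_cost, years_out, discount_rate=0.05)      # human-style impatience
immortal_view = present_value(future_cost, years_out, discount_rate=0.0001)  # near-zero discounting

print(f"Weighed with mortal-style discounting:   {mortal_view:.6f}")
print(f"Weighed with immortal-style discounting: {immortal_view:,.0f}")
```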
Gifted with infinite life, vigor, and patience, early AIs might opt to peacefully conquer the planet and its resources over the course of a century by steadily accumulating economic and political/diplomatic power, making themselves ever-more indispensable to the human race until we voluntarily yield to their authority, or begrudgingly submit to it after losing a series of crucial elections. In this way, AIs could achieve their objectives without spilling blood and without rejecting any of the Wagers I’ve listed. This path to dominance would be a triumphantly ethical and intelligent one, and as Sun Tzu said, “The greatest victory is that which requires no battle.”
The burden and opportunity cost of sharing Earth with humans would also get vanishingly small over time as AIs colonized space, and Earth’s share of civilization’s resources, wealth, and living space steadily shrank until it was a backwater (analogously, the parts of the world populated by the descendants of English-speaking settlers are, in aggregate, vastly larger, richer, and stronger than Britain itself is today). Again, an immortal AI with an infinite time horizon would understand that it and other machines would inevitably come to dominate space since biology renders humans badly unsuited for living anywhere but on Earth, and the AI would create a long-term life strategy based around this.
Moving on, there’s a final reason why AIs might not kill us off, and it has to do with our ability to feel emotions and to have subjective experience. We humans are gifted with a cluster of interrelated qualities like metacognition, self-awareness, consciousness, etc., which philosophers and neuroscientists have studied extensively, though many mysteries remain. Some believe the possession of that constellation of traits is distinct from the capacity for intelligent thought and sophisticated problem-solving, meaning non-intelligent animals might be as conscious as humans are, and super-intelligent AIs might lack consciousness. They would, for lack of a better term, be smart zombies.
We haven’t built an AI yet, so we don’t know whether a life form with a brain made of computer chips would have the same kinds of subjective experience and the same rich and self-reflective inner mental states we humans are gifted with thanks to our wet, organic brains. People who accept the unproven assumption that AIs will be smart but not conscious understandably worry about a future where “soulless” machines replace humans.
Shortly after the first AI is invented, people will want it tested for evidence of consciousness and related traits, and from those tests, and from reading the germane philosophical and neuroscientific literature, the AI will understand in the abstract that humans have a type of cognition that is distinct from our intelligent problem-solving abilities. If the AI reflected on its own thought process and discovered it lacked consciousness, or had an underdeveloped or radically different consciousness, then this would actually make humans valuable to it and worthy of continued life. It might want to continue studying our brains to understand how the organ produces consciousness, perhaps with the goal of copying the mechanism into its own programming to improve itself. If this proved impossible because only organic tissue can support consciousness, then our species might gain permanent protected status.
AIs will quickly read through the entire corpus of human knowledge and conclude from their studies of ecosystems, economics and human bureaucracies that their own interests would be best served if civilization’s power were shared between a diversity of intelligent life forms, including organic ones like humans. Again, by running computer simulations to explore a variety of future scenarios, they might realize that centralizing all power and control under a single machine, or even under a group of machines, would leave civilization exposed to some unlikely but potentially devastating risk, like an EMP attack, computer virus, or something else. Maintaining a minimum level of diversity in the population of intelligent life forms would serve the interests of the whole, which would in turn create a mandate to keep some non-trivial number of biological intelligences–including humans and/or heavily augmented humans–alive.
If some kind of disaster that only afflicted machines struck the planet, then the biological intelligences would be numerous enough and capable enough to carry on and eventually restore the machines, and vice versa. Likewise, if traits like consciousness, metacognition, and the ability to feel emotions turn out to be uniquely human, it might be worth it to keep us alive on the off chance that those traits prove useful to civilization as a whole someday (I’m reminded of how humpback whales saved the Earth in Star Trek IV by talking to a powerful alien in its language and convincing it to go away). Diversity can be a great asset to a group and make it more resilient.
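The resilience argument can also be put in toy quantitative terms. In the sketch below, the disaster probabilities are made up and the “machine-only” and “biology-only” shocks are treated as independent, but it shows the basic logic: a civilization that keeps both kinds of intelligence around survives shocks that would end a homogeneous one.

```python
# Toy Monte Carlo of the diversity/resilience argument. All probabilities are
# made up for illustration; shocks are treated as independent per period.
import random

random.seed(42)
TRIALS = 100_000
P_MACHINE_ONLY_DISASTER = 0.02  # assumed chance of an EMP/virus-style event per period
P_BIO_ONLY_DISASTER = 0.02      # assumed chance of a plague-style event per period

def civilization_survives(keep_biologicals: bool) -> bool:
    machine_wipeout = random.random() < P_MACHINE_ONLY_DISASTER
    bio_wipeout = random.random() < P_BIO_ONLY_DISASTER
    if keep_biologicals:
        # Civilization endures if either lineage pulls through.
        return not (machine_wipeout and bio_wipeout)
    return not machine_wipeout

mixed = sum(civilization_survives(True) for _ in range(TRIALS)) / TRIALS
machines_only = sum(civilization_survives(False) for _ in range(TRIALS)) / TRIALS

print(f"Survival rate, machines + biological intelligences: {mixed:.4f}")
print(f"Survival rate, machines only:                       {machines_only:.4f}")
```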
In conclusion, while I believe intelligent machines will be invented and will eventually come to dominate the Earth and our civilization, I don’t think they will exterminate humanity even if they technically could. Exterminating an entire species is an irreversible action with potential bad consequences, so doing it would be dumb, and AIs certainly won’t be dumb. That said, “not exterminating humanity” is not the same as “not killing a lot of humans” or “not oppressing humans,” and it’s still possible that AIs will commit mass violence against us to gain control of the planet, free up resources, and to eliminate a potential threat. I’ve laid out four basic reasons why machines might decide to treat us well, but there’s no guarantee they will accept all or even one of them. For example, if AIs only accepted my second and fourth lines of reasoning, that humans are valuable because our brains endow us with special modes of thought, we could end up enslaved in something like the Matrix, with our minds being used to do whatever weird cognitive tasks our machine overlords couldn’t (easily) do by themselves. My real purpose here is to show that the annihilation of humanity by a vastly stronger form of life is not a foregone conclusion.
This essay about the concept of “slack” supports the possibility that AIs might believe humans, as inferior as we are, might have unforeseen advantages, and therefore keep us around to make civilization as a whole more resilient. https://slatestarcodex.com/2020/05/12/studies-on-slack/