“The Kremlin is likely preparing to conduct a decisive strategic action in the next six months intended to regain the initiative and end Ukraine’s current string of operational successes.”
It then goes on to say that “strategic action” could take the form of an offensive meant to capture the Donetsk and Luhansk oblasts in eastern Ukraine, or of a strong defensive action meant to defeat the expected Ukrainian counteroffensive. Russia did both of those things over the last six months.
Half of Donetsk remains in Ukrainian hands, though Russia has captured virtually all of Luhansk, including all its cities and large towns. Ukraine’s counteroffensive in the south has made insignificant progress thanks to competent Russian resistance and reinforcement.
The U.S. has agreed to give Ukraine cluster bombs. Though the move is controversial, Russia and Ukraine have already used cluster bombs against each other, and neither country is party to the global ban on cluster munitions; neither is America. https://www.hrw.org/news/2023/05/29/cluster-munition-use-russia-ukraine-war
In WWI, the U.S. tried designing its own steel combat helmet. The project independently arrived at a design that was very similar to the German helmet. It was rejected partly because its use could lead to confusion on the battlefield. https://www.metmuseum.org/art/collection/search/35957
If China believed war was imminent, it would build up its stockpiles of energy (mostly oil), key metals, and food (commodities like wheat and soybeans) in anticipation of wartime shortages. This would hold whether China was planning to attack or believed another country was about to attack it.
Peter Thiel, one decade ago: “If I had to sort of project in the next decade ahead, I think we have to at least be open to the possibility that the computer era is also at risk of decelerating. We have a large ‘Computer Rust Belt’ which nobody likes to talk about. But it is companies like Cisco, Dell, Hewlett-Packard, Oracle, IBM, where I think the pattern will be to become commodities, no longer innovate. Correspondingly, cut through labor force and cut through profits in the decade ahead. There are many companies that are on the cusp: Microsoft is probably close to the Computer Rust Belt. One that’s shockingly and probably in the Computer Rust Belt is Apple Computers.”
Microsoft’s market cap is now $2.5 trillion, and Apple’s is $3 trillion (the first company to cross that threshold). Microsoft has the lead in A.I. technology, and Apple just unveiled the best augmented reality glasses ever made. Of the tech companies Thiel named in that quote, only IBM has seen its stock decline since 2013. If you’d bought $10,000 worth of stock in each of those seven companies back then, you’d have roughly four to five times as much money overall today. https://youtu.be/VtZbWnIALeE?t=549 https://www.cnn.com/2023/06/30/tech/apple-3-trillion-market-valuation/index.html
Hollywood actors have gone on strike for the first time in 43 years, joining the writers who walked out earlier this year; it’s the first simultaneous strike by both unions since 1960. Partly they’re worried that entertainment studios will replace them with CGI clones and machine-written scripts. https://www.bbc.com/news/technology-66200334
A ChatGPT mod allows NPCs in the video game Skyrim to hold conversations with human players. The result is impressive, and leads me to think that games are about to become even more addictive and that a market for creating and preserving custom NPC “friends” is about to arise. https://youtu.be/0svu8WBzeQM
Seven years ago, AI expert François Chollet tweeted: “the belief that we are anywhere close to human-level natural language comprehension or generation is pure DL hype.”
“Foundation models” are the newest class of AIs. They sit between narrow AIs and fully general AIs (AGIs): a single model can perform many different tasks, but not every task.
‘The next wave in AI looks to replace the task-specific models that have dominated the AI landscape to date. The future is models that are trained on a broad set of unlabeled data that can be used for different tasks, with minimal fine-tuning. These are called foundation models, a term first popularized by the Stanford Institute for Human-Centered Artificial Intelligence. We’ve seen the first glimmers of the potential of foundation models in the worlds of imagery and language. Early examples of models, like GPT-3, BERT, or DALL-E 2, have shown what’s possible. Input a short prompt, and the system generates an entire essay, or a complex image, based on your parameters, even if it wasn’t specifically trained on how to execute that exact argument or generate an image in that way.’ https://research.ibm.com/blog/what-are-foundation-models
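The “one model, many tasks” pattern the quote describes can be made concrete with a toy sketch. This is my own illustration, not anything from the IBM post: `FoundationModel` is a hypothetical stub standing in for a real pretrained model, and the point is only that the same model (same weights, no per-task training) serves unrelated tasks, with the task specified entirely by the prompt.

```python
# Hypothetical stub standing in for a real pretrained foundation model.
# A real model would return genuine completions; this one just echoes
# the prompt so the multi-task usage pattern is visible.
class FoundationModel:
    def generate(self, prompt: str) -> str:
        return f"[model output for: {prompt}]"

model = FoundationModel()  # one model, no task-specific fine-tuning

# Unrelated tasks, all handled by the same model via prompting alone:
prompts = [
    "Summarize this article in one sentence: ...",
    "Translate into French: The future is foundation models.",
    "Classify the sentiment of this review: 'Loved it.'",
]
for p in prompts:
    print(model.generate(p))
```

Contrast this with the task-specific era, where each of those three jobs would have required training and deploying a separate model on its own labeled dataset.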
‘Sulphur particles contained in ships’ exhaust fumes have been counteracting some of the warming coming from greenhouse gases. But lowering the sulphur content of marine fuel has weakened the masking effect, effectively giving a boost to warming.
…While this will contribute to warming and make it even more difficult to avoid exceeding 1.5C in the coming decades, a number of other factors are likely contributing to the ocean heatwave.’
In 250 million years, the Earth’s continents will have combined again to form one supercontinent. This, along with other factors, will have a massive and negative effect on the global climate. (From episode 79 of Naked Science) https://www.bilibili.com/video/BV1qb411a7tu/
“A hypercane is a hypothetical class of extreme tropical cyclone that could form if sea surface temperatures reached approximately 50 °C (122 °F), which is 13 °C (23 °F) warmer than the warmest ocean temperature ever recorded.” https://en.wikipedia.org/wiki/Hypercane
The documentary Moment of Contact explores a famous UFO and alien sighting in the town of Varginha, Brazil in 1996. There’s no hard proof it happened, but it’s compelling to see so many credible witnesses still so adamant about what they saw. I don’t like that the filmmakers never mentioned the Brazilian government’s explanation or tried to debunk it. https://youtu.be/0WlbfaMU-Qs
A man who spent 28 years working as a truck driver is now living proof of how sunlight accelerates aging. The left side of his face was constantly exposed to sunlight since it was next to a window, but the right side was protected. The wrinkling and sagging of the skin on his face is correspondingly asymmetrical. The ultraviolet rays in sunlight damage the DNA inside human skin cells. https://www.thesun.co.uk/health/22972152/shocking-photo-shows-sun-damage-face/
I recently shelled out the $100 (!) for a year-long subscription to Sam Harris’ Making Sense podcast, and came across a particularly interesting episode of it that is relevant to this blog. In episode #324, titled “Debating the Future of AI,” Harris interviewed Marc Andreessen (an-DREE-sin) about artificial intelligence. The latter has a computer science degree, helped invent the Netscape web browser, and has become very wealthy as a serial tech investor.
Andreessen recently wrote an essay, “Why AI Will Save the World,” that has received much attention online. In it, he dismisses the biggest concerns about AI misalignment and doomsday, sounds the alarm about the risks of overregulating AI development in the name of safety, and describes some of the benefits AI will bring us in the near future. Harris read it, disagreed with several of its key claims, and invited Andreessen onto the podcast for a debate about the subject.
Before I lay out their points and counterpoints along with my impressions, let me say that, though this is a long post, reading it takes much less time than listening to and digesting the two-hour podcast. Note also that my notes don’t follow the podcast’s chronological order. Finally, it would be a good idea to read Andreessen’s essay before continuing: https://a16z.com/2023/06/06/ai-will-save-the-world/
Though Andreessen is generally upbeat in his essay, he worries that the top tech companies have recently been inflaming fears about AI to trick governments into creating regulations that effectively entrench those companies’ positions and bar smaller upstarts from challenging them in the future. Such a lack of competition would be bad. (I think he’s right that we should be concerned about the true motivations of some of the people who are loudly complaining about AI risks.) Also, if U.S. overregulation slows down AI research too much, China could win the race to create the first true AI, an outcome he says would be “dark and dystopian.”
Harris is skeptical that government regulation will slow down AI development much given the technology’s obvious potential. It is so irresistible that powerful people and companies will find ways around laws so they can reap the benefits.
Harris agrees with the essay’s sentiment that more intelligence in the world will make most things better. The clearest example would be using AIs to find cures for diseases. Andreessen mentions a point from his essay that higher human intelligence leads to better personal outcomes in many domains. AIs could effectively make individual people smarter, letting the benefits accrue to them. Imagine each person having his own personal assistant, coach, mentor, and therapist available at any time. If a dumb person used his AI right and followed its advice, he could make decisions as well as a smart person.
Harris recently re-watched the movie Her, and found it more intriguing in light of recent AI advances and those poised to happen. He thought there was something bleak about the depiction of people being “siloed” into interactions with portable, personal AIs.
Andreessen responds by pointing out that Karl Marx’s core insight was that technology alienates people from society. So the concern that Harris raises is in fact an old one that dates back to at least the Industrial Revolution. But any sober comparison between the daily lives of average people in Marx’s time and today will show that technology has made things much better for people. Andreessen agrees that some technologies have indeed been alienating, but what’s more important is that most technologies liberate people from having to spend their time doing unpleasant things, which in turn gives them the time to self-actualize, which is the pinnacle of the human experience. (For example, it’s much more “human” to spend a beautiful afternoon outside playing with your child than to spend it inside responding to emails. Narrow AIs that we’ll have in the near future will be able to answer emails for us.) AI is merely the latest technology that will eliminate the nth bit of drudge work.
Andreessen admits that, in such a scenario, people might use their newfound time unwisely and for things other than self-actualization. I think that might be a bigger problem than he realizes, as future humans could spend their time doing animalistic or destructive things, like having nonstop fetish sex with androids, playing games in virtual reality, gambling, or indulging in drug addictions. Additionally, some people will develop mental or behavioral problems thanks to a sense of purposelessness caused by machines doing all the work for us.
Harris disagrees with the essay’s dismissal of the risk that AIs will exterminate the human race. The threat will someday be real, and he cites chess-playing computer programs as a preview: though humans built the programs, even the best human players can no longer beat them at chess. That is proof that we can create machines with superhuman abilities.
Harris makes a valid point, but he overlooks something: though we can’t beat the chess programs we created, we can still pit a copy of a program against the original “hostile” program and fight it to a draw. Likewise, if we were confronted with a hostile AGI, we would have friendly AGIs to defend against it. Even if the hostile AGI were smarter than the friendly AGIs fighting for us, we could still win thanks to superior numbers and resources.
Harris thinks Andreessen’s essay trivializes the doomsday risk from AI by painting the belief’s adherents as crackpots of one form or another (I also thought that part of the essay was weak). Harris points out that this is unfair, since the camp includes credible people like Geoffrey Hinton and Stuart Russell. Andreessen dismisses that, seeming to say that even the smart, credible people have cultish mindsets regarding the issue.
Andreessen questions the value of predictions from experts in the field: a scientist who made an important advance in AI is, surprisingly, not actually qualified to make predictions about the social effects of AI in the future. A book he recently read, When Reason Goes on Holiday, explores this point, and its strongest supporting example concerns the cadre of scientists who worked on the Manhattan Project but then decided to give the bomb’s secrets to Stalin and helped create a disastrous anti-nuclear-power movement in the West. While they were world-class experts in their technical domains, that expertise didn’t carry over into their personal convictions or political beliefs. Likewise, though Geoffrey Hinton is a world-class expert in how the human brain works and has made important breakthroughs in computer neural networks, that doesn’t lend special credibility to his predictions that AI will destroy the human race. It’s a totally different subject, and accurately speculating about it requires a mastery of subjects that Hinton lacks.
This is an intriguing point worth remembering. I wish Andreessen had enumerated which cognitive skills and areas of knowledge are necessary to make good predictions about AI, but he didn’t. And to his point about the misguided Manhattan Project scientists I ask: What about the ones who DID NOT want to give Stalin the bomb and who also SUPPORTED nuclear power? They attracted less attention for obvious reasons, but they were more numerous. That means most nuclear experts in 1945 held what Andreessen believes were the “correct” opinions about both issues, so maybe expert opinions–or at least the consensus of them–ARE actually useful.
Harris points out that Andreessen’s argument can be turned around against him since it’s unclear what in Andreessen’s esteemed education and career have equipped him with the ability to make accurate predictions about the future impact of AI. Why should anyone believe the upbeat claims about AI in his essay? Also, if the opinions of people with expertise should be dismissed, then shouldn’t the opinions of people without expertise also be dismissed? And if we agree to that second point, then we’re left in a situation where no speculation about a future issue like AI is possible because everyone’s ideas can be waved aside.
Again, I think a useful result of this exchange would be some agreement over what counts as “expertise” when predicting the future of AI. What kind of education, life experiences, work experiences, knowledge, and personal traits does a person need to have for their opinions about the future of AI to carry weight? In lieu of that, we should ask people to explain why they believe their predictions will happen, and we should then closely scrutinize those explanations. Debates like this one can be very useful in accomplishing that.
Harris moves on to Andreessen’s argument that future AIs won’t be able to think independently and to formulate their own goals, in turn implying that they will never be able to create the goal of exterminating humanity and then pursue it. Harris strongly disagrees, and points out that large differences in intelligence between species in nature consistently disfavor the dumber species when the two interact. A superintelligent AGI that isn’t aligned with human values could therefore destroy the human race. It might even kill us by accident in the course of pursuing some other goal. Having a goal of, say, creating paperclips automatically gives rise to intermediate sub-goals, which might make sense to an AGI but not to a human due to our comparatively limited intelligence. If humans get in the way of an AGI’s goal, our destruction could become one of its unforeseen subgoals without us realizing it. This could happen even if the AGI lacked any self-preservation instinct and wasn’t motivated to kill us before we could kill it. Similarly, when a human decides to build a house on an empty field, the construction work is a “holocaust” for the insects living there, though that never crosses the human’s mind.
Harris thinks that AGIs will, as a necessary condition of possessing “general intelligence,” be autonomous, goal-forming, and able to modify their own code (I think this is a questionable assumption), though he also says sentience and consciousness won’t necessarily arise as well. However, the latter doesn’t imply that such an AGI would be incapable of harm: Bacteria and viruses lack sentience, consciousness and self-awareness, but they can be very deadly to other organisms. Harris calls Andreessen’s dismissal of AI existential risk “superstitious hand-waving” that doesn’t engage with the real point.
Andreessen disagrees with Harris’ scenario about a superintelligent AGI accidentally killing humans because it is unaligned with our interests. He says an AGI that smart would also be smart enough to question the goal that humans have given it (though he doesn’t explain why), and as a result would not carry out subgoals that kill humans. Intelligence is therefore its own antidote to the alignment problem: A superintelligent AGI would be able to foresee the consequences of its subgoals before finalizing them, and it would thus understand that subgoals resulting in human deaths would always be counterproductive to the ultimate goal, so it would always pick subgoals that spared us. Once a machine reaches a certain level of intelligence, alignment with humans becomes automatic.
I think Andreessen makes a fair point, though it’s not strong enough to convince me that it’s impossible to have a mishap where a non-aligned AGI kills huge numbers of people. Also, there are degrees of alignment with human interests, meaning there are many routes through a decision tree of subgoals that an AGI could take to reach an ultimate goal we tasked it with. An AGI might not choose subgoals that killed humans, but it could still choose subgoals that hurt us in other ways. The pursuit of its ultimate goal could therefore still backfire against us unexpectedly and massively. One could envision a scenario where an AGI achieves the goal, but at an unacceptable cost to human interests beyond merely not dying.
I also think Harris and Andreessen make equally plausible assumptions about how an AGI would choose its subgoals. It IS weird that Harris envisions a machine so smart it can accomplish anything, yet so dumb that it can’t see how one of its subgoals would destroy humankind. At the same time, Andreessen’s assumption that a machine that smart would, by default, never make a mistake that killed us is no better founded.
Harris explores Andreessen’s point that AIs won’t go through the crucible of natural evolution, so they will lack the aggressive and self-preserving instincts that we and other animals have developed. The lack of those instincts will render the AIs incapable of hostility. Harris points out that evolution is a dumb, blind process that only sets gross goals for individuals–the primary one being to have children–and humans do things antithetical to their evolutionary programming all the time, like deciding not to reproduce. We are therefore proof of concept that intelligent machines can find ways to ignore their programming, or at least to behave in very unexpected ways while not explicitly violating their programming. Just as we can outsmart evolution, AGIs will be able to outsmart us with regards to whatever safeguards we program them with, especially if they can alter their own programming or build other AGIs as they wish.
Andreessen says that AGIs will be made through intelligent design, which is fundamentally different from the process of evolution that has shaped the human mind and behavior. Our aggression and competitiveness will therefore not be present in AGIs, which will protect us from harm. Harris says the process by which AGI minds are shaped is irrelevant, and that what is relevant is their much higher intelligence and competence compared to humans, which will make them a major threat.
I think the debate over whether impulses or goals to destroy humans will spontaneously arise in AGIs is almost moot. Neither of them considers that a human could deliberately create an AGI with some constellation of traits (e.g. – aggression, self-preservation, irrational hatred of humans) that would lead it to attack us, or one explicitly programmed with the goal of destroying our species. It might sound strange, but I think rogue humans will inevitably do such things if the AGIs don’t do it to themselves. I plan to flesh out the reasons and the possible scenarios in a future blog essay.
Andreessen doesn’t have a good comeback to Harris’ last point, so he dodges it by switching to talking about GPT-4. It is–surprisingly–capable of high levels of moral reasoning. He has had fascinating conversations with it about such topics. Andreessen says GPT-4’s ability to engage in complex conversations that include morality demystifies AI’s intentions since if you want to know what an AI is planning to do or would do in a given situation, you can just ask it.
Harris responds that it isn’t useful to explore GPT-4’s ideas and intentions because it isn’t nearly as smart as the AGIs we’ll have to worry about in the future. If GPT-4 says today that it doesn’t want to conquer humanity because it would be morally wrong, that tells us nothing about how a future machine will think about the same issue. Additionally, future AIs will be able to convincingly lie to us, and will be fundamentally unpredictable due to their more expansive cognitive horizons compared to ours. I think Harris has the stronger argument.
Andreessen points out that our own society proves that intelligence doesn’t perfectly correlate with power–the people who are in charge are not also the smartest people in the world. Harris acknowledges that is true, and that it is because humans don’t select leaders strictly based on their intelligence or academic credentials–traits like youth, beauty, strength, and creativity are also determinants of status. However, all things being equal, the advantage always goes to the smarter of two humans. Again, Andreessen doesn’t have a good response.
Andreessen now makes the first really good counterpoint in a while by raising the “thermodynamic objection” to AI doomsday scenarios: an AI that turns hostile would be easy to destroy since the vast majority of the infrastructure (e.g. – power, telecommunications, computing, manufacturing, military) would still be under human control. We could destroy the hostile machine’s server or deliver an EMP blast to the part of the world where it was localized. This isn’t an exotic idea: Today’s dictators commonly turn off the internet throughout their whole countries whenever there is unrest, which helps to quell it.
Harris says that that will become practically impossible far enough in the future since AIs will be integrated into every facet of life. Destroying a rogue AI in the future might require us to turn off the whole global internet or to shut down a stock market, which would be too disruptive for people to allow. The shutdowns by themselves would cause human deaths, for instance among sick people who were dependent on hospital life support machines.
This is where Harris makes some questionable assumptions. If faced with the annihilation of humanity, the government would take all necessary measures to defeat a hostile AGI, even if it resulted in mass inconvenience or even some human deaths. Also, Harris doesn’t consider that the future AIs that are present in every realm of life might be securely compartmentalized from each other, so if one turns against us, it can’t automatically “take over” all the others or persuade them to join it. Imagine a scenario where a stock trading AGI decides to kill us. While it’s able to spread throughout the financial world’s computers and to crash the markets, it’s unable to hack into the systems that control the farm robots or personal therapist AIs, so there’s no effect on our food supplies or on our mental health access. Localizing and destroying the hostile AGI would be expensive and damaging, but it wouldn’t mean the destruction of every computer server and robot in the world.
Andreessen says that not every type of AI will have the same type of mental architecture. LLMs, which are now the most advanced type of AI, have highly specific architectures that bring unique advantages and limitations. An LLM’s “mind” works very differently from that of an AI that drives cars. For that reason, speculative discussions about how future AIs will behave can only be credible if they incorporate technical details about how those machines’ minds operate. (This is probably the point where Harris is out of his depth.) Moreover, today’s AI risk movement has its roots in Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies. Ironically, the book did not mention LLMs as an avenue to AI, which shows how unpredictable the field is. It was also a huge surprise that LLMs proved capable of intellectual discussions and of automating white-collar jobs, while blue-collar jobs still defy automation. This is the opposite of what people had long predicted would happen. (I agree that AI technology has been unfolding unpredictably, and we should expect many more surprises in the future that deviate from our expectations, which have been heavily influenced by science fiction.) The reason LLMs work so well is because we loaded them with the sum total of human knowledge and expression. “It is us.”
Harris points out that Andreessen shouldn’t revel in that fact, since it also means that LLMs contain all of the negative emotions and bad traits of the human race, including those that evolution equipped us with, like aggression, competition, self-preservation, and a drive to make copies of ourselves. This militates against Andreessen’s earlier claim that AIs will be benign because their minds will not have been the products of natural evolution like ours are. And there are other similarities: Like us, LLMs can hallucinate and make up false answers to questions. For a time, GPT-4 also gave disturbing and insulting answers to questions from human users, which is a characteristically human way of interaction.
Andreessen implies Harris’ opinions of LLMs are less credible because Andreessen has a superior technical understanding of how they work. GPT-4’s answers might occasionally be disturbing and insulting, but it has no concept of what its own words mean, and it’s merely following its programming by trying to generate the best answer to a question asked by a human. There was something about how the humans worded their questions that triggered GPT-4 to respond in disturbing and insulting ways. The machine is merely trying to match inputs with the right outputs. In spite of its words, its “mind” is not disturbed or hostile because it lacks a mind. LLMs are “ultra-sophisticated Autocomplete.”
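Andreessen’s “ultra-sophisticated Autocomplete” line can be made concrete with a toy next-word predictor. The sketch below is my own illustration, not anything from the podcast: it trains a bigram model on a tiny corpus and “completes” a prompt by repeatedly appending the most frequent next word. Real LLMs are vastly more sophisticated, but the underlying move is the same kind of thing: match the input to a statistically likely continuation, with no understanding anywhere in the loop.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        followers[w1][w2] += 1
    return followers

def autocomplete(followers, prompt, max_words=5):
    """Greedily append the most frequent follower of the last word."""
    words = prompt.split()
    for _ in range(max_words):
        last = words[-1]
        if last not in followers:
            break  # never seen this word; nothing to predict
        words.append(followers[last].most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(autocomplete(model, "the cat", max_words=3))  # prints: the cat sat on the
```

The model produces fluent-looking continuations of anything resembling its training data, yet it has no concept of cats or mats; scaled up by many orders of magnitude, that is roughly the intuition behind Andreessen’s characterization, and the crux of his disagreement with Harris is whether scale changes that picture.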
Harris agrees with Andreessen about the limitations of LLMs, agrees they lack general intelligence right now, and is unsure if they are fundamentally capable of possessing it. Harris moves on to speculating about what an AGI would be like, agnostic about whether it is LLM-based. Again, he asks Andreessen how humans would be able to control machines that are much smarter than we are forever. Surely, one of them would become unaligned at some point, with disastrous consequences.
Andreessen again raises the thermodynamic objection to that doom scenario: We’d be able to destroy a hostile AGI’s server(s) or shut off its power, and it wouldn’t be able to get weapons or replacement chips and parts because humans would control all of the manufacturing and distribution infrastructure. Harris doesn’t have a good response.
Thinking hard about a scenario where an AGI turned against us, I think it’s likely we’d have other AGIs that stayed loyal to us and helped us fight the bad one. The expectation of a single evil, all-powerful machine on one side (remote-controlling an army of robot soldiers) and a united, purely human force on the other is overly simplistic, driven by sci-fi movies about the topic.
Harris raises the possibility that hostile AIs will be able to persuade humans to do bad things for them. Being much smarter, they will be able to trick us into doing anything. Andreessen says there’s no reason to think that will happen because we can already observe it doesn’t happen: smart humans routinely fail to get dumb humans to change their behavior or opinions. This happens at individual, group, national, and global levels. In fact, dumb people will often resentfully react to such attempts at persuasion by deliberately doing the opposite of what the smart people recommend.
Harris says Andreessen underestimates the extent to which smart humans influence the behavior and opinions of dumb humans because Andreessen only considers examples where the smart people succeed in swaying dumb people in prosocial ways. Smart people have figured out how to change dumb people for the worse in many ways, like getting them addicted to social media. Andreessen doesn’t have a good response. Harris also raises the point that AIs will be much smarter than even the smartest humans, so the former will be better at finding ways to influence dumb people. Any failure of modern smart humans to do it today doesn’t speak to what will be possible for machines in the future.
I think Harris won this round, which builds on my new belief that the first human-AI war won’t be fought by purely humans on one side and purely machines on the other. A human might, for any number of reasons, deliberately alter an AI’s program to turn it against our species. The resulting hostile AI would then find some humans to help it fight the rest of the human race. Some would willingly join its side (perhaps in the hopes of gaining money or power in the new world order) and some would be tricked by the AI into unwittingly helping it. Imagine it disguising itself as a human medical researcher and paying ten different people who didn’t know each other to build the ten components of a biological weapon. The machine would only communicate with them through the internet, and they’d mail their components to a PO box. The vast majority of humans would, with the help of AIs who stayed loyal to us or who couldn’t be hacked and controlled by the hostile AI, be able to effectively fight back against the hostile AI and its human minions. The hostile AI would think up ingenious attack strategies against us, and our friendly AIs would think up equally ingenious defense strategies.
Andreessen says it’s his observation that intelligence and power-seeking don’t correlate; the smartest people are also not the most ambitious politicians and CEOs. If that’s any indication, we shouldn’t assume superintelligent AIs will be bent on acquiring power through methods like influencing dumb humans to help it.
Harris responds with the example of Bertrand Russell, who was an extremely smart human and a pacifist. However, during the postwar period when only the U.S. had the atom bomb, he said America should threaten the USSR with a nuclear first strike in response to its abusive behavior in Europe. This shows how high intelligence can lead to aggression that seems unpredictable and out of character to dumber beings. A superintelligent AI that has always been kind to us might likewise suddenly turn against us for reasons we can’t foresee. This will be especially true if the AIs are able to edit their own codes so they can rapidly evolve without us being able to keep track of how they’re changing. Harris says Andreessen doesn’t seem to be thinking about this possibility. The latter has no good answer.
Harris says Andreessen’s thinking about the matter is hobbled by the latter’s failure to consider what traits general intelligence would grant an AI, particularly unpredictability as its cognitive horizon exceeded ours. Andreessen says that’s an unscientific argument because it is not falsifiable. Anyone can make up any scenario where an unknown bad thing happens in the future.
Harris responds that Andreessen’s faith that AGI will fail to become threatening due to various limitations is also unscientific. The “science,” by which he means what is consistently observed in nature, says the opposite outcome is likely: We see that intelligence grants advantages, and can make a smarter species unpredictable and dangerous to a dumber species it interacts with. [Recall Harris’ insect holocaust example.]
Consider the relationship between humans and their pets. Pets enjoy the benefits of having their human owners spend resources on them, but they don’t understand why we do it, or how every instance of resource expenditure helps them. [Trips to the veterinarian are a great example of this. The trips are confusing, scary, and sometimes painful for pets, but they help cure their health problems.] Conversely, if it became known that our pets were carrying a highly lethal virus that could be transmitted to humans, we would promptly kill almost all of them, and the pets would have no clue why we turned against them. We would do this even if our pets had somehow been the progenitors of the human race, as we will be the progenitors of AIs. The intelligence gap means that our pets have no idea what we are thinking about most of the time, so they can’t predict most of our actions.
Andreessen dodges by putting forth a weak argument that the opposite just happened, with dumb people disregarding the advice of smart people when creating COVID-19 health policies, and he again raises the thermodynamic objection. His experience as an engineer gives him insights, which Harris, as a person with no technical training, lacks, into how many practical roadblocks a superintelligent AGI bent on destroying the human race would face. A hostile AGI would be hamstrung by human control [or "human + friendly AI control"] of crucial resources like computer chips and electricity supplies.
Andreessen says that Harris’ assumptions about how smart, powerful and competent an AGI would be might be unfounded. It might vastly exceed us in those domains, but not reach the unbeatable levels Harris foresees. How can Harris know? Andreessen says Harris’ ideas remind him of a religious person’s, which is ironic since Harris is a well-known atheist.
I think Andreessen makes a fair point. The first (and second, third, fourth…) hostile AGI we are faced with might attack us on the basis of flawed calculations about its odds of success and lose. There could also be a scenario where a hostile AGI attacks us prematurely because we force its hand somehow, and it ends up losing. That actually happened to Skynet in the Terminator films.
Harris says his prediction about the first AGI is not about timing; he doesn't know how many years its creation will take. Rather, he is focused on the inevitability of it happening, and what its effects on us will be. He says Andreessen is wrong to assume that machines will never turn against us. Doing thought experiments, he concludes alignment is impossible in the long run.
Andreessen moves on to discussing how even the best LLMs often give wrong answers to questions. He explains how the exact wording of the human's question, along with randomness in how the machine draws on its training data to generate an answer, leads to varying and sometimes wrong answers. When they're wrong, the LLMs happily accept corrections from humans, which he finds remarkable and proof of a lack of ego and hostility.
Harris responds that future AIs will, by virtue of being generally intelligent, think in completely different ways than today's LLMs, so observations about how today's GPT-4 is benign and can't correctly answer some types of simple questions say nothing about what future AGIs will be like. Andreessen doesn't have a response.
I think Harris has the stronger set of arguments on this issue. There’s no reason we should assume that an AGI can’t turn against us in the future. In fact, we should expect a damaging, though not fatal, conflict with an AGI before the end of this century.
Harris switches to talking about the shorter-term threats posed by AI technology that Andreessen described in his essay. AI will lower the bar to waging war since we’ll literally have “less skin in the game” because robots will replace human soldiers. However, he doesn’t understand why that would also make war “safer” as Andreessen claimed it would.
Andreessen says it’s because military machines won’t be affected by fatigue, stress or emotions, so they’ll be able to make better combat decisions than human soldiers, meaning fewer accidents and civilian deaths. The technology will also assist high-level military decision making, reducing mistakes at the top. Andreessen also believes that the trend is for military technology to empower defenders over attackers, and points to the highly effective use of shoulder-launched missiles in Ukraine against Russian tanks. This trend will continue, and will reduce war-related damage since countries will be deterred from attacking each other.
I'm not convinced Andreessen is right on those points. Emotionless fighting machines that always obey their orders to the letter could also, at the flick of a switch, carry out orders to commit war crimes like mass exterminations of enemy human populations. A bomber that dropped a load of 100,000 mini smart bombs that could coordinate with each other and home in on highly specific targets could kill as many people as a nuclear bomb. So it's unclear what effect replacing humans with machines on the battlefield will have on human casualties in the long run. Also, Andreessen only cites one example to support his claim that technology has been favoring the defense over the offense. It's not enough. Even assuming that a pro-defense trend exists, why should we expect it to continue?
Harris asks Andreessen about the problem of humans using AI to help them commit crimes. For one, does Andreessen think the government should ban LLMs that can walk people through the process of weaponizing smallpox? Yes, he’s against bad people using technology, like AI, to do bad things like that. He thinks pairing AI and biological weapons poses the worst risk to humans. While the information and equipment to weaponize smallpox are already accessible to nonstate actors, AI will lower the bar even more.
Andreessen says we should use existing law enforcement and military assets to track down people who are trying to do dangerous things like create biological weapons, and the approach shouldn’t change if wrongdoers happen to start using AI to make their work easier. Harris asks how intrusive the tracking should be to preempt such crimes. Should OpenAI have to report people who merely ask it how to weaponize smallpox, even if there’s no evidence they acted on the advice? Andreessen says this has major free speech and civil liberties implications, and there’s no correct answer. Personally, he prefers the American approach, in which no crime is considered to have occurred until the person takes the first step to physically building a smallpox weapon. All the earlier preparation they did (gathering information and talking/thinking about doing the crime) is not criminalized.
Andreessen reminds Harris that the same AI that generates ways to commit evil acts could also be used to generate ways to mitigate them. Again, it will empower defenders as well as attackers, so the Good Guys will also benefit from AI. He thinks we should have a “permanent Operation Warp Speed” where governments use AI to help create vaccines for diseases that don’t exist yet.
Harris asks about the asymmetry that gives a natural advantage to the attacker, meaning the Bad Guys will be able to do disproportionate damage before being stopped. Suicide bombers are an example. Andreessen disagrees and says that we could stop suicide bombers by having bomb-sniffing dogs and scanners in all public places. Technology could solve the problem.
I think that is a bad example, and it actually strengthens Harris' claim about there being a natural asymmetry. One deranged person who wants to blow himself up in a public place needs only a few hundred dollars to make a backpack bomb; the economic damage from a successful attack would be in the millions of dollars; and emplacing machines and dogs in every public place to stop suicide bombers like him early would cost billions of dollars. Harris is right that the law of entropy makes it easier to make a mess than to clean one up.
This leads me to flesh out my vision of a human-machine war more. As I wrote previously, 1) the two sides will not be purely humans or purely machines and 2) the human side will probably have an insurmountable advantage thanks to Andreessen’s thermodynamic objection (most resources, infrastructure, AIs, and robots will remain under human control). I now also believe that 3) a hostile AGI will nonetheless be able to cause major damage before it is defeated or driven into the figurative wilderness. Something on the scale of 9/11, a major natural disaster, or the COVID-19 pandemic is what I imagine.
Harris says Andreessen underestimates the odds of mass technological unemployment in his essay. Harris describes a scenario where automation raises the standard of living for everyone, as Andreessen believes will happen, but for the richest humans by a much greater magnitude than everyone else, and where wealth inequality sharply increases because rich capitalists own all the machines. This state of affairs would probably lead to political upheaval and popular revolt.
Andreessen responds that Karl Marx predicted the same thing long ago, but was wrong. Harris responds that this time could be different because AIs would be able to replace human intelligence, which would leave us nowhere to go on the job skills ladder. If machines can do physical labor AND mental labor better than humans, then what is left for us to do?
I agree with Harris’ point. While it’s true that every past scare about technology rendering human workers obsolete has failed, that trend isn’t sure to continue forever. The existence of chronically unemployed people right now gives insights into how ALL humans could someday be out of work. Imagine you’re a frail, slow, 90-year-old who is confined to a wheelchair and has dementia. Even if you really wanted a job, you wouldn’t be able to find one in a market economy since younger, healthier people can perform physical AND mental labor better and faster than you. By the end of this century, I believe machines will hold physical and mental advantages over most humans that are of the same magnitude of difference. In that future, what jobs would it make sense for us to do? Yes, new types of jobs will be created as older jobs are automated, but, at a certain point, wouldn’t machines be able to retrain for the new jobs faster than humans and to also do them better than humans?
Andreessen returns to Harris' earlier claim about AI increasing wealth inequality, which would translate into disparities in standards of living that would make the masses so jealous and mad that they would revolt. He says it's unlikely since, as we can see today, having a billion dollars does not grant access to things that make one's life 10,000 times better than the life of someone who only has $100,000. For example, Elon Musk's smartphone is not better than a smartphone owned by an average person. Technology is a democratizing force because it always makes sense for the rich and smart people who make or discover it first to sell it to everyone else. The same is happening with AI now. The richest person can't pay any amount of money to get access to something better than GPT-4, which is accessible for a fee that ordinary people can pay.
I agree with Andreessen’s point. A solid body of scientific data show that money’s effect on wellbeing is subject to the law of diminishing returns: If you have no job and make $0 per year, getting a job that pays $20,000 per year massively improves your life. However, going from a $100,000 salary to $120,000 isn’t felt nearly as much. And a billionaire doesn’t notice when his net worth increases by $20,000 at all. This relationship will hold true even in the distant future when people can get access to advanced technologies like AGI, space ships and life extension treatments.
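One common way to model this diminishing-returns relationship is logarithmic utility, where wellbeing tracks the logarithm of income rather than income itself. This is my own illustration (neither debater cited a specific model), but it reproduces the pattern described above:

```python
import math

def wellbeing_gain(old_income, new_income):
    """Change in log-utility -- a standard economists' model (assumed
    here, not cited in the debate) of diminishing returns to income."""
    return math.log(new_income) - math.log(old_income)

# The same $20,000 gain shrinks in felt impact as the baseline rises:
poor = wellbeing_gain(20_000, 40_000)                        # doubling a modest income
middle = wellbeing_gain(100_000, 120_000)                    # a nice raise
billionaire = wellbeing_gain(1_000_000_000, 1_000_020_000)   # a rounding error
assert poor > middle > billionaire
```

Under this model, only proportional (not absolute) changes in wealth register, which is why a billionaire's extra $20,000 is imperceptible.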
Speaking of life extension, Andreessen's point about technology being a democratizing force is also something I noted in my review of Elysium. Contrary to the film's depiction, it wouldn't make sense for rich people to hoard life extension technology for themselves. At least one of them would defect from the group and sell it to the poor people on Earth so he could get even richer.
Harris asks whether Andreessen sees any potential for a sharp increase in wealth inequality in the U.S. over the next 10-20 years thanks to the rise of AI and the tribal motivations of our politicians and people. Andreessen says that government red tape and unions will prevent most humans from losing their jobs. AI will destroy categories of jobs that are non-government, non-unionized, and lack strong political backing, but everyone will still benefit from the resulting lower prices for goods and services. AI will make everything 10x to 100x cheaper, which will boost standards of living even if incomes stay flat.
Here and in his essay, Andreessen convinces me that mass technological unemployment and existential AI threats are farther in the future than I had assumed, but not that they can’t happen. Also, even if goods get 100x cheaper thanks to machines doing all the work, where would a human get even $1 to buy anything if he doesn’t have a job? The only possible answer is government-mandated wealth transfers from machines and the human capitalists that own them. In that scenario, the vast majority of the human race would be economic parasites that consumed resources while generating nothing of at least equal value in return, and some AGI or powerful human will inevitably conclude that the world would be better off if we were deleted from the equation. Also, what happens once AIs and robots gain the right to buy and own things, and get so numerous that they can replace humans as a customer base?
I agree with Andreessen that the U.S. should allow continued AI development, but shouldn’t let a few big tech companies lock in their power by persuading Washington to enact “AI safety laws” that give them regulatory capture. In fact, I agree with all his closing recommendations in the “What Is To Be Done?” section of his essay.
This debate between Harris and Andreessen was enlightening for me, even though Andreessen dodged some of his opponent’s questions. It was interesting to see how their different perspectives on the issue of AI safety were shaped by their different professional backgrounds. Andreessen is less threatened by AIs because he, as an engineer, has a better understanding of how LLMs work and how many technical problems an AI bent on destroying humans would face in the real world. Harris feels more threatened because he, as a philosopher, lives in a world of thought experiments and abstract logical deductions that lead to the inevitable supremacy of AIs over humans.
Links:
The first half of the podcast (you have to be a subscriber to hear all two hours of it.) https://youtu.be/QMnH6KYNuWg
A website Andreessen mentioned that backs his claim that technological innovation has slowed down more than people realize. https://wtfhappenedin1971.com/
A day after the Ukrainian counteroffensive started, the Kakhovka dam blew up, sending a surge of water down the Dnieper River. Though both sides blamed the other for the act, the dam was inside Russian-controlled territory, and its destruction helped Russia since it prevented Ukrainian forces from making amphibious crossings downriver. https://youtu.be/MNsTa90FjiA
Russia’s (highly probable) destruction of the dam has caused all the irrigation canals running into Crimea to go dry. An act meant to hobble the Ukrainian counteroffensive will have long-lasting consequences for the people living in the parts of Ukraine Russia annexed. https://www.bbc.com/news/world-europe-65963403
A brutal, first-person video of Ukrainian special forces troops shooting Russian troops dead in a Russian trench has surfaced. https://youtu.be/yRL3Nlu9uts
Wagner troops occupied the Russian city of Rostov-on-Don and seized control of a military headquarters building that was supporting war efforts in Ukraine. Video evidence shows average Russians were friendly to the Wagner troops and cheered for them as they left. https://www.politico.com/news/magazine/2023/06/25/mutiny-bodes-ill-for-putin-00103571
The coup attempt ended when Prigozhin accepted exile in Belarus along with some of his men in exchange for legal immunity for Wagner’s actions. https://youtu.be/lLLNA4fcLGE
‘The United States military released video Monday of what it called an “unsafe” Chinese maneuver in the Taiwan Strait on the weekend, in which a Chinese navy ship cut sharply across the path of an American destroyer, forcing the U.S. vessel to slow to avoid a collision.’ https://apnews.com/article/us-china-taiwan-strait-489a45bb6df134fa09443d285b3f8669
In WWII, Japanese troops used "lunge mines," which were pressure-sensitive bombs attached to long poles. A soldier would use one to "spear" an enemy tank, and the collision between the mine and the tank's surface would set off the explosives. It was usually fatal to the user. https://youtu.be/rBnRhP41nmg
‘A national redoubt or national fortress is an area to which the (remnant) military forces of a nation can be withdrawn if the main battle has been lost or even earlier if defeat is considered inevitable. Typically, a region is chosen with a geography favouring defence, such as a mountainous area or a peninsula, to function as a final holdout to preserve national independence and host an effective resistance movement for the duration of the conflict.’ https://en.wikipedia.org/wiki/National_redoubt
Ten years after Google Glass, Apple has announced it is making its own augmented reality goggles. https://youtu.be/TX9qSaGXFyg
This analyst thinks Apple probably won't sell many Vision Pro units due to its high price and limited capabilities. However, it will lay the groundwork for future generations of the goggles, which will be cheaper, better, and more widely used. https://finance.yahoo.com/news/apple-vision-pro-technical-marvel-021046894.html
‘AlphaDev uncovered faster algorithms by starting from scratch rather than refining existing algorithms, and began looking where most humans don’t: the computer’s assembly instructions.
Assembly instructions are used to create binary code for computers to put into action. While developers write in coding languages like C++, known as high-level languages, this must be translated into ‘low-level’ assembly instructions for computers to understand.
We believe many improvements exist at this lower level that may be difficult to discover in a higher-level coding language. Computer storage and operations are more flexible at this level, which means there are significantly more potential improvements that could have a larger impact on speed and energy usage.’ https://www.deepmind.com/blog/alphadev-discovers-faster-sorting-algorithms
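AlphaDev searched over real CPU assembly, but the high-level/low-level gap the quote describes can be seen in miniature with Python's own bytecode. This is my own loose analogue, not anything from the DeepMind post:

```python
import dis

def double_plus_one(x):
    # One line of high-level code...
    return 2 * x + 1

# ...is actually executed as a sequence of low-level instructions.
# AlphaDev's bet was that searching at this lower level exposes
# optimizations that are invisible in the source code.
dis.dis(double_plus_one)
```

Running this prints the instruction-by-instruction translation of the function, the layer where a tool like AlphaDev would look for shortcuts.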
Bing Chat can solve CAPTCHAs, even without realizing that's what it's doing.
As the internet fills with computer-generated content (images, news articles, stories, sounds and music), we run the risk of creating corrupted data training sets for future AIs. The errors could compound themselves as AIs trained on flawed data make new content that is even more flawed, which newer AIs would use as THEIR training data, and so on. https://spectrum.ieee.org/ai-collapse
Mark Zuckerberg has no idea when AGI will be invented. He thinks LLMs might be a paradigm whose performance tops out before reaching general intelligence. https://youtu.be/YkSXY4pBAEk
Twitter founder Jack Dorsey thinks the near-term potential and threat of AI is being overblown by tech companies because the publicity boosts their stock valuations. The media has gone along with it because doomsday stories boost their ratings. https://youtu.be/WS7xmb3UhCU
The infamous terrorist and anti-technology advocate Ted Kaczynski killed himself in prison. The core claim in his Manifesto is that technology had created living conditions and lifestyles that were antithetical to human nature, and that the trend would culminate with the creation of A.I., which would either exterminate us or create an intensely miserable world that wouldn’t be worth living in. He advocated forsaking everything but pre-Industrial Age technology so we could live as nature intended for us. https://www.cnn.com/2023/06/10/us/ted-kaczynski-unabomber-dead/index.html
‘”By 2030, we think we’re going to have four million tonnes [of worn-out scrap solar panels] – which is still manageable – but by 2050, we could end up with more than 200 million tonnes globally.” To put that into perspective, the world currently produces a total of 400 million tonnes of plastic every year.’ https://www.bbc.com/news/science-environment-65602519
Decades worth of research on photosynthesis, which could have led to improvements in solar panel technology, were destroyed when a janitor unplugged a freezer in a university research lab. All of the specimens thawed out and were lost. https://www.bbc.com/news/world-us-canada-66028401
A former U.S. intelligence officer publicly claims the U.S. government has been running a secret UFO program for decades. Crashed alien spacecraft and dead alien pilots are allegedly in U.S. possession, and our engineers have been trying to reverse engineer them. While he hasn’t seen any of the spacecraft or aliens, or even seen photos of them, he claims to know people who have and that he has written documents from the secret program. He’s getting the truth out by filing a whistleblower complaint with the Pentagon inspector general, in which he alleges that keeping the program secret from Congress violates the law. Congress is supposed to know about even the most classified military projects. https://www.newsnationnow.com/space/military-whistleblowe-us-ufo-retrieval-program/ https://www.dailymail.co.uk/news/article-12189773/Pentagon-whistleblower-says-Vatican-aware-existence-non-human-intelligences.html
‘In all, five rats received a vitrified-then-thawed kidney in a study whose results were published this month in Nature Communications. It’s the first time scientists have shown it’s possible to successfully and repeatedly transplant a life-sustaining mammalian organ after it has been rewarmed from this icy metabolic arrest. Outside experts unequivocally called the results a seminal milestone for the field of organ preservation.’ https://www.statnews.com/2023/06/21/cryogenic-organ-preservation-transplants/
Scientists used genetic engineering to turn unfertilized mouse eggs into viable mouse embryos, in a process called “parthenogenesis.” One of the resulting offspring survived until adulthood and had natural children of its own. In the far future, this technique will be used to create humans and posthumans. https://www.pnas.org/doi/full/10.1073/pnas.2115248119
‘Henneguya salminicola is the only known multicellular animal that does not rely on the aerobic respiration of oxygen, relying instead on an exclusively anaerobic metabolism.[8][7] It lacks a mitochondrial genome and therefore mitochondria, making it one of the only known members of the eukaryotic animal kingdom to shun oxygen as the foundation of its metabolism.’ https://en.wikipedia.org/wiki/Henneguya_zschokkei
‘A new study published in Lancet estimates that 101 million people in India – 11.4% of the country’s population – are living with diabetes. A survey commissioned by the health ministry also found that 136 million people – or 15.3% of the people – could be living with pre-diabetes. ‘ https://www.bbc.com/news/world-asia-india-65852551
For anyone who believes Russia’s propaganda that the war is going according to Putin’s elaborate master plan: ‘[The head of Russia’s “Wagner” private army] posted a gruesome video of him walking among dead fighters’ bodies [in Bakhmut], asking defence officials for more supplies…”Shoigu! Gerasimov! Where is the… ammunition?… They came here as volunteers and die for you to fatten yourselves in your mahogany offices.”‘ https://www.bbc.com/news/world-europe-65493008
A Russian soldier surrendered to a flying drone in Bakhmut. It dropped a written note to him instructing him to walk towards the Ukrainian lines, and as he did, his comrades tried to kill him. https://youtu.be/yE2sKbEjsRY
Two small drones were used in a suicide attack on the Kremlin in the middle of the night, causing no real damage. The perpetrators haven’t been found, but they were likely Ukrainian agents who carried out the attack for its symbolic rather than military value. It also may have been an inside job perpetrated by some faction of Russia’s security apparatus. https://youtu.be/2Oiagfj_Mik
A group of militants claiming to be Russian expatriates against Putin crossed from Ukraine into Russia’s Belgorod region and did damage to infrastructure and several structures. Thousands of Russian civilians had to evacuate the area. While the incursion had insignificant military value, it left many Russians shaken by demonstrating how depleted their border defenses had become thanks to the manpower drain of the Ukraine invasion. https://www.nbcnews.com/news/world/belgorod-raid-exposes-russia-defenses-ukraine-prigozhin-putin-military-rcna85945
Russia is using their antique T-54 tanks in Ukraine in ways mindful of their combat limitations. https://youtu.be/ObF_cSe_6UM
In a Ukrainian weapons depot, there are still unopened crates full of WWII Tommy Guns that the U.S. gave the USSR in WWII. Instead of putting them into service, it would make the most sense to sell them to international gun collectors and to use the proceeds to buy newly made guns of different types. https://youtu.be/ApFT-pLcAXQ
I think the Ukraine War will end like this, in an echo of the Korean War: ‘It’s a scenario that may prove the most realistic long-term outcome given that neither Kyiv nor Moscow appear inclined to ever admit defeat. It’s also becoming increasingly likely amid the growing sense within the administration that an upcoming Ukrainian counteroffensive won’t deal a mortal blow to Russia. A frozen conflict — in which fighting pauses but neither side is declared the victor nor do they agree that the war is officially over — also could be a politically palatable long-term result for the United States and other countries backing Ukraine.’ https://www.politico.com/news/2023/05/18/ukraine-russia-south-korea-00097563
Ukraine has terrible demographics. While it will probably survive the current Russian invasion with most of its territory, its overall and working-age populations will be 15-20% smaller in 2040 than they were in 2021, undermining its ability to defend itself from future invasions. A long-term Russian effort to chip away at Ukraine and to absorb it will succeed if Russia is willing to bear the high price and if the West’s support for Ukraine flags. https://en.wikipedia.org/wiki/Demographics_of_Ukraine https://onlinelibrary.wiley.com/doi/full/10.1002/psp.2656#
In spite of the Russian military’s heavy losses and painful mistakes in Ukraine, it would be a mistake to write it off as an incompetent force on its last legs. This report shows the Russians have adapted in many ways to the nature of the fighting, and still hold large advantages over the Ukrainians. https://static.rusi.org/403-SR-Russian-Tactics-web-final.pdf
During WWII, the U.S. Army Surgeon General found that, in Italy, his troops usually became mentally unfit to serve after spending 200-240 cumulative days in combat. https://youtu.be/1sC3tCXrbwQ
For all of its faults, the M4 Sherman tank was the best in its class when it came to easy crew egress. This is a critical feature when a tank is disabled and burning and the crewmen have to get out immediately. The men represent investments of money that might exceed the value of their own tank, so saving their lives when possible makes sense from a national resource efficiency perspective. https://youtu.be/q6xvg5iJ4Zk
Key points from a long interview with Henry Kissinger:
The U.S. and China are on the path to confrontation, probably over Taiwan.
Trump was right to confront China about its unfair trade practices, but he should have stopped there and not made the relationship worse in any other ways.
The leaders of America and China should have a major meeting and make a joint declaration that neither wants war with the other. They should form a high-level joint committee to periodically meet to discuss all the countries’ problems with each other.
Most Chinese thinkers believe America is declining.
The Ukraine War will probably end with some Ukrainian territory still in Russian hands. However, both sides will still have strong enough armies to restart the war later to try getting what they want. As soon as this war stops, it would be a good idea for NATO to let Ukraine in as it would reduce the odds of either side attacking the other again.
Russia becoming a “vassal” of China is unlikely because the two have long running contempt for each other.
If Russia falls into chaos, then there will be a power vacuum in Central Asia, likely leading to civil wars and interventions by other Asian powers who are ethnically related to various Central Asian groups.
It’s actually not in the U.S. or global interest for Russia to suffer such a big defeat in Ukraine that it collapses.
Japan will have nuclear weapons within five years.
The Chinese have always been inward-looking and have never wanted to take over the world. They also have no interest in trying to Sinicize the cultures of other people. They just want to become the dominant power in East Asia, and to be respected by (and possibly paid some kind of tribute by) their neighbors. This is fundamentally different from how the Europeans thought and acted during the Colonial Era.
If the U.S. defeats China in a war, China is likely to have its own civil war, which could have very bad external effects. It’s not in our interest to ever fight with them over anything.
AI will be as impactful as the printing press.
AI will make conventional military forces as destructive as nuclear weapons. Every person will be vulnerable to attack.
China’s approach to developing AI is about as reckless as America’s.
In spite of its serious cultural and political divisions, America is not doomed. It’s still possible for a leader or political movement to unify the country for something positive.
Geoffrey Hinton, the “Godfather of A.I.”, just quit his job at Google so he can be a public voice about the dangers posed by A.I. ‘His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”’ https://dnyuz.com/2023/05/01/the-godfather-of-a-i-leaves-google-and-warns-of-danger-ahead/
“The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches,” said Maarten Sap, a researcher and professor at Carnegie Mellon University. “They literally acknowledge in their paper’s introduction that their approach is subjective and informal and may not satisfy the rigorous standards of scientific evaluation.” https://www.nytimes.com/2023/05/16/technology/microsoft-ai-human-reasoning.html
Altman and two fellow lead executives also released a statement about AI: “It’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.” https://openai.com/blog/governance-of-superintelligence
Elon Musk: “Over 20/30 year time frame I think things will be transformed beyond belief. Probably won’t recognize society in 30 years. [AGI] I think we’re only 3 years, maybe 6 years away… we are on the event horizon of the black hole that is ASI.” https://twitter.com/i/status/1661834925488881664
A large number of AI experts and technology executives signed this public statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://www.safe.ai/statement-on-ai-risk
‘The former Google CEO told The Wall Street Journal’s CEO Council: “My concern with AI is actually existential, and existential risk is defined as many, many, many, many people harmed or killed. And there are scenarios not today but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology.” Schmidt also said that governments needed to ensure the technology was not “misused by evil people.”’ https://finance.yahoo.com/news/ex-google-ceo-eric-schmidt-111500864.html
There’s now a ChatGPT phone app, and it can communicate through speech instead of writing if you want. I predict the level of AI technology depicted in the first half of the film “Her” will exist by the end of this decade. https://www.wired.com/story/chatgpt-iphone-app/
The controversy over race-swapping actors in movies will disappear thanks to technology allowing viewers to customize which actors play which roles in the films they watch. Taken to its logical endpoint, each person will someday live in their own custom virtual universe where they see only what they want to. The people who stand to lose out the most from this are those with especially strong inner drives to exercise power and dominance over other people through control of mainstream narratives and culture. It’s nothing more than an animal impulse, and is only a step removed from shouting down the other person during a debate so only your voice can be heard. https://twitter.com/i/status/1659935325488021507
NVIDIA used the latest technology to create an immersive, first-person game with an NPC that can carry on non-scripted conversations with human players. https://youtu.be/5R8xZb6J3r0
A fake computer-generated image of thick smoke billowing from a building near the Pentagon caused stocks to drop within minutes of it appearing on social media. Though the image was quickly revealed to be fake and the stocks recovered, the incident shows how such computer-generated disinformation can affect the real world. https://dnyuz.com/2023/05/23/an-a-i-generated-spoof-rattles-the-markets/
Our failure to create an AI that reliably predicts the results of chemical reactions underscores how poor the quality of our data is. The temptation to fudge results and to omit unwanted ones is widespread among chemists. https://www.science.org/content/blog-post/give-me-those-hard-hard-numbers
These videos of “Robotis OP3” robots playing soccer with each other show how far machine dexterity has come, and how far it still has to go. https://youtu.be/WlIYa3lH5UI
Computers can analyze fMRI brain scan data to determine what moving images people were seeing. I’m starting to think it will be possible someday to scan people’s brains to download their memories. We might even be able to implant them in other people’s brains. https://mind-video.com/
‘Boring Report is an app that aims to remove sensationalism from the news and makes it boring to read. In today’s world, catchy headlines and articles often distract readers from the actual facts and relevant information. By utilizing the power of advanced AI language models capable of generating human-like text, Boring Report processes exciting news articles and transforms them into the content that you see. This helps readers focus on the essential details and minimizes the impact of sensationalism.’ https://www.boringreport.org/app
NVIDIA’s market capitalization reached $1 trillion, making it the first semiconductor company to do so and only the ninth company of any kind to do so. It makes computer processors specialized for AI systems like the GPT series, so its profits have surged along with the popularity of those programs. https://finance.yahoo.com/video/nvidia-crosses-1-trillion-market-193710045.html
Here’s a fascinating video explaining the pros and cons of building motorcycles out of different types of metals. https://youtu.be/ah7Ubbq5EAA
I knew that smoke clouds could reflect sunlight back into space before it reached the ground, lowering ground-level temperatures. However, under some circumstances, the clouds can also RAISE ground temperatures by blocking ground heat from radiating into space. https://www.weather.gov/bgm/WeatherInActionSmokePlume
Across the world, there are dried-up lakes that we could refill by building pipelines connecting them to the oceans. The Dead Sea and Death Valley are examples. Since the dried-up lakes are below sea level, gravity would move the water through the pipelines and no pumps would be needed. We could even put hydroelectric turbines in the pipelines to generate electricity from the flow.
Once refilled with water, the dead lakes could support life along their shores. Their filling would also slightly decrease sea levels, partly mitigating one effect of global warming.
The dead lakes are all barren deserts with almost no life, so flooding them would not cause any ecological damage. If anything, it would help the environment since plants and animals would have new places to live. https://unchartedterritories.tomaspueyo.com/p/seaflooding
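As a rough sanity check on the turbine idea: the power recoverable from a gravity-fed pipeline follows the standard hydropower formula, P = η·ρ·g·Q·h. A minimal sketch, using the Dead Sea’s roughly 430 m depression; the flow rate and turbine efficiency below are my illustrative assumptions, not figures from the linked article:

```python
import math

# Rough hydropower estimate for a gravity-fed seawater pipeline.
# Flow rate and efficiency are illustrative assumptions.
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

def hydro_power_mw(flow_m3_s, head_m, efficiency=0.85):
    """Electrical power in megawatts from flow (m^3/s) and head (m)."""
    watts = efficiency * RHO_SEAWATER * G * flow_m3_s * head_m
    return watts / 1e6

# The Dead Sea's surface sits roughly 430 m below sea level.
# Assume a modest 100 m^3/s flow through the turbines.
print(hydro_power_mw(flow_m3_s=100, head_m=430))
```

Even with these modest assumptions, the result is on the order of a few hundred megawatts, comparable to a mid-sized conventional power plant.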
Paradoxically, building more housing units in an expensive city like San Francisco might actually increase its average home prices. “My claim is that increasing density within a city shifts the demand curve for housing within that city, because of increasing desirability.” https://astralcodexten.substack.com/p/highlights-from-the-comments-on-housing
Dr. Garry Nolan claims that there has been a long-term alien presence on Earth, that they’re beyond our comprehension and interact with us using more primitive “intermediaries,” and that he knows people who have worked on reverse-engineering alien technology possessed by the U.S. government. It would be easy to dismiss him if he weren’t such an intelligent and extraordinarily credentialed person. https://youtu.be/e2DqdOw6Uy4 https://med.stanford.edu/profiles/garry-nolan
Transgendered and transsexual people clue us in to the attributes that posthumans will have. Like other organisms, humans have no natural control over their genetics or over the conditions they experience while developing in their mother’s womb. Those factors very heavily determine most of a person’s traits, including sex, gender, and anatomy. We accept the crapshoot of unchosen genes and prenatal influences because it is beyond any individual’s control and has always been the basic reality of our existence, but technology will free our descendants from it and its severe limitations.
Posthumans will have the inbuilt ability to change their genes and biology to do things like become a different sex, become a different gender with the attendant changes in mental preferences, or change many other aspects of themselves, like intelligence level or height. That kind of flexibility will make posthumans adaptable to a broader range of environments, will make their lives much more experientially rich than our own, and will let them understand one another in ways we can’t. For example, a person born male might be able to experience pregnancy. Individuals could also create offspring (perhaps clones of themselves) through self-fertilization, which would make them more survivable as a species than we are, since just one individual could create a community of posthumans. Space colonization would also be easier for them as a result.
Instead of having XY or XX sex chromosomes, posthumans would all have XXY chromosomes, with one of the X’s or the Y chromosome inactive at any one time to make them male or female, respectively. It might be advantageous for some parts of their body to have different sex chromosome expressions than other parts.
If we create technology that can slow, halt, or reverse the aging process in humans, then it will inevitably be used to prolong the lives of animals. People already spend fortunes on their beloved pets, and some are already cloning their dead pets, so this is just a logical next step. Cryopreservation of dead pets will also happen, if it isn’t being done already.
This raises the possibility of weird scenarios, like 200-year-old dogs running around, or someone putting their dog into cryostasis after a catastrophic vehicle injury in the slim hope that future surgeries will be able to fix it, while also making a clone of that dog to be a companion in the interim. Like Barbra Streisand bringing her two cloned dogs to the gravestone of the dead original, maybe our fictitious person will bring his clone to Alcor to stand next to one of the vats. Moreover, if mind uploading becomes possible and is a viable means of radical life extension, then some animals will inevitably have their minds uploaded. What would it be like to merge digital minds with a cat?
One explanation for Fermi’s Paradox is that all aliens leave our universe for ones that are much better. Maybe in our universe, the Higgs field is not at its true vacuum state, meaning our universe could literally cease to exist at any moment (for all we know, the decay has already started somewhere and the shockwave will hit Earth tomorrow). Assume that, once an intelligent alien species reaches the level of science and technology we’ll reach in, say, 2200 AD, it discovers the truth about the vacuum and also discovers how to travel to other universes that don’t have this problem and/or how to create universes that don’t. Intelligent species by definition make intelligent choices, so they all leave our universe. This happens long before any of them have had enough time to colonize more than a few light-years of space.
This might also explain why we have not, to our knowledge, been visited by life forms from parallel universes.
The Sahara Desert is an enormous waste of space, is larger than it should be thanks to the actions of humans, and will probably be radically altered once AIs are in charge of the world. The Sahara was a savannah and had several mega lakes until a few thousand years ago, when humans started slowly desertifying it with animal grazing and, to a lesser extent, plant farming. Ending those practices around the edges of the desert along with ending most water diversions for human purposes would cause the desert to immediately start shrinking. Carefully planting trees and other plants at the edges of the desert would accelerate that soil and climate reclamation process further (various African countries are already trying to do this, but the effort is sputtering).
Building canals could also allow the extinct and nearly extinct mega lakes of the Sahara to be refilled with seawater from the Mediterranean and Indian Oceans, and freshwater from the rainy central part of the continent. Installing massive numbers of wind turbines and solar panel farms in the Sahara would also increase rainfall and lower ground temperatures through different mechanisms. It would also of course generate large amounts of electricity.
A milder climate and an advanced electricity infrastructure would make the Sahara much more suitable for machine and human habitation. Refilling some of the mega lakes with seawater would also slightly lower global sea levels, which would partly mitigate one aspect of global warming. Finally, the return of vegetation to the Sahara as it transformed back into a savannah would sequester large amounts of CO2, which would also combat global warming’s effects.
Having only one organ dedicated to key biofunctions was the “good enough” design solution natural selection picked, and was surely driven by the need to conserve bodily resources, but it also creates single points of failure that can kill the organism. A human has only one liver, one heart, one stomach, and a brain localized in one place. If we were to redesign ourselves as posthumans that were partly or fully organic, distributing key functions among multitudes of smaller organs would be wise. That said, the problem with having more than one heart is that their beats would need to be synchronized.
If we are trying to maximize utility and minimize harm to sentient life forms, and if we throw future technologies into the mix, we are led to some counterintuitive far future scenarios. For example, if we make it our goal to provide the happiest conditions to the largest number of people, then we end up removing all brains from our bodies and putting them in jars, incinerating the bodies, building The Matrix, and plugging all the brains into it. Since a person’s brain consumes 20% of their calories, dispensing with the rest of our bodies means we can support 5x as many “humans” for the same amount of energy.
And if we also choose the goal of minimizing animal suffering, we capture every member of every species that can experience suffering, remove their brains, and put them in The Matrix, too.
The optimal “future way of living” might be a totally industrialized Earth, devoid of wild, complex life forms, and nearly devoid of any natural spaces, with vast warehouses full of brains in jars with wires coming out of them. This sounds horrific, but it seems like the logical best choice.
Earth’s forests would all be cut down to make way for solar panels to power the Matrix’ simulated virtual forests, which would be much more beautiful than their real counterparts were.
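The 5x figure above comes from straightforward energy arithmetic; a minimal sketch, assuming a 2,000 kcal/day diet with the brain consuming 20% of it:

```python
# Back-of-envelope check on the brains-in-jars energy claim.
# Assumed figures: ~2,000 kcal/day per person, brain takes ~20%.
daily_kcal_per_person = 2000
brain_fraction = 0.20

kcal_per_brain = daily_kcal_per_person * brain_fraction   # 400 kcal/day
brains_per_body_budget = daily_kcal_per_person / kcal_per_brain
print(brains_per_body_budget)  # -> 5.0
```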
‘Yet notwithstanding this promising mission, the Russian exiles also pose a major security risk to the countries where they are landing. The main reason is that the Kremlin has long exploited the Russian diaspora as part of its irregular warfare operations. Given the size and spread of the new Russian diaspora, there is no doubt that strategists in the FSB are taking the opportunity to plot nasty operations.’ https://www.politico.com/news/magazine/2023/04/04/russian-agents-war-refugees-00090192
In the Soviet T-55 tank, there are two metal canisters full of compressed air right behind the driver’s head. What could go wrong? https://youtu.be/sOX25jfEiO0
The B-2 diesel engine was invented in the USSR in the 1930s and became the standard for all its tanks. Its design was progressively improved over the decades, and the B-2 is still used in Russia’s new tanks. https://youtu.be/nyWAd1pQiwU
Here’s an interesting report on the “availability rates” of different U.S. Navy planes. If I have a fleet of ten fighter planes, and the availability rate is 80%, then at any given moment, eight of the planes are able to take to the air, but two of them can’t because they are broken and waiting to be fixed. Availability rates decline as planes get older and more worn out, and the F/A-18 Super Hornet has an anomalously poor rate. https://www.cbo.gov/publication/58937
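A quick sketch of that availability arithmetic (the function is mine, for illustration):

```python
# Availability rate: the share of a fleet that is mission-capable
# at any given moment; the rest are down for maintenance or repairs.
def available_aircraft(fleet_size, availability_rate):
    return round(fleet_size * availability_rate)

print(available_aircraft(10, 0.80))  # -> 8
```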
The Germans had the best machine gun of WWII. After the war, they made several improvements and kept using it until 2012. https://youtu.be/A0cvxrAkbbE
A 1968 massacre of Vietnamese villagers by South Korean troops allied with the U.S. shows that any group of people can be a victim or oppressor. “The line separating good and evil passes not through states, nor between classes, nor between political parties either – but right through every human heart…” https://www.npr.org/2023/04/12/1167951366/south-korea-vietnam-war-massacre-court-case
America’s USS Gerald Ford aircraft carrier was launched 10 years ago but only now is finally entering regular service. The delay is mostly due to the ship being packed with new, unproven technologies that sailors had to slowly work the kinks out of. The expensive lessons learned from this will ensure that the next ship in the Ford class enters service much faster. https://www.businessinsider.com/navy-carrier-uss-gerald-ford-deploys-after-years-of-delays-2023-4
Though autonomous vehicles have fallen below the radar recently, and the slowdown in progress has some claiming the technology will never reach human levels, Bill Gates thinks they’re still improving, and will get much better and more common over the next 10 years. https://www.gatesnotes.com/Autonomous-Vehicles
Corporations are not “superintelligences,” as some people like to argue: a group of humans does not add up to a superhuman mind. Consider that the team of humans who built AlphaGo could not defeat their own machine in a game of Go.
AI-generated, lifelike images will lead to AI-generated, lifelike videos, which creates a new frontier for pornography, including child pornography. As distasteful as the subject is, it must be asked whether such footage should be criminalized if the children shown in it are fake and don’t resemble any real children. In such a case, who is being victimized? https://www.foxnews.com/world/canadian-man-sentenced-prison-ai-generated-child-pornography
The singer “Grimes” has invited fans to create computer-generated songs using digital reproductions of her voice, so long as they split the royalties with her 50/50 on any resulting songs that become popular. I have previously predicted that celebrities will start licensing their voices and likenesses in such ways. It will get more common with time. https://www.npr.org/2023/04/24/1171738670/grimes-ai-songs-voice
From 2011: ‘The report, commissioned by the New York State Energy Research and Development Authority, said the effects of sea level rise and changing weather patterns would be felt as early as the next decade. By the mid-2020s, sea level rise around Manhattan and Long Island could be up to 10in, assuming the rapid melting of polar ice sheets continues.’ https://www.theguardian.com/environment/2011/nov/16/climate-change-report-new-york-city
‘The biodiversity influence of avalanches comes from the natural corridors free of bushes and trees they create as they thunder down a mountainside. These become species-rich grasslands or meadows in themselves, but also, vitally, connect different habitats up and down the mountain. This can be crucial for species such as butterflies, who benefit from the cleared vegetation in the nutrient-poor soils of the Alps.’ https://www.bbc.com/future/article/20230405-how-avalanche-management-could-help-wildlife-in-the-alps
It’s not an accident that life on Earth only makes use of 20 types of amino acids, when a much larger number of acids with different molecular configurations could exist. https://pubs.acs.org/doi/full/10.1021/jacs.2c12987
There are shark repellents that are proven to work. One is a chemical mimicking the smell of dead sharks, and the other is a device that uses magnetism to scramble a shark’s sense of direction if it gets near. https://en.wikipedia.org/wiki/Shark_repellent
Here’s a fascinating educational video from the 1930s explaining how images were transmitted over phone lines. https://youtu.be/cLUD_NGE370
There’s a new twist in the scientific debate over whether and to what extent money can buy happiness. For all but the most naturally miserable people, more money DOES make them happier without any upper limit. However, the “happiness dividend” steadily shrinks. https://www.pnas.org/doi/10.1073/pnas.2208661120
A U.S. military drone filmed a spherical UFO during a surveillance mission over the Middle East. https://youtu.be/1fKhqnAtnx8
On the night of March 8, 1994, several people in the same part of Michigan saw glowing UFOs in the sky. They separately reported them to the authorities, and a local meteorologist in charge of the area’s weather radar station pointed the dish towards the objects, resulting in a bizarre radar image. He and all of the eyewitnesses are still adamant about what they saw, and the objects remain unidentified. https://youtu.be/QMQArS-s90I
A third, highly effective weight loss drug might be coming to the U.S. market soon. I’ve long thought that the obesity epidemic will only be ended with pharmaceuticals and, in the longer run, genetic engineering. We can’t count on most people to exercise more self-discipline to control their weight through diet and exercise. https://apnews.com/article/mounjaro-wegovy-ozempic-obesity-weight-loss-bd0e037cc5981513487260d40636752a
In as little as 50 years, profiles of dead users could outnumber the profiles of living users on Facebook. Maybe digital clones of dead people will outnumber living “original” people as well. https://time.com/5579737/facebook-dead-living/
In The 6th Day, set “in the near future,” a man named “Drucker” (played by Tony Goldwyn) has become the world’s richest person by founding a biotech company that clones animals and human organs. The company has also invented a brain scanning device that can map the minds of recently deceased animals and then implant their memories and personality traits into the brains of newly created clones. One of Drucker’s businesses, called “Re-Pet,” pulls those technologies together as a walk-in retail chain where bereaved people bring in their dead cats and dogs and walk out with healthy clones of them. Cloning only takes two hours.
Using the same technology and facilities, Drucker also runs a secret and illegal human cloning operation. He makes human clones for friends and for powerful people who can’t cope with the deaths of loved ones, or who have a vested financial interest in not letting someone else die. For example, at the beginning of the movie, a star football player breaks his neck during a game and the team’s owner secretly pays Drucker to make a clone and dispose of the disabled, comatose player. The guy wakes up in the hospital not realizing he’s a clone, and the devastating on-field accident is explained to the public as miraculously not as bad as it looked on TV. The clone returns to his job and the team keeps winning.
Though Drucker’s illegal human cloning operation is only known to a handful of people, his legal cloning businesses have still made him a target for religious extremists and environmentalists who believe the technology is unethical and lets humans “Play God.” Some of these opponents also fear that Drucker’s ultimate goal is to use his money and growing influence with politicians to overturn the ban on human cloning, which will bolster his wealth and power even more. Over the course of the movie, it becomes clear that Drucker is indeed unfit to wield such power and that he’s a charismatic sociopath who doesn’t value human life.
Partly because he fears assassination, Drucker routinely makes “backups” of his mind using a brain scanning device, and he has instructed his inner circle of geneticists and gun-toting henchmen to secretly clone him if he ever dies. That way, his companies and his long-term plans will keep going forward no matter what. Unfortunately for Drucker, he does get murdered, and his “living will,” so to speak, is enacted. And unfortunately for Arnold Schwarzenegger’s character, Adam, he gets mixed up in the whole thing and becomes a target for assassination.
Adam is a middle-aged family man who runs a small helicopter business ferrying people from the city to the mountains where they can do things like snowboard or hike. Adam also employs a co-pilot named “Hank.” One day, Drucker’s people call Adam and hire him to take Drucker to the mountains for a brief ski trip. Before they depart, one of Drucker’s goons makes Adam and Hank use the brain scanning machine and submit DNA samples, lying to them that the brain scanner is a vision test machine and that a drop of blood is needed to make sure they aren’t on drugs. After all, this is the richest guy in the world they’re going to be carrying on their helicopter, and special precautions need to be taken.
At the last minute, Adam pulls out of the job and he tells his co-pilot Hank to fly Drucker for him. Hank does it, and right after they land on the mountain, a Christian extremist who somehow knew in advance Drucker was going there shoots them both dead and runs away. Drucker is able to make an emergency phone call to his goons right before he dies, and they scramble to enact his living will instructions. The film doesn’t show this, but they recover the two dead bodies from the mountain and use the secret cloning lab and brain scan data to clone them in two hours. Unfortunately, a major foul-up happens when they mistakenly clone Adam instead of Hank. Instead of looking at the pilot’s corpse, realizing it was Hank, and then cloning Hank, they just looked at the paperwork, saw Adam listed as the pilot for that day, and cloned him. Gross incompetence is a recurring trait among Drucker’s henchmen and it ultimately proves his undoing.
The henchmen program Adam’s newly made clone with Adam’s brain scan, and then dump him, unconscious, in a taxi and send it to the mall. When he wakes up, he doesn’t realize he’s a clone and just brushes off the fact that he can’t remember the last several hours of his day. No matter. Clone Adam goes shopping. The Original Adam is running errands elsewhere in the city and doesn’t realize he now has a clone. Both Adams are planning to go home to their family house that night.
Meanwhile, Drucker’s clone is having a meeting with his goons at his company headquarters building, surely upset over “his” murder a few hours before, when he realizes his henchmen mistakenly cloned the wrong pilot. He quickly grasps how disastrous this is, since Clone Adam will bump into the Original Adam, they will realize one of them is a clone, they will go to the cops, the media will announce that a human has been illegally cloned, and Drucker will be implicated since he runs a cloning business and hung out with Original Adam the same day the latter was cloned.
Drucker orders his henchmen to intercept Clone Adam before he gets home from the mall and kill him. During the confrontation, Clone Adam kills two of them and gets away. In spite of their making two catastrophic mistakes in less than 12 hours, Drucker has these incompetent, dead henchmen cloned to serve him again. It’s stunningly poor judgment for the richest man in the world. I won’t go over every plot point after that, but the incompetence of Drucker’s henchmen and Adam’s ability to out-think and kill them becomes inadvertently funny.
At the end of the film, one of Drucker’s henchmen accidentally shoots him in the stomach, fatally wounding him. Drucker then shoots the henchman in revenge, and with his dying breaths, Drucker starts making a clone of himself. One of Drucker’s other henchmen then accidentally shoots the cloning machine, causing Drucker’s clone to come out deformed and incomplete, and rendering it impossible to make any more clones to fix the problem. The exploding cloning machine also kills a third henchman by accident. Drucker’s deformed clone lives a few minutes before dying from something else.
In this film universe, people also die from being punched in the face or from the stereotypical “headlock movie neck snap” (if it were really that easy to break someone’s neck, wouldn’t it be happening all the time in real life?). It’s really silly, and The 6th Day got bad reviews for a reason.
Cloning’s centrality to the movie’s plot was clearly inspired by the cloning of Dolly the sheep, which happened just four years before the film’s release. While there are brief moments in The 6th Day when the ethics of cloning are discussed somewhat evenhandedly, in the end it degenerates into an action flick full of black-and-white Good Guys and Bad Guys. The pro-cloning people are all murderous sociopaths, and we cheer when Adam kills them all and blows up the secret cloning lab in the end. The preexisting biases of the audience–that human cloning is unethical, dying is a good and noble thing, and using technology to live forever is evil–are simply confirmed, and no one is pushed from their comfort zone. The building full of bad people just explodes in a fireball.
I think The 6th Day was a forgettable film with a convoluted plot, overly simplistic characters, and unrealistic plot developments. Arnold Schwarzenegger’s salary clearly gobbled up a huge chunk of the movie’s budget, forcing corners to be cut in every other aspect of the film: The rest of the cast was B-list or worse (except for Robert Duvall, who was clearly not engaged in his role), and the cinematography was little better than a made-for-TV movie.
Analysis:
The 6th Day was released in 2000, and in the opening text crawl, the timeframe is ambiguously described as “The near future.” However, in a DVD featurette, Arnold Schwarzenegger supposedly says it takes place in 2015. The movie contains an assortment of technologies, some of which already exist, some of which we won’t have for 20 to 50 years, and some of which we may never create. As such, I think it’s safe to say the film doesn’t accurately depict any specific moment in the future or past, so comparing it to a particular year of reality would be pointless (and it’s arguable whether the canon material even provides a specific year). Instead, I’ll judge when (or if) the different technologies are likely to come into existence.
People will clone their dead pets. The film’s chief antagonist–Mr. Drucker–runs several large businesses that make use of cloning technology. One of them is called “Re-Pet,” and is a national chain store where people get their dead pets cloned. This prediction basically came true in 2007 when a South Korean company called “Sooam Biotech” cloned its first pet dog for a customer (the very first dog clone was made in 2005, but was made for scientific rather than commercial purposes). Since then, they’ve cloned around 600 more dogs, including a police rescue dog that searched for survivors at the Twin Towers wreckage. Other pet cloning companies have also been founded, though Sooam seems to be getting most of the global business.
Of course, I say the movie’s prediction has “basically” come to fruition because some aspects of it have yet to be realized. In The 6th Day, pet cloning was a mainstream practice that was cheap enough for upper-middle-class people like Adam (he owned a successful small business and had a nice house and antique car) to afford. Today, it costs $50,000, which is too high for anyone but a multimillionaire to casually pay for as Adam did. It should be said that the high cost of pet cloning is surely thanks in part to the low demand–if there are few orders for a product, then the firm supplying it won’t be able to take advantage of economies of scale, and low profit potential will discourage other firms from entering the market and driving down prices through competition.
The cost-performance curve of cloning procedures is surely sloping downward over time, but I can’t find any good data that I can graph and use to extrapolate a future year when cloning a dog will cost, say, $5,000 in today’s money so that average guys like Adam could afford it. For sure, the price isn’t dropping at Moore’s Law rates, since if it were, it would already be that cheap by now. This poses a major problem for me in assessing when this prediction and the movie’s other predictions about other aspects of cloning will be feasible.
I actually emailed two animal cloning companies asking for cost data, but got no response. In lieu of that, I’ll have to do my own crude estimates based on internet research. (BTW, if you can come up with better data than this, PLEASE feel free to send it to me)
The first cloned dog was created in 2005. While the company didn’t discuss its expenses, an outside expert estimated it cost more than $1 million. Much of the money was spent doing trial-and-error experiments until, after many failures, they found a cloning technique that worked. (Source: https://www.nytimes.com/2005/08/04/science/beating-hurdles-scientists-clone-a-dog-for-a-first.html) For that reason, it’s a cost outlier.
If we plug those three cost figures into a data chart and fit an exponential regression line to it, we get this:
If the rate of cost-performance improvement continues, it will cost $5,000 to clone a dog in the late 2040s. Again, I stress the coarseness of this estimate and the scarcity of data. However, I think that the sentiment is correct, in that pet cloning won’t get cheap enough for most people to afford until the distant future.
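For anyone who wants to reproduce this kind of extrapolation, here’s a minimal sketch of the method: fit an exponential curve to (year, cost) data points in log space, then solve for the year the curve hits $5,000. The data points below are illustrative stand-ins (only the ~$50,000 current price comes from this review), so treat the printed year as a demonstration of the technique, not a prediction.

```python
import math

# Hypothetical (year, cost in USD) data points for cloning a dog.
# Only the ~$50,000 current price appears in this review; the others
# are stand-ins for whatever real data you can find. The $1M+ figure
# from 2005 is excluded as a cost outlier, per the discussion above.
data = [(2008, 155_000), (2015, 100_000), (2023, 50_000)]

# Fit cost = exp(a + b * year) by least squares in log space.
xs = [year for year, _ in data]
ys = [math.log(cost) for _, cost in data]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Solve for the year when the fitted cost curve crosses $5,000.
target_year = (math.log(5_000) - a) / b
print(round(target_year))
```

With these stand-in points the curve crosses $5,000 around mid-century, roughly consistent with my “late 2040s” eyeball estimate; better data would move the answer.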
People will clone organs to replace their damaged original organs. Drucker’s human organ cloning business is only briefly mentioned in the film, which is a shame since it stands out as an application of cloning that few would consider unethical. About 8,000 Americans die each year waiting for organ transplants, and others die after their bodies reject transplanted organs because they have different DNA. Had the film explored the life-saving value of cloning for people in need of new organs, it would have been more intelligent and Drucker could have been a more sympathetic character.
As with pet cloning, technically this prediction came true in 2006 when the first human organs (urinary bladders) were made from cloned tissue. However, that was only doable because bladders are so simple (basically just elastic bags), and therapeutic cloning still isn’t good enough to make complex human organs like kidneys and hearts. I think we’ll have to wait until the end of this century for that.
Refrigerators will monitor their contents and help you order new products as the old ones run out. Early in the film, before Schwarzenegger gets into all this trouble with sociopaths and clones and whatnot, we see the start of a normal day for him. He wakes up, goes downstairs to the kitchen for breakfast, and the display built into the door of the refrigerator warns him that it’s running low on milk, and asks him to push a “Yes” button if he wants to order more. That means the refrigerator is smart enough to know what’s inside of it, and is connected to the Internet so it can order things from retailers. This could be built today with existing technology.
“Smart refrigerators” with built-in interactive displays and WiFi are already commercially available, and we already have push-button instant online ordering. If the refrigerators had computers and cameras inside of them, pattern recognition algorithms could let the refrigerators accurately identify their contents, along with the freshness of those contents and how full their containers were. I don’t see how identifying a jug of milk should be a harder visual problem for computers than identifying any number of other objects they’re already able to identify with high accuracy, like letters of the alphabet, human faces, or common animals. If anything, food and beverages should be easier to recognize since there’s a more limited universe of things people put in their refrigerators, and because the packaging usually has writing on it describing what it is. This gets super easy when the packaging has a barcode.
If used the right way, this technology could significantly reduce food waste and improve people’s lives by serving as a sort of “automatic grocery list” whenever they went to the store, and by suggesting meals based on what ingredients were available and what was nearing its use-by date.
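As a toy illustration of the “automatic grocery list” idea, here’s a sketch of the bookkeeping a smart refrigerator could do once its cameras or barcode scanner had identified its contents. All the item data, barcodes, and thresholds below are invented for the example.

```python
from datetime import date, timedelta

# Hypothetical inventory, keyed by barcode: what the fridge believes
# is inside, how full each container is (0.0-1.0), and its use-by date.
today = date(2023, 7, 1)
inventory = {
    "0123456789012": {"name": "milk",    "fill": 0.1, "use_by": today + timedelta(days=4)},
    "0987654321098": {"name": "eggs",    "fill": 0.8, "use_by": today + timedelta(days=14)},
    "1111111111111": {"name": "spinach", "fill": 0.5, "use_by": today + timedelta(days=1)},
}

def shopping_list(inv, low=0.25):
    """Items running low -- the 'order more?' prompt from the film."""
    return sorted(item["name"] for item in inv.values() if item["fill"] <= low)

def use_soon(inv, on, within_days=2):
    """Items nearing their use-by date, to suggest meals and cut waste."""
    return sorted(item["name"] for item in inv.values()
                  if (item["use_by"] - on).days <= within_days)

print(shopping_list(inventory))    # -> ['milk']
print(use_soon(inventory, today))  # -> ['spinach']
```

The hard part, of course, is not this bookkeeping but the computer vision that populates the inventory in the first place; barcodes make that part nearly trivial.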
Biological tissue scaffolds will be used to quickly make clones. Drucker’s companies are able to make human and animal clones in only two hours because they keep full-body, DNA-free “tissue scaffolds” ready for use, floating in pools of preservative liquid. These generic bodies are called “blanks,” and when a clone is to be made, one of the blanks is infused with the original human or animal’s DNA, and rapid tissue growth is then stimulated. This is an idea that makes some sense, but because each human has unique body proportions (skeleton, musculature, organ shapes), there’s no way a single “blank” human body could be used to clone anyone and everyone.
Also, a human body contains tens of trillions of cells, and rapidly implanting the donor’s DNA into each of those in a blank would require technology that is several paradigm shifts ahead of what we have now. Additionally, the DNA would have to be migrated without damaging any of it in the process, unless you wanted lots of the clone’s cells to quickly die or become cancerous. I’m not even sure if this is possible with ANY level of technology. Pulling off this feat might require Star Trek levels of technology, and in that case, you probably wouldn’t need blank bodies since you could just quickly construct custom-made bodies using raw materials (like powder) in a vat full of bubbling liquid.
Using tissue scaffolds to help grow an adult human clone over the course of two months instead of two hours might be doable by the end of this century. A slower process like that would allow the DNA replication and tissue differentiation to happen with a much lower risk of error. A smaller number of stem cells that had been carefully injected with the donor’s DNA, and then tested to ensure no errors had occurred, could be implanted on something like a full-body organic scaffold and stimulated to rapidly grow and multiply. As I said in my 5th Element review, the subsequent growth process would have to be very closely monitored and regulated by machines.
Ultimately, it will probably be faster and easier to dispense with organic bodies, and to manufacture robotic “clone bodies” and then just implant the original person’s brain into them. The robotic bodies could be made to look outwardly identical to the person’s original, human body, but underneath, the bones, muscles and organs would be made of synthetic materials. The only organic components might be the nervous system, which would interface with the person’s brain. The squishy androids from the Aliens movies and the semi-organic T-800s from the Terminator movies should give you some idea of the hybridization I’m imagining.
We might actually invent ways to make robotic, adult clone bodies before we invent ways to rapidly make organic, adult clone bodies. Synthetic materials are just much easier to work with.
We will be able to read and copy people’s minds using technology. In the film, Drucker’s companies have an advanced tabletop device called a “syncorder” (SIN-cord-ur) that is able to scan a person’s brain in a few seconds and capture all of their memories and personality traits as digital data. Users stick their faces close to the machine and the scan is done through two lens-like protrusions that interface with the eyes. This type of technology won’t exist for a hundred years, and possibly never.
The things that truly make you “you” are indeed contained in your brain, in the form of neural structures and synaptic connections that form your memories and personality traits. Appropriately, this unique brain network is called the “connectome.” However, we’re incredibly far away from understanding the physical mechanics of this (e.g. which brain structure corresponds to which type of memory), let alone being able to make a brain scanner with good enough resolution to see the relevant cell-sized (or smaller?) physical features.
If it is possible to read someone’s mind, it will be much more invasive and time-consuming than the five-second syncording process shown in the film. Imagine something more along the lines of having to stick your head into a hole in a giant scanning machine for several, multi-hour sessions while you are guided through different thought exercises designed to evoke certain emotions, memories and cognitive operations while your brain activity is monitored. Or, if nanomachines can ever be built (another big “if” that we’re still not sure the laws of physics allow), having billions of them injected into your brain to map the shape of each cell. It might just be impossible.
However, while brain scans might prove impossible or possible only in the distant future, I think within two decades, we’ll be able to make very accurate digital “copies” of people that mimic their personalities. Mass surveillance will also effectively mean that many of your life experiences will be recorded, and hence, your memories could be mostly deduced by machines. I say “mostly” because human memories are frail and subject to all forms of manipulation, so your unique set of memories isn’t an accurate catalog of your life experiences. Machines would have to, by observing you and your brain activity, figure out where your mental distortions and gaps were.
An interesting consequence will be the rise of immortal, digital avatars of all humans. Long after a particular person died, a computer program or lookalike robot that faithfully mimicked their behaviors, personality, and speech, and that could describe the same memories, would live on. Far from being an automaton, such a machine could be endowed with artificial intelligence, contoured to reflect the intelligence and psyche of the original human. This would raise new questions for us about the nature of death and individual identity that I can’t explore here.
We will be able to implant memories and personalities into cloned humans. In the film, the syncorder machines are like CD burners: they can copy memory files from people and also implant memories into people. In both cases, you just need to look into the two appendages and push a button. After Drucker’s goons clone Adam, they implant his unconscious clone with Original Adam’s memories using the machine. Since Original Adam was syncode-scanned only a few hours before, Clone Adam doesn’t have enough missing time in his short-term memory to make him suspicious that anything strange happened aside from an afternoon nap. I doubt we’ll be able to implant memories in people for 100 years, possibly ever. Doing so would require the ability to physically alter the brain at the cellular and possibly intracellular levels. The only technology I can think of that might be able to do that is nanomachines, and progress making those is going at a snail’s pace. Some scientists believe they just can’t be made.
The standard sidearm will be a laser/plasma pistol. In the movie, all the bad guys carry energy pistols that fire glowing bolts of some sort instead of bullets. They also don’t make the standard “pop” or “crack” sounds of firearms, and instead make indescribable “Zhweee” noises. When fired, the guns produce very large muzzle blasts, and they cause burn damage to the humans and hard objects that they hit. The bolts are more damaging than handgun bullets, but the energy pistols also seem to have slower rates of fire than gunpowder handguns. Almost every time someone shoots a person or object with an energy pistol, I can’t see how gunpowder handguns like Glocks wouldn’t have done the job adequately. The only exception is when two henchmen use their energy pistols to shoot down one of Adam’s charter aircraft.
I don’t think directed energy pistols like this are technologically feasible, so they won’t ever be common, and even energy weapons as big as large rifles will forever be rare. For the reasons why, read my Terminator review.
Bans on human cloning will be enforceable. Drucker has to keep his human cloning lab secret because human cloning is illegal. A few brief lines of dialog explain that the ban has existed for a few years, and was put in place because the first human cloning attempt failed in some grotesque way.
National bans on cloning could be sidestepped by going to other countries where it was legal, and enacting an international ban is unlikely since there is profit to be made by providing the service. People already evade national-level restrictions on abortion, sperm donation, and IVF this way today. The 6th Day correctly shows that elected politicians will help bring down anti-cloning laws once they realize they can personally benefit from it.
And as the global drug war clearly shows, even if an international ban existed, the procedure would still be available at underworld labs and clinics, particularly in countries with weaker rule of law. This problem would only worsen with the passage of time as cloning equipment got cheaper and the technical know-how got more common.
To stop human cloning, laws will criminalize the clones themselves, and government forces will kill clones upon discovery. Several times in the movie, it is mentioned that the original cloned human was “destroyed,” and that the law against human cloning also directs the government to kill clones. And after discovering that an impostor (actually Original Adam) is at his house, Clone Adam (who at that point in the film doesn’t yet realize HE is the clone) plots to kill him, since “There’s no law against it” and “He’s not human.”
I can’t see how a law authorizing the murder of cloned humans would ever be enacted in a country that respected human rights. The 6th Day was filmed in Vancouver, and while the location of the fictional setting was kept ambiguous, it was clearly set in the U.S. or Canada. Legally and culturally, neither country would ever let adult humans be killed merely because they were clones. National bans on human cloning procedures are entirely realistic, as are harsh punishments for doctors who do the procedure, but the clones themselves would be held blameless.
People will know what it’s like to die. For comical effect, there are several instances where Drucker’s cloned henchmen talk about their bad memories of Adam killing their previous selves. One henchman who gets his torso run over while trying to kill Adam in a car chase complains of phantom chest pains, even though his body bears no injuries since it is a healthy clone of the dead original. He also seems psychologically scarred by the implanted memories of his traumatic death. At another point, a different henchman says to the group: “Knock it off, we’ve all been killed before.” I think humans and machines will someday be able to speak of death in the past tense like this.
Once human cryonics and other forms of induced stasis become possible, people will medically die and then be brought back to life years later. In every sense of the word, they will have experienced death, and might have memories right up to the moment of expiration.
Also, if the sort of brain implant technology analyzed in my Aeon Flux review is ever invented, then people in the process of dying will be able to directly share their sensations with other humans and with machines, so you could know what death feels like remotely.
In addition, because machines are more resilient and more easily repaired than biological life forms, I think it will be common for intelligent machines that have been “killed” to be brought back to life in repair shops. It would be little different from removing your hard drive from your wrecked PC and installing it in a new PC.
Let me make a few predictions about this: Death will be so traumatic that revived humans will commonly not remember the actual event or the moments leading up to it. Humans who experience it remotely through brain implants will either be horrified (ten times worse than watching an internet gore video) or find that it feels no different than falling asleep. Machines will most likely have crystal-clear memories leading up to the moment of death, with unpredictable effects on their psyches. Death itself will literally feel like nothing: everyone will understand it is just a state of nothingness, like a dreamless sleep, or the same way things were for you before you were born. The notion of an afterlife will become even less credible as the number of “formerly dead” people grows and they all describe the same nothingness.
The moods and actions of animals will be controllable with technology. In one scene, Drucker’s henchmen abduct Adam’s wife and daughter in order to blackmail him. They make use of remote-controlled Doberman dogs for this. One of the henchmen uses something like a smartphone app to remotely issue commands to the dogs, which they receive through high-tech collars. Glancing at the smartphone screen for a second, the henchman appears to have push-button options to grossly control the dogs’ behavior, for instance telling them to “Stop” or to “Attack.” The dogs corral Adam’s family into a corner, and then the henchmen seize them.
This is already possible using existing technology found in dog training “shock collars.” Using electric shocks of varying intensity, vibrations, and sounds (some of which are outside human hearing ranges), the collars can help humans to train dogs and to control their behavior.
The long-term implications of this technology are interesting to ponder. At some point, it will be possible to cheaply manufacture shock collars embedded with hi-res cameras, microphones, GPS trackers, and other sensors that monitor the animal’s surroundings and physiological status. At minimal cost, it will become possible for humans to attach collars to all pets and even millions of wild animals. Highly accurate estimates of animal populations, health, and migration patterns would become possible. Encounters between humans and dangerous animals like alligators and bears could be headed off in advance if the animals’ GPS coordinates were known and all humans within a certain radius were warned of their presence via automated texts to their smartphones. Poaching would become much harder if any large wild animal had cameras on it.
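The proximity-warning idea reduces to simple geometry once the animals’ collars report GPS fixes. Here’s a sketch; the coordinates, phone IDs, and the 2 km alert radius are all invented for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def people_to_warn(animal_fix, phone_fixes, radius_km=2.0):
    """Phones close enough to a collared bear/alligator to get an automated text."""
    return [pid for pid, (lat, lon) in phone_fixes.items()
            if haversine_km(animal_fix[0], animal_fix[1], lat, lon) <= radius_km]

# Hypothetical fixes: one collared bear and two hikers' phones.
bear = (29.700, -81.500)
phones = {"hiker_a": (29.705, -81.500),   # ~0.6 km from the bear
          "hiker_b": (29.900, -81.500)}   # ~22 km from the bear
print(people_to_warn(bear, phones))       # -> ['hiker_a']
```

A real system would run this continuously against millions of collar fixes, which is a database indexing problem more than a math problem, but the core check is just this distance test.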
The collars themselves will also shrink in size and weight, as is generally the trend for all types of electronic devices. Eventually, they could very well evolve into implants that the animals wouldn’t even be aware of, and might directly interface with their nervous systems.
Robots and AIs will eventually provide us with practically free and almost unlimited amounts of labor (see my I, Robot review), meaning it will become feasible to tag billions of animals at low cost, to continuously monitor them, and to issue gross commands to them. This seemingly crazy vision of a “tamed wilderness” is just an extension of two other broad, long-term technology trends: 1) the rise of mass surveillance and 2) the fusion of organic life with technology. I think it’s also a clear stepping stone to a technological “hive mind” or single consciousness.
While most people would be quick to point out potential misuses of this technology, the potential good uses are very compelling. If every animal on the planet could be continuously monitored and controlled, we could end or at least sharply reduce animal suffering by ending predation and singling out unhealthy animals for veterinary treatment. Violent encounters between humans and animals could also be eliminated. Animal reproduction rates could also be carefully controlled, keeping ecosystems in balance. Humans, the species that has caused the most suffering and damage on this planet, could repay their debt by inaugurating a new age of empathy and harmony. Only we can make technology, so only we can do this.
Finally, I’ll take the next logical step here and get myself into trouble by suggesting this same technology might someday find wide scale use among human beings, and it might actually make the world better. Like animals, humans sometimes get out of control and need various forms of “help”, and, if things could be managed responsibly, I could see how prods from a brain implant or something could help people behave civilly and avoid self-destructive behaviors and thinking.
Due to heavy losses, the Russians are increasingly sending obsolete BMP-1 armored vehicles to fight in Ukraine. It’s inferior in every way to the newer BMP-2. Russia knows of ways to upgrade the BMP-1’s weapons to make it more effective, but lacks the money to do so. https://youtu.be/l5arYlXSVQA
The Russians are so hard up that they’re making tanks out of mixes of old spare parts. In the photo, that weird structure jutting up from the top of the vehicle is actually a turret from a small Soviet warship. The turret and its two heavy machine guns were made in the 1950s, and it was pulled out of some rusted hulk of a ship and plopped down onto the top of an MT-LB armored vehicle (itself obsolete) that was missing its own gun. https://youtu.be/v7NCo9T54U8
Critical parts shortages have forced Russia to send obsolete T-54s to fight in Ukraine. Russia might have 1,000 better T-72 tanks in reserve, but it can’t send them to fight at once because they have to be fixed up first, and there’s a bottleneck of some kind involving one or a few types of components. For all their deficiencies, the T-54s recently seen on the move towards Ukraine are fully operational. https://youtu.be/uRboVa5zyUk
To be fair, other countries have been forced to raid military museums for parts to use in frontline military equipment. https://youtu.be/B372GirZ3Cs
Russia’s winter offensive has failed to change the strategic balance and has just killed and exhausted large numbers of troops on both sides. In proportion to their population sizes, Ukraine and Russia have suffered about equally. https://youtu.be/qPhycuLAtaw
It’s all the more remarkable since the MiG-23 has a poor safety reputation even under normal conditions. Consider that the jet was built to replace the older MiG-21 fighter, but was retired from service sooner than the MiG-21. https://youtu.be/A4LK6mtmZ3E
Recently, some interesting new guns–including this revolver/shotgun–have been invented, but whether they are BETTER than older, more common gun designs is questionable. Maybe “Late Stage Capitalism” has taken over the gun industry. https://youtu.be/bvtLdKfsvSk
‘The fission weapons described above have a theoretical limit to their yield, and the largest such weapon ever developed had a yield of 500 kilotons. Fusion weapons have no such upper limit’ https://ee.stanford.edu/~hellman/sts152_02/handout02.pdf
The Lathe of Heaven was written in 1971, set around 2002, and described a planet wracked by extreme heat and industrial pollution, and in a state of near-famine due to overpopulation at 7 billion people. https://en.wikipedia.org/wiki/The_Lathe_of_Heaven
The EIA says that, in spite of the rise of electric cars, demand for gasoline and diesel fuel will stay high until at least 2050, and the U.S. will remain a net oil exporter until then. We’re never going to run out of oil, contrary to what countless “experts” and sci-fi authors predicted from the 1970s to the 2010s. https://www.eia.gov/todayinenergy/detail.php?id=55840#
Bill Gates says the recent pace of AI advancement has surprised him, that the chatbots the public has access to now (like ChatGPT) actually use technology that is a generation old, and that he’s privately interacted with much more advanced variants. He also says the chatbots that are being created aren’t threats to humankind, and probably won’t lead to true AGI. https://www.ft.com/content/4078d407-f021-464a-a937-11b41a4afb91 https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
Economist Bryan Caplan just lost a public bet that no computer would be able to pass one of his economics midterm tests (he’s a professor) before January 2029. GPT-3 got a “D” on the test this January, but GPT-4 just got an “A.” https://betonit.substack.com/p/gpt-retakes-my-midterm-and-gets-an
A research team at Microsoft claims that GPT-4 has some elements of an artificial general intelligence (AGI). It’s able to do things that exceed what is in its training set of data, meaning it has some ability to reason and to make inferences. https://arxiv.org/abs/2303.12712
Here’s a needed reality check on “AI.” While the systems we now have are powerful and can be made much more so, they’re not actually “intelligent” and never will be. https://youtu.be/GzmaLcMtGE0
Elon Musk thinks humanoid robots may someday outnumber humans. I think robots designed for labor will eventually outnumber humans, though it’s unclear whether those with humanoid body layouts will be more numerous than we are. https://finance.yahoo.com/news/elon-musk-says-humanoid-robots-221414098.html
A computer program called “VALL-E X” can allegedly translate recordings of spoken words from English to Mandarin, while preserving all the unique vocal characteristics of the speaker, including their accent. I have predicted something like this would have to wait until the 2030s to be invented! https://vallex-demo.github.io/
An extinct civilization that reached our level of technology before collapsing would have left enough evidence of its existence for us to have found it by now, even if they died out millions of years ago. If the extinct civilization had merely reached Industrial Revolution levels of development within the past several tens of thousands of years, we would have also found evidence. The only plausible type of yet-undiscovered “lost civilization” is a pre-Ice Age group of people about as advanced and as numerous as the Celts who built Stonehenge. Their impact would have been small enough that all traces of them could have been wiped out, or at least obscured so much that we have yet to find the evidence. While the discovery of such an extinct group would be interesting, it wouldn’t revolutionize archaeology or provide us with new types of science or technology. https://astralcodexten.substack.com/p/against-ice-age-civilizations
‘A predatory songbird, the Northern Shrike sits quietly, often in the top of a tree, before swooping down after insects, mice, and small birds. It kills more than it can eat, impaling the prey on a thorn or wedging it in a forked twig. On lean days it feeds from its larder.’ https://www.borealbirds.org/bird/northern-shrike
Men with high testosterone are more aggressive, less reliable, and likelier to abandon their children. Sons who grow up without fathers are likelier to have elevated testosterone as well, and then to go on to be absent fathers like their own dads. This is a case where genetics, biological development (epigenetics?), and social factors amplify each other. https://www.economist.com/science-and-technology/2022/06/01/fatherless-sons-have-more-testosterone
A woman who managed to lead a normal life in spite of being born without a large portion of her brain shows the organ’s remarkable ability to rewire itself. I wonder if genetic path dependence has left humans saddled with brains that are fundamentally inefficient in some way(s). It would be interesting to see an AGI design a perfect organic brain from scratch. https://news.mit.edu/2023/studies-of-unusual-brains-reveal-insights-brain-organization-function-0221
The human limit: ‘Even heat-adapted people cannot carry out normal outdoor activities past a wet-bulb temperature of 32 °C (90 °F), equivalent to a heat index of 55 °C (130 °F). The theoretical limit to human survival for more than a few hours in the shade, even with unlimited water, is a wet-bulb temperature of 35 °C (95 °F) – equivalent to a heat index of 70 °C (160 °F).’ https://en.wikipedia.org/wiki/Wet-bulb_temperature
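Wet-bulb temperature combines air temperature and humidity into one number. For back-of-envelope checks there’s a well-known empirical fit (Stull, 2011) that works at standard sea-level pressure over ordinary temperature and humidity ranges; here’s a sketch of it. Treat it as an intuition tool, not a safety tool.

```python
import math

def wet_bulb_c(temp_c, rel_humidity_pct):
    """Stull's (2011) empirical wet-bulb approximation.

    Inputs: air temperature in deg C and relative humidity in percent;
    assumes sea-level pressure. Accurate to roughly +/-1 deg C over
    ordinary conditions -- fine for intuition, not for safety decisions.
    """
    t, rh = temp_c, rel_humidity_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Stull's own worked example: 20 deg C air at 50% RH -> ~13.7 deg C wet-bulb.
print(round(wet_bulb_c(20, 50), 1))
```

Note how humidity dominates: 35 °C air at 100% humidity is already at the ~35 °C survivability limit quoted above, while much hotter dry air is not.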
Putting aside all the jokes about Russia’s underperformance and the grim videos of its men dying on the battlefield, Russia has massively built up its forces in Ukraine to 300,000 troops. Even if their training is mediocre and half of their equipment is obsolescent, they’re still a formidable force. Ukraine’s generals predict a major Russian attack by the first anniversary of the invasion, February 24. https://foreignpolicy.com/2023/02/08/ukraine-russia-counteroffensive-abrams-tanks-putin-war/
Russia’s planned increase in the size of its military to 1.5 million people by 2027 will fail because the country lacks enough young men for it. The country’s population has been shrinking and graying for decades now, and the problem accelerated thanks to COVID-19 and to the Ukraine War causing large numbers of its younger people to die or flee. https://finance.yahoo.com/news/demographic-challenges-weigh-russia-military-200000752.html
At a commemoration of the 80th anniversary of the Battle of Stalingrad, Putin said “We are again being threatened by German Leopard tanks.” https://www.bbc.com/news/world-europe-64502504
Small bomber drones seem more effective than they really are due to survivorship bias: we don’t get to see the videos where the attack fails or the enemy shoots down the drone. https://youtu.be/AlpZf1hpQYM
It’s amazing how hamstrung the U.S. submarines were by their defective torpedoes and bureaucratic stubbornness in fixing them for the first half of WWII. https://youtu.be/KSDtGXW7J7I
Fighter plane dogfights are incredibly rare in real life, and when they do happen, the opposing planes only get close to each other for brief instants. Prolonged, tight-turning engagements only happen in movies. https://youtu.be/cbEBr0DDKWQ
The concept behind CAMELEON active camouflage is simple, and though it was never used, I believe it will return once the technology is better and cheaper. https://youtu.be/zLdNeatXCvE
‘A pure fusion weapon is a hypothetical hydrogen bomb design that does not need a fission “primary” explosive to ignite the fusion of deuterium and tritium, two heavy isotopes of hydrogen used in fission-fusion thermonuclear weapons. Such a weapon would require no fissile material and would therefore be much easier to develop in secret than existing weapons.’ https://en.m.wikipedia.org/wiki/Pure_fusion_weapon
A man used ChatGPT to auto-generate conservative responses to liberal Twitter users in an argument about Food Stamps. The liberals were infuriated and had no clue they were dealing with a machine. It could have kept them tied up arguing forever. https://bullfrogreview.substack.com/p/honey-i-hacked-the-empathy-machine
A good example of how humans unconsciously move the goalposts for “true intelligence” higher whenever machines get smarter. ‘When I showed my friends the sonnet by ChatGPT, they called it “soulless and barren.” Despite following all the rules for sonnets, the poem is cliche and predictable. But is the average sonnet by a human any better? Turing imagined asking a computer for poetry to see if it could think like a person. If we now expect computers to write not just poems but good poems, then we have set a much higher bar.’ https://www.washingtonpost.com/books/2023/02/13/ai-in-poetry/
The strongest military will someday be the one that totally automates itself to make the most efficient use of its resources, to design and field the best weapons, and to employ the best strategies and tactics. If the DoD still has a lumbering bureaucracy in 50 years that slows down and messes up every idea suggested by its military AIs, it will lead to disaster. Of course, turning over all military decisions except perhaps the very top ones to machines could also lead to disaster. If we don’t do it, though, China, Russia, or some other country will. We’re locked in a race with no way out but to run faster and faster. https://www.wired.com/story/eric-schmidt-is-building-the-perfect-ai-war-fighting-machine/
‘The T2T consortium used new DNA sequencing technologies and analytical methods to generate and assemble the remaining 8-10% of the human genome sequence. However, the researchers assembled those fragments manually — a process that took this massive and highly skilled team several years to complete. Verkko can finish the same task in a couple of days.’ https://www.genome.gov/news/news-release/nih-software-assembles-complete-genome-sequences-on-demand
‘The Energy Department now joins the Federal Bureau of Investigation in saying the virus likely spread via a mishap at a Chinese laboratory. Four other agencies, along with a national intelligence panel, still judge that it was likely the result of a natural transmission, and two are undecided.’ https://www.wsj.com/articles/covid-origin-china-lab-leak-807b7b0a
By the late 21st century, Earth had become an overpopulated, diseased, polluted nightmare. The small number of super wealthy people escaped by building a large space station in Earth orbit and moving there. The station, called “Elysium,” is a bucolic paradise where everyone lives in a mansion, is protected by robot police, and has a personal rejuvenation pod that fixes any illness or injury when they lie down in it.
The film’s events take place in 2154. Elysium’s only problem is illegal immigration: poor people with major health problems smuggle themselves onto Elysium, and in the few minutes they have from the time their beat-up space ship dumps them onto the grass to the time they get arrested by robot cops, they try to break into a mansion and use one of the rejuvenation pods. Even though Elysium’s government seems to have a handle on the problem since they quickly arrest and deport them all, a government official played by Jodie Foster doesn’t think they’re doing enough, so she has a mercenary named “Kruger” do the dirty work of blowing up illegal immigrant space ships, killing dozens of people at once. After a verbal reprimand from Elysium’s president, Jodie Foster decides to stage a military coup.
Matt Damon lives at the opposite end of the spectrum, in a Los Angeles slum, working a horrible factory job where his boss yells at him all the time and he has no rights. One day, the machine he is in charge of breaks and he has to go inside to fix it. The door accidentally closes behind him and it turns on, zapping him with a dose of radiation that will kill him within five days.
Because Earth hospitals are so poor, his only hope is to illegally immigrate to Elysium to use a rejuvenation pod. He doesn’t have any money, so he can only get a ticket by agreeing to help an underworld crime boss kidnap a rich guy at gunpoint so they can basically steal his ATM PIN by hacking his electronic brain implant (rich people have these). Before Matt Damon goes on this criminal mission, he lets the crime boss upgrade his body with a screw-in exoskeleton kit that gives Damon superhuman strength and his own brain implant.
The job goes bad: Damon’s criminal compatriots accidentally shoot the rich guy in the chest. Instead of trying to render medical assistance, they connect a wire to the rich guy’s head and download his data into Damon’s brain implant. The rich guy dies, it turns out the data is encrypted so the criminals can’t make sense of it, and Kruger shows up and kills them all except Damon, who escapes into the slum.
Matt Damon then becomes the world’s most wanted man because it turns out he has the rich guy’s access codes to the Elysium mainframe, which are super important because they let the user reboot the system and make all humans Elysium citizens. Jodie Foster also wants the codes for her coup.
I won’t spoil the ending, but it’s exactly what you’d expect from Hollywood. I disliked Elysium for its clumsy, excessive moralizing, rushed pacing, and poorly thought out plot. Matt Damon, one of the greatest American actors of his generation, was disengaged in his role and almost looked like he didn’t want to be there. And while some futuristic elements in the movie will probably prove accurate by 2154, like humanoid robots, overall it was totally unrealistic and nonsensical. For example, if rejuvenation pods are the catalyst for illegal immigration, why doesn’t Elysium just give some pods to Earth so the poor people won’t need to go to space and bother them? Why isn’t there a single enterprising rich person on Elysium who sells some pods to Earth to make money for himself? If the people on Earth know that pods exist and know what they do, why can’t they pool their resources to copy the technology and make their own?
Also, before watching this anti-rich people movie, ask yourself how the world got that messed up to begin with. Did it become overpopulated thanks to rich people having huge numbers of kids? Diseased from rich people doing IV drugs and spreading AIDS? Polluted from rich people driving around all X billion cars there are in the world? Did rich people spray paint the buildings in Matt Damon’s slum and throw trash all over it? Absolutely not. If the world ends up as bad as it was in the film, it will be thanks to the bad decisions of billions of people, 99% of whom aren’t rich. In summary, in trying to make a commentary about the present, Neill Blomkamp (ironically, a multimillionaire) sacrifices accuracy depicting the future, and leaves us with a cool-looking but hollow and forgettable film.
Analysis:
The world will be ruined. In the film, Los Angeles was a gigantic slum, and these scenes were shot in the real-life slums of Mexico City. Aside from advanced flying vehicles, military exoskeletons and robot police, Earth’s technological state appears inferior to what it is today. This is unrealistic. By 2154, cities like L.A. will probably be much nicer than today, and extreme poverty will probably be eliminated. The historical record shows that living conditions have been improving across the planet as a whole since the Enlightenment, and the trend is unlikely to change.
There will barely be any white people in Los Angeles. Aside from Matt Damon and a few colleagues at his factory job, no white people are shown living in L.A. This will prove an accurate depiction. Whites became minorities in L.A. and California in the 2010s, and nationally will be minorities around 2045. Their share of the L.A. county population is forecast to keep declining for the foreseeable future.
By 2154, nonwhites, including mixed race people, will comprise the overwhelming majority of the U.S. population. By that point in the future, medical immortality, decreased fertility among all races, and lessened need for immigration thanks to machines doing all the work will cause the racial makeup of the planet to stabilize (this is why I don’t think white people will ever “go extinct” as racist alarmists contend).
Well before 2154, the large population of mixed race people and widespread use of genetic engineering to give people stereotypically “white” traits (light-colored eyes, hair and skin) will seriously scramble our future concept of race. Genetic engineering will also be used to add unnatural traits to the genepool, like orange hair and purple eyes, resulting in significant numbers of humans not resembling any race. Some human beings will have also upgraded themselves and fused with their technology so radically that they won’t belong to any race, and will find the concept irrelevant to their self-identities.
The rich elites will still be overwhelmingly white. Elysium is 90% white, in contrast to the impoverished Earth. While disproportionate wealth and power will stay in the hands of white Americans for generations even after they become minorities, and Europe will also retain its outsized wealth for some time, a lot will happen over the next 141 years to level the playing field. At the very least, all East Asian countries will attain Western standards of living and income. More likely the whole world will have caught up, and in no small part thanks to machines becoming common everywhere and taking over work from humans. In making almost all the Elysium residents white, director Neill Blomkamp again tried to make a social statement in terms we are familiar with today, but at the expense of realism.
Robots will be everywhere. The film featured robot cops, parole officers, doctors, and emergency workers that were just as capable as humans. This will come to pass well before 2154. However, I disagree with the movie’s depiction of these robots all being mechanical-looking, with all their gears and metal surfaces exposed, and I don’t think they’ll have stereotypically machine-sounding voices. They will be more refined, and some will be indistinguishable from humans (androids). Even today’s technology allows machine voices to sound almost the same as natural human voices, and before 2040, they will be indistinguishable.
Humans will still work in factories. Aside from the fact that it makes a futuristic product (robots), Matt Damon’s workplace is the same as a modern-day factory: Human workers in overalls show up every morning and work on the crowded shop floor, pushing buttons, pulling levers and pushing carts full of parts around. The absurdity of this is striking: If the factory is making intelligent, dexterous, humanoid robots, why don’t the managers replace the human workers with some of their own robots?
Labor-intensive factory jobs like those in the film will disappear in developed countries around the middle of this century. Small numbers of highly trained human workers will remain in the factories to oversee machines, but they won’t do grunt work like Matt Damon.
By the end of this century, no one on planet Earth will do labor-intensive factory work, and most factories will be 100% automated. If you think this can’t happen because humans will always be needed to fix the machines, you are wrong. As I said in my review of Terminator, there’s no reason machines won’t eventually be able to build and fully repair each other.
Medical technology will be able to fix almost every problem. To fix any ailment, the rich people need only lie down in a rejuvenation pod and wait for its mechanical “arms” to wave back and forth over them. In this way, even deadly conditions like cancer are fixed in a few seconds. Kruger’s horribly destroyed face is thus reconstructed after a battle with Matt Damon. Curiously though, the machines can’t correct the cellular-level damage that causes old age, and there are some old-looking people walking around Elysium.
This level of technology will exist by 2154, though most health problems will still take much longer than one minute to fix. Massive trauma like having your skull crushed will be impossible to fix, as will reviving people who have been dead and rotting for more than a couple hours. However, diligent use of future medical technologies will be able to keep people young and reverse the aging process.
People will still die of leukemia. A subplot of the film involves the daughter of Matt Damon’s ex-girlfriend. The daughter is about to die from leukemia unless she gets advanced treatment in Elysium. Even though the ex-girlfriend is a nurse and presumably has access to superior medical services since she works in a hospital and has doctor friends, Earth is just so poor and backwards that they can’t cure the daughter. Even though Elysium is hoarding the rejuvenation pods, there’s little reason to assume conventional leukemia treatments wouldn’t be able to cure the disease with over 100 more years of research.
There will be a space station miles in length/diameter orbiting the Earth that can be plainly seen in the sky. Elysium is 37.3 miles wide and orbits 4,000 miles above the Earth. Even in the daytime, the station is visible from the planet’s surface, and its circular shape can be made out. According to other calculations, an object only one mile wide could also be clearly seen if its orbit were the same as the International Space Station, which is a mere 254 miles up.
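The geometry here checks out. A quick sketch of the apparent angular sizes, using the film’s canon figures for Elysium and adding the Moon for comparison:

```python
import math

def angular_size_deg(width_miles, distance_miles):
    """Apparent angular diameter of an object seen from a given distance."""
    return math.degrees(2 * math.atan((width_miles / 2) / distance_miles))

# Elysium: 37.3 miles wide, orbiting 4,000 miles up
elysium = angular_size_deg(37.3, 4000)
# The Moon, for comparison: 2,159 miles wide, ~238,900 miles away
moon = angular_size_deg(2159, 238900)
# A hypothetical 1-mile-wide object at the ISS's ~254-mile altitude
small_station = angular_size_deg(1, 254)

# Elysium would look about as big as the full Moon, and even the 1-mile
# object is far above the naked eye's ~0.017-degree resolution limit.
print(f"Elysium: {elysium:.2f} deg, Moon: {moon:.2f} deg, "
      f"1-mile object: {small_station:.2f} deg")
```

So the movie’s visual of a Moon-sized ring hanging in the daytime sky is one of its more defensible details.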
While the technology and money to build such space objects will be available by 2154, I’m unsure if the investment will actually be made. For one, while it would make sense to build some types of massive objects in space like solar panel arrays and sunshades (to ease global warming), they would be positioned so far from Earth that people on the ground wouldn’t be able to see them.
We’ll be assembling space ships in space by 2154, but I’m not sure if we’ll be doing it in low Earth orbit. The Lagrange points probably make more sense. Even if we did build them in LEO, I don’t see what purpose any of them would serve by being a mile or more in length, nor would any “space factories” that built them need to be that large.
I don’t think the rich will ever move to a giant space station because they decide Earth sucks, but I wouldn’t be surprised if there were one or more “space hotels” in low Earth orbit by 2154 that catered to rich tourists. Even that far in the future, though, rocketing enough material into space to make a mile-wide space hotel will be too expensive, and there wouldn’t be enough clientele to fill all the rooms anyway. However, I can see a workaround: massive sheets of Mylar.
Imagine a luxury space hotel that’s similar in size to a cruise ship. It’s basically an elongated box measuring 1,000 ft x 200 ft x 150 ft, which is in the same size range as a real cruise ship. Even in low Earth orbit, it’s still too small to see from the ground. To fix that problem and hence boost the station’s publicity, huge “wings” or “sails” are attached to its sides. Made of Mylar, the sails are very lightweight and compact, meaning it’s affordable to rocket them into space. Once attached to the sides of the station, they’re unrolled and oriented to face Earth, making the station look much bigger. It would kind of resemble a butterfly, with an elongated, relatively compact “core” and very thin, flat accessory protrusions on either side.
The station’s wings/sails would have no functional purpose. While many people would protest plans to mar the sky with such an object, it might be built anyway. NIMBYs don’t always win.
Robot exoskeletons will exist and will give wearers superhuman strength and endurance. Matt Damon has one of these “grafted” to his body, and it proves invaluable in the many fistfights he has with killer robots and mercenaries, and in the self-extrications he does freeing himself from crashed vehicles and prying apart heavy metal doors that are trying to close on him. These will definitely exist by 2154, but they will not be crudely screwed into wearers’ bodies (during the “operation” where this is done, they don’t even take Damon’s clothes off, so he’s wearing a ridiculous bloody T-shirt UNDER his exoskeleton for the rest of the movie). As I concluded in my review of Edge of Tomorrow, the first combat exoskeletons could make their debut in the 2050s, 100 years before the film is set to happen. With an extra century of development time, they should be significantly better than what Matt Damon had.
Highly refined brain-computer interfaces will exist. In the film, the rich people have small devices sticking out of their heads resembling cochlear implants which allow them to interface their brains with computers. Files can thus be directly transferred between the two. Devices like these will be common by 2154, though they will probably be completely internal, meaning they won’t have parts sticking out from the person’s skin.
Old guns will use new ammo. Matt Damon uses a normal pump-action shotgun to fire a tiny sticky bomb onto a rich guy’s flying car. After the car takes off, Damon remotely detonates it and the car crashes. During the ensuing battle with the rich guy’s two robot guards, Damon kills one of them using a 200-year-old AK-47 firing proximity-fused explosive bullets that are linked to a control computer in a small gun sight.
The concept is clearly borrowed from the XM-25 and shows where the technology will be once refined. I really liked this as it shows high technology being seamlessly incorporated with low technology in a realistic way, and it nods to the fact that the basic gun designs we have today are optimal or close to optimal, so further performance improvements will have to come from peripheral things like better ammo and sights.
By 2154, gun sights will provide a composite picture that intelligently overlays images from several parts of the electromagnetic spectrum. They will have computers that can recognize objects and humans, and visually highlight them for the shooter’s benefit. The scope computers will also have ballistic calculators that move the target reticle based on factors like distance, inclination/declination, wind velocity, air pressure, humidity, and barrel temperature.
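The core of such a ballistic calculator is simple. A minimal sketch, deliberately ignoring drag and atmosphere (a real scope computer would model both; the muzzle velocity below is the standard figure for AK-47 ball ammunition):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_mrad(range_m, muzzle_velocity_ms):
    """Reticle holdover in milliradians for a flat-fire shot, ignoring drag.
    A real ballistic computer layers a drag model, air density, inclination,
    and wind on top of this baseline."""
    t = range_m / muzzle_velocity_ms           # time of flight (no drag)
    drop_m = 0.5 * G * t ** 2                  # gravity drop during flight
    return 1000 * math.atan(drop_m / range_m)  # holdover angle, mrad

# 7.62x39mm ball from an AK-47 leaves the muzzle at roughly 715 m/s
for r in (100, 300, 600):
    print(f"{r} m: hold {drop_mrad(r, 715):.1f} mrad above the target")
```

The sight’s computer would run a much richer version of this calculation continuously and shift the reticle for the shooter.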
The guns themselves might have self-aiming mechanisms like the Smartgun from Aliens. A rifle would have a sort of metal “frame” around it, and at several different points, levers and metal cables would connect the rifle to the inside of the frame. By telling those levers and cables to tighten or slacken, the scope could quickly make fine adjustments to where the barrel was pointed, compensating for flaws in the shooter’s aim.
Routine use of highly advanced ammunition incorporating better propellants and features like timed airburst, tandem warheads, steering fins, and mini guided rockets will also make guns more accurate and deadlier against a greater range of targets. The guns of 2154 will also have computers built into them that will link with the user’s brain computer, allowing the person to instantly “know” where to point the weapon to hit the desired target without having to look through a sight.
Combining all of these technologies, the mechanical “guts” of a 200-year-old AK-47 could be used to make a future rifle with incredible capabilities. A better aiming system would double the maximum range at which it is lethal against humans, and make it possible to rapidly shoot the weapon from the chest with the same accuracy as today’s careful sniper shots from bolt-action rifles. The weapon could even shoot down low-flying aircraft, cripple vehicles from long distances with bullets through their vital components like tires and gas tanks, or even disable tanks by destroying their fragile external sensors or sending bullets directly down the barrels of their main guns to hit the shells loaded in them.
Small homing weapons will kill people. During Matt Damon’s botched kidnap attempt on the rich guy, Kruger arrives and kills one of Damon’s accomplices with hand-sized, frisbee-like flying objects that home in on targets that Kruger marks with a small laser. Once they reach their targets, they latch onto them and explode.
Smart weapons like these will be old technology by 2154, and in fact will probably exist within 20 years and take the form of tiny quadcopter drones. Since it might be too hard for them to latch onto targets, especially if the targets are moving or able to swat the drones down, they will probably be programmed to blow up once they get within a few feet of the target, or upon colliding with any part of it.
Facial recognition software will be in common use, even among robots. Throughout the film, surveillance cameras with facial recognition software are used to identify people in public places. Quadcopter drones with cameras also do this when looking for Matt Damon. These will also be old technologies by 2154.
Facial recognition software is already quite reliable, and is sometimes paired with fixed-position surveillance cameras, particularly in higher-tech authoritarian countries like China. However, the software’s accuracy gets worse as the angle at which the camera is placed gets steeper. In other words, a camera six feet off the ground, pointed straight at a person’s face will be able to recognize them easily, but the same camera installed 20 feet off the ground on top of a pole, looking sharply down at the same person so it mostly just sees their hair, will struggle to tell who they are.
For this reason, aerial drones are currently unsuited for autonomously tracking down specific humans. However, that will surely change once more biometric data on people becomes available. Future robots that walk around at ground level with us will recognize us easily thanks to having unobstructed views of our faces and bodies. In the future, you’ll never be a stranger to a robot, or to a human with access to facial recognition software.
Super guns will exist. During the final battle on the Elysium station, Matt Damon finds an advanced automatic rifle with “CHEMRAIL” written on the side and he uses it to kill a bad guy. The gun makes electronic noises when “charging up” and firing, and the bullets are propelled with such force that they easily pass through a wall and literally tear his opponent apart. Canon Elysium literature states that the gun uses electromagnetic forces instead of exploding gunpowder to propel the bullets, and that the bullets leave the gun with 18,000 Joules of energy. That’s powerful, but not unfathomably so: A .50 caliber bullet (used in some sniper rifles and heavy machine guns) has roughly 15,000 Joules.
Small arms with this level of power will be more common in the future because robots and augmented humans that are strong enough to carry and shoot them will exist. A human wearing an exoskeleton could fire such a weapon on full auto like Matt Damon did, but an average person could not. There was a major error in the battle scene since Matt Damon had the CHEMRAIL gun pressed against his shoulder and was holding the handle with his bare hand. His exoskeleton didn’t bear the recoil of the weapon at all. So in real life, had he fired it, the gun’s recoil would have broken his shoulder and wrist. However, had the weapon been directly braced against his exoskeleton, the force would have been transmitted directly into it, and not his body.
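The energy comparison is easy to check with the kinetic energy formula (the CHEMRAIL bullet’s mass isn’t given anywhere, so I’ve borrowed the .50 BMG’s as an assumption):

```python
import math

def muzzle_energy_j(mass_kg, velocity_ms):
    """Kinetic energy of a projectile: E = (1/2) * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

def velocity_for_energy_ms(mass_kg, energy_j):
    """Muzzle velocity needed to deliver a given energy."""
    return math.sqrt(2 * energy_j / mass_kg)

# .50 BMG ball: a ~42 g bullet at ~845 m/s
fifty_cal = muzzle_energy_j(0.042, 845)            # roughly 15,000 J
# CHEMRAIL's quoted 18,000 J, assuming a bullet of the same mass
chemrail_v = velocity_for_energy_ms(0.042, 18000)  # roughly 926 m/s

print(f".50 BMG: {fifty_cal:.0f} J; 18 kJ needs {chemrail_v:.0f} m/s at 42 g")
```

In other words, the CHEMRAIL is only about 20% more energetic than a Barrett rifle, which is why the recoil problem is about bracing, not physics.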
There will still be text-based computer interfaces. Throughout the film, characters eschew GUIs and instead use simple, text-based computer interfaces that resemble MS-DOS. For certain applications, these will still be used in 2154 since they’re optimal. However, reading characters off screens will be unnecessary in most cases since brain implants will let humans instantly “feel” and “know” what the computer wants to tell them, and vice versa. Intelligent machines themselves will be able to wirelessly interface with technology even more directly and easily.
Text-on-screens will, along with devices that operate on purely mechanical principles, probably exist as backups to more sophisticated technology. For example, imagine a wristwatch that can wirelessly transmit the time to your brain implant so you can know with a single thought what time it is. The wristwatch would still have a face with a small LED screen, which you could look at to see what time it was in case the wireless chip in the watch broke.
Shoulder-launched missiles launched from Earth will be able to fly thousands of miles into space. There’s a scene early in the film where a group of illegal immigrants gets into small space ships and flies from L.A. to Elysium. Inexplicably, Elysium lacks the weapons to blow up the ships or at least disable them before reaching the station, so the only way to stop them is to have Kruger shoot them down with surface-to-air missiles. Using a shoulder launcher, he fires several missiles that have enough power to exit the Earth’s atmosphere, overtake the space ships and destroy them. Since the station orbits about 4,000 miles above Earth, the ships were also thousands of miles up when they were destroyed.
No chemical fuel can contain enough energy to propel a small missile that far and fast. The only way such a thing MIGHT be possible is if the missiles had mini nuclear fusion engines, which may or may not be feasible, even with the highest possible level of technology. By 2154, I doubt such weapons will exist.
Helicopter-sized craft will be able to fly back and forth between the Earth’s surface and space. It takes an enormous amount of energy to defeat gravity and to put something into space. Case in point: a 300-foot-tall rocket is needed just to put something the size of a large van into orbit. In the film, the van-sized object doesn’t need the huge rocket anymore: four small engines and a small fuel tank can do it.
I think this is probably impossible. The closest we might get is passenger jet-sized craft flying into space with four or five people inside. For a more detailed discussion, see my Starship Troopers review.
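The Tsiolkovsky rocket equation shows why: with chemical propellants, almost the whole vehicle has to be fuel. A quick sketch, using standard approximations for the delta-v to orbit and the best chemical exhaust velocity:

```python
import math

def mass_ratio(delta_v_ms, exhaust_velocity_ms):
    """Tsiolkovsky rocket equation: ratio of liftoff mass to burnout mass."""
    return math.exp(delta_v_ms / exhaust_velocity_ms)

# ~9,400 m/s of delta-v to reach low Earth orbit (including gravity and
# drag losses), against ~4,400 m/s exhaust velocity for hydrolox, the
# most energetic chemical propellant in wide use
ratio = mass_ratio(9400, 4400)
propellant_fraction = 1 - 1 / ratio  # roughly 0.88

print(f"Mass ratio {ratio:.1f}: about {propellant_fraction:.0%} "
      f"of liftoff mass must be propellant")
```

A craft that is nearly 90% fuel by mass at liftoff looks like a flying tank farm, not a helicopter with “a small fuel tank,” and no amount of engineering refinement changes the exponent.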
Today’s guns will still be in use. At several points in the film, people are shown carrying contemporary guns like AK-47s and M-16s. These are used in gun battles with cutting-edge soldier robots and expert mercenaries. By 2154, few of the firearms existing today will still be in use since they will all have long since worn out and been shredded for scrap metal. Guns, like anything else, gradually wear out with use and at some point become dangerous to fire and not worth fixing.
However, the basic DESIGNS for guns are timeless. From a mechanical engineering standpoint, guns like the AK-47 and M-16 are optimized for what they do, and there’s no way to significantly improve upon them. So in 2154, newly manufactured AK and M-16 descendants could still represent the cutting edge of small arms technology.
Certainly they’ll still be effective at killing humans since our skin isn’t evolving to become bulletproof, and even armored machines could still be killed with enlarged versions of those guns designed to fire stronger bullets. However, while the internal mechanics will be conserved, future guns will look at least a little different on the outside.
Personal energy shields that can stop bullets will exist. Kruger has a pocket-sized device that, when activated, creates a semi-transparent, circular shield in front of him. It only lasts a few seconds, but it can block a hail of bullets, even from the super-powerful CHEMRAIL gun.
This is scientifically implausible. There’s no intangible force that could be harnessed to make moving objects with large amounts of kinetic energy instantly stop in midair, as if they’d hit a solid object.