A U.S. soldier in 1944 holding a captured German “Panzerschreck” (left) and an American Bazooka (right). Both weapons were designed to penetrate the thick armor of tanks and to blow them up, but the larger Panzerschreck was more powerful. The Germans based the Panzerschreck on Bazookas they captured from U.S. POWs in 1942 in North Africa.
A DeepMind LLM secured a silver medal in the 2024 International Math Olympiad. “The fact that the program can come up with a non-obvious construction like this is very impressive, and well beyond what I thought was state of the art.” -Prof Sir Timothy Gowers, IMO gold medalist and Fields Medal winner https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/
After his shockingly bad debate against Donald Trump, President Joe Biden ended his 2024 reelection bid. At least two pundits predicted this:
“I predict the kingmaker who saved Biden’s campaign in the 2020 primary, South Carolina Congressman Jim Clyburn, will this time be the king slayer. After Biden insists that he is running for re-election, Clyburn, a respected elder statesman with gravitas, will tell Biden his defeat of Trump in 2020 was enough, and now it’s time for another candidate, without Biden’s baggage, to lead the way. Biden, a stubborn man, will eventually agree. By spring 2023, more than 20 Democrats will enter the contest and the party in 2024 will emerge with a newer, younger leader.” -Ari Fleischer, May 2, 2022 https://www.foxnews.com/opinion/joe-biden-wont-run-in-2024-ari-fleischer
‘Steve Forbes made a bold prediction Friday morning while sitting with Fox News’ Bill Hemmer and Jacqui Heinrich on “America’s Newsroom”: President Joe Biden will not be the Democrats’ 2024 nominee, despite announcing his reelection campaign in April.’ -May 26, 2023 https://www.yahoo.com/entertainment/joe-biden-not-democrats-2024-231657505.html
‘Musk, 53, has directed SpaceX employees to drill into the design and details of a Martian city, according to five people with knowledge of the efforts and documents viewed by The New York Times. One team is drawing up plans for small dome habitats, including the materials that could be used to build them. Another is working on spacesuits to combat Mars’ hostile environment, while a medical team is researching whether humans can have children there. Musk has volunteered his sperm to help seed a colony.
…The Boring Co., a private tunneling venture founded by Musk, was started in part to ready equipment to burrow under Mars’ surface, two of the people said. Musk has told people that he bought the social platform X partly to help test how a citizen-led government that rules by consensus might work on Mars. He has also said that he envisions residents on the planet will drive a version of the steel-paneled Cybertrucks made by Tesla, his electric vehicle company.’ https://www.yahoo.com/news/thermonuclear-blasts-species-inside-elon-114134734.html
A new study finds evidence that the brain patterns involved in gender identity are distinct from the brain patterns that reveal biological sex at birth. The study used fMRI brain scans of children aged 9 and 10, along with survey questions meant to find how they viewed their gender. “These gender-associated brain networks were distinct from those associated with assigned sex at birth.” https://www.science.org/content/article/brain-imaging-study-children-shows-sex-and-gender-operate-different-networks-brain
New genetics research shows that Americans with ADHD and anxiety/depression disorders have been reproducing the most, while people with high educational attainment have been reproducing the least.
It’s the summer of 2024, and the world is in crisis. Twenty years of rising international tensions and competition for dwindling oil have split the strongest countries into two blocs: the Euro-American “Coalition” and the Sino-Russo “Red Star Alliance.” You are the leader of an elite American special forces squad fighting under the banner of the Coalition, and over the course of the video game, you’ll lead your men from the oil fields of Turkmenistan all the way to the heart of Moscow as your side fights to capture the remaining oil reserves and end the Russian threat once and for all. In your missions, you use futuristic guns and drones, and command weapons of war like jeeps, tanks, and helicopters to destroy the enemy. Not even nuclear strikes can stop you. It’s victory…or nothing!
THAT is the awesomest recap of the 2008 first person shooter game Frontlines: Fuel of War that I can muster, and I hope it grabbed your attention because the game actually wasn’t so epic. Putting aside the scarily evocative storyline, it was a paint-by-numbers FPS game with generic weapons, the occasional combat vehicle for you to commandeer, and mediocre AI enemies. Anyone who played Halo 2, which was released four years before this, will recognize all the same game elements.
Frontlines’ missions are not imaginative, and you don’t need any real tactics to beat them: rely on your ability to absorb inhuman amounts of lead and keep blasting until all the bad guys are dead. The game has Black Hawk Down / Iraq War vibes, which is understandable given the time when it was made. I don’t have a good memory for this, but the graphics were probably above average for 2008.
Of course, I’m not reviewing Frontlines for its qualities as a video game; instead, I want to examine how well it predicted the future–which is now our present time–16 years ago. For better or worse, video games are a hugely popular medium that shapes global culture and even our views of what the future will be like. The game is a work of science fiction since it’s set in the then-future and features technologies that didn’t exist yet, and like a typical work of this sort, it’s a time capsule that shows the anxieties of its moment in history.
The game was released in February 2008, near the height of an alarming, multi-year spike in the price of oil and only a year after the Iraq War–which some claimed was a secret oil grab perpetrated by U.S. leaders who had insider knowledge that Peak Oil was nigh–hit its bloody climax. Fears were widespread that oil would just keep getting more expensive and that the root cause was a global shortage. In fact, the problem proved temporary: Saudi Arabia failed to pump enough extra oil to keep pace with rising global demand (particularly from China), and the resulting imbalance between supply and demand caused the 2004-08 price spike. The U.S. occupation of Iraq also ended without the latter turning into an oil-producing colony of the former.
It’s important to keep the failures of works like Frontlines: Fuel of War in mind when contemplating how today’s science fiction films, books, TV shows, and games depict the future. The common themes in such recent works are American decline and internal strife (Civil War, The Forever Purge), rise of a fascistic American dictatorship (The Handmaid’s Tale, The Creator), the masses suffering under the cruel yoke of megacorporations and the rich (Snowpiercer), and disastrous climate change (also Snowpiercer). If you take anything away from this essay, let it be a strong skepticism of whatever future doomsday movie or book makes the rounds next.
Analysis:
The world is nearly out of oil. In the game, the world hit “Peak Oil” shortly after 2008 and oil production collapsed over the next few years. By around 2020, oil had become so expensive due to its scarcity that even rich countries like the U.S. were afflicted with chronic electricity, food and water shortages. The in-game reporter character who accompanies the Coalition unit even says at one point that mass riots had become common in U.S. cities, and hundreds would die in the disorder in one night. By 2024, the only remaining oil wells on Earth are in Central Asia, and the world’s major powers are so desperate to control it that they start WWIII over it. Obviously, none of this happened.
What saved us? Hydraulic fracturing (“fracking”), an advanced method of recovering oil from underground deposits, which was pioneered in the U.S. It sharply increased the country’s oil output over the 2010s. By 2018, America was the world’s biggest oil producer, and it has held that title ever since. More than any other factor, the advent of fracking has kept oil cheap globally since 2008. The biggest pie in Frontlines’ face is the fact that oil prices are actually much LOWER in 2024 than they were when the game was released, and that Peak Oil DEMAND could happen as early as 2030 thanks to the rise of electric cars and solar power.
But even if global oil production had peaked in 2008, output levels never would have fallen as sharply as they did in the game: the collapse was so total that just 16 years later, Turkmenistan was the only country with oil left (in fact, it is not even one of the top 10 oil producers in the world today). In reality, the decline would have been much more gradual, and the world would have largely compensated by using more coal and natural gas (and in some countries, greater use of nuclear power). Instead of mass blackouts and nightly, murderous mayhem, America would be swept by mass complaining and people having to make do with slightly smaller houses and cars. Likewise, the world’s major nations wouldn’t be so desperate for energy that they’d be willing to start WWIII with each other to get it.
A pandemic happened in recent memory. Though only spoken of briefly in the game, an avian flu pandemic swept the world in 2009. The game’s narrator was a youth at that time, and he mentions that his parents withdrew him from school because they couldn’t get him a vaccine. This was partly accurate: an H1N1 “swine flu” pandemic really did sweep the world in 2009, and the COVID-19 outbreak that began in 2019, among its many ill effects, forced closures of schools across the world.
Russia and China have formed a military alliance. The bad guys in the game are the “Red Star Alliance,” a military pact between Russia, China and a few smaller countries that border them. While Russia and China have closer relations than they did in 2008, their closeness owes to shared hostility toward, and exclusion by, the West, not to any fondness for each other, and there is no mutual defense component to it.
China views Russia’s invasion of Ukraine as a mistake and a potential flashpoint for a larger war that China would gain nothing from. As such, China has refused to sell Russia weapons for use in Ukraine, though it has provided large amounts of other goods (microchips, jet engines, etc.) that Russia used to build weapons of its own. Given the different temperaments and strategic priorities of the countries’ leaders, it is highly unlikely they will form a mutual defense arrangement unless there’s a major change to the global order. Neither wants to get dragged into the other’s wars: Russia doesn’t want to fight over Taiwan, and China doesn’t want to fight over Ukraine.
U.S. troops don’t use the M-16 series rifle anymore. The Coalition troops that we see all have American accents and use a smoothly contoured, plasticky rifle that resembles the aborted “XM-8.” This means the U.S. military has abandoned the M-16 series as its standard rifle. This hasn’t happened, and the XM-8 was canceled before entering service because, though it was slightly better than the M-16 series in some ways, the advantage was not so great that it justified the cost of replacing millions of the older rifles.
There are now plans to replace the M-16 series with a heavier, more powerful rifle called the “XM-7,” but I’m skeptical the plan will be carried to completion and instead expect it will find a role as a specialist weapon.
All infantrymen, including the Russians and Chinese, have holographic eyepieces. Every soldier seen in the game has a square, holographic eyepiece jutting down from the bottom of his helmet rim and over one eye. Coalition eyepieces glow blue while Red Star eyepieces glow red, presumably because the two sides have an agreement to differentiate themselves according to who is good or evil. It’s unclear what the eyepieces display over their wearers’ fields of vision, though a fair guess would be the overhead battlefield map with objectives and enemy positions highlighted that the player sees at the top of the screen.
While augmented reality eyewear keeps making appearances at military trade shows across the world, and all modern militaries have some program dedicated to evaluating them, they are not in common field use. A notable exception to this is short-range drone pilots, many of whom wear virtual reality goggles to remotely fly their craft. However, they don’t wear those goggles when engaged in rifle combat with the enemy like in the game.
Rifle scopes are much more common and more advanced than they were in 2008, and duplicate one aspect of the game’s eyepieces: when looked through, the scopes show glowing reticles over the shooter’s field of view, indicating where their bullets will hit. This makes target acquisition faster and more accurate, and the scopes have become standard equipment in several major militaries. In that sense, “augmented” or “holographic” visioning devices are common on the battlefield in 2024.
There are hand-launched attack drones. In the game, you can launch handheld, hovering drones that you then remotely pilot to enemy targets whereupon you detonate them. They are small enough to fly through open windows and hallways and are best suited for attacking fortified positions like machine gun pillboxes. A drone’s explosive load is about the same as a grenade’s. This is probably the game’s most important and prescient prediction about 2024.
The Ukraine War has seen mass use of drones by both sides. This includes countless, small quadcopter drones that closely resemble those in the game. Some are kamikazes that are sacrificed upon use while others are reusable and drop mini-bombs. They’re so effective and cheap that they’re commonly used to hunt down lone infantrymen and don’t have to be reserved just for valuable targets like tanks. If anything, the game UNDERestimated how pervasive and transformative aerial drones would be on the 2024 battlefield.
There are small ground drones. However, the game’s prediction that small ground drones would be in common use has failed for several reasons. First, small vehicles with little wheels and low ground clearances can’t negotiate the uneven terrain found on typical battlefields: a barbed wire fence, log, or pile of rubble that a human could easily step over could be an impassable barrier to a mini-tank the size of a coffee table. Sizing them up to overcome these issues results in them no longer being small enough for infantrymen to carry into the field. Second, since ground vehicles move slowly and basically in just two dimensions, they’re easy targets for enemy troops (contrast this with aerial drones, which can move fast and in three dimensions). This means they’re less survivable and might need some kind of armor, adding to their cost and bulk. Third, small ground drones are expensive because they require more material for their manufacture than flying drones. Above a certain unit price point, it doesn’t make sense to use them sacrificially like you can with aerial drones.
There’s a particularly unrealistic moment in the game where you use a skateboard-sized, remote controlled suicide drone to drive under an enemy tank and blow it up. Again, this would only work if the route to the tank were over flat, hard ground with no debris in the way, which you would never count on being the case in combat. The real 2024 solution would be to use a shoulder-launched missile or a small aerial kamikaze drone loaded with a shaped charge explosive. Those missiles and drones can also target the thin armor on the top sides of tanks, which is almost as vulnerable as the belly armor that a skateboard drone’s explosion would tear into.
That said, future advances in robotics will eventually fix the problem: small ground robots with legs instead of wheels would be able to quickly negotiate difficult terrain and attack ground targets. History offers a precedent: during WWII, the Soviets deployed bomb-laden dogs that were trained to run across the battlefield, dive under enemy tanks and then explode. While the dogs were fast and nimble enough to do it, problems like the animals being spooked by gunfire foiled the scheme. It will surely take decades, but dog-like combat robots will become a reality, and I’m sure they’ll have combat niches, but I can’t say whether they will be preferred to other kinds of futuristic weapons for specific tasks like destroying tanks.
Russian troops are bad at fighting. From the start of the game, in every mission where you fight Russia, you do nothing but drive them back. For a country with such a fearsome reputation, this seems paradoxical, but it actually isn’t: The ongoing Ukraine War, the first Chechen War, the first year of WWII, and the Russo-Finnish War bear out the fact that the Russian army fights poorly (sometimes disastrously so) when the stars align the wrong way. Though Russians are more courageous and brutal than average on the battlefield and have great skill improvising, poor training, bad leadership, and supply shortages perennially undermine their overall performance. The problem gets worse when the war involves a place and an objective that average Russians don’t care about.
Russia’s military reputation has taken a major hit due to its poor performance in Ukraine since 2022: appalling losses have forced it to fall back on antiquated weapons drawn from Soviet stockpiles and on convict troops and paid foreign mercenaries. The Russians have made strategic blunders, and on the battlefield rely on uncreative tactics (mostly wearing down the Ukrainians with mass artillery strikes and frontal attacks with infantry). Aside from their tenacity, there’s little to be impressed with, and in a direct conventional war with U.S. troops like the “Coalition” team you lead in the game, the Russians would badly lose in peripheral places like Central Asia. However, they would fight much harder inside Russia itself, as it is their sacred homeland.
Russia used nuclear weapons to defend itself from land invasion. After beating up the Russians in Central Asia, the Coalition decides to keep going with a land invasion across the Kazakhstan border into Russia itself, with the objective of conquering the latter. This makes little sense since the Coalition had already accomplished its goal of capturing the last remaining oil well in the world, and since an organization composed of democratic Western governments would never behave so recklessly. The response is predictable: Russia launches nuclear missiles against the Coalition armored force, causing major damage to it. (That mission is the most stunning in the game, as it involves fighting a tank battle punctuated by nearby nuclear explosions.)
Thankfully, no one has tried invading Russia since 1941, so it has never needed to use nuclear weapons in self-defense. But let there be no doubt it would: Russia’s defense doctrine clearly states that it will use nuclear weapons if its territory is threatened. The game’s depiction of how this would play out is plausible: instead of launching an all-out nuclear attack against the Coalition’s cities, Russia starts by using only smaller, tactical nuclear weapons against the Coalition forces crossing the border, in a remote area with few or no civilians. The game doesn’t mention it, but such a strike would surely be preceded by top-level warnings from Russia to the Coalition governments about what was coming.
I think Russia, the U.S., and China are the world’s three “unconquerable countries” because of their sheer size and nuclear arsenals. The armies of other countries might be able to defeat them on foreign soil, but it would be hopeless to invade any of the three in an attempt to take them over since too many troops would be needed and they have enough nuclear weapons to annihilate any attacker. The final mission of the game is the storming of downtown Moscow, and in it, mushroom clouds are visible in the distance, meaning Russia has been using nuclear weapons against Coalition troop concentrations during their travels through its territory. I can’t fathom how any army could survive repeated nuclear attacks like that, nor do I see how the home fronts in the Coalition countries would avoid falling into chaos over widespread panic that Russia would nuke them at any moment as well.
Big tank battles are happening in Europe. As mentioned, the Coalition invasion of Russia is spearheaded by a large number of tanks. In the first invasion mission and subsequent ones set deeper in Russia, there are instances where your character must command a tank and fight Russian tanks. To the surprise of people in 2008, this turned out to be accurate.
The Ukraine War has seen many tank battles since 2022, with a series of particularly large ones happening in early 2024 for control of the town of Avdiivka. Up to this point in the War, 17,168 of Russia’s armored vehicles have been destroyed and 2,925 captured by Ukraine.
China has conquered Taiwan. The game focuses on the European theater of the war, so almost all of the combat is against Russian troops. Midway through the game, it is mentioned that China invaded and quickly took over Taiwan. Thankfully, this didn’t happen, so Frontlines: Fuel of War can be added to the enormous trash heap of sources that have wrongly predicted such an invasion since at least the 1980s. Additionally, the insinuation that Chinese ground troops could easily take over the island is almost certainly wrong: while China’s army is massive, its amphibious forces are small, which creates a major bottleneck for getting its troops across the Taiwan Strait and providing them with supplies.
U.S. attack subs lurking underwater and long-range antiship missiles fired from Taiwan and by U.S. warplanes might fatally damage a Chinese landing fleet before it reached the beaches. More generally, marshalling a naval fleet for a D-Day scale invasion is sure to be an extremely risky and high-casualty endeavor in today’s age of 24/7 spy satellite surveillance and long-range precision missiles. While the world has been primed to expect a future Chinese invasion of Taiwan to be an inevitable and unstoppable juggernaut, it could actually be the most legendary naval defeat since the loss of the Spanish Armada.
Russian defense missiles intercepted U.S.-made missiles that Ukraine fired at Crimea. The midair explosions from the two groups of missiles colliding hurled shrapnel down onto a packed beach, killing five Russian civilians and injuring more.
The Ukrainians almost certainly weren’t targeting civilians, and their missiles were probably headed for Russian warships or bases. Nevertheless, Russia has sworn revenge. https://www.bbc.com/news/articles/c6pppr719rlo
Russia now has three captured M1 Abrams tanks. Each one is damaged in a different way. I bet their working parts could be combined to make one working tank. https://youtu.be/yBhYcMb8Tng?si=kYf17lu3a_eyjV2e
The carousel autoloader found in Soviet and Russian tanks isn’t necessarily a fatal design flaw. If the Russians copied the gunpowder from the advanced ammunition they found in the captured German Leopard 2 tank, then their own tanks would become much less likely to blow up thanks to their own ammo cooking off. https://youtu.be/6A4CqxGMBQw?si=DUc_wyPlbWI7QgHv
The third variant of the British WWII Sten submachine gun was one of the simplest and cheapest guns ever made. It’s interesting to see how those factors hurt its reliability and longevity, even compared to other Sten variants. The Mark III was truly a throwaway weapon. https://youtu.be/W0qlOOE8G_k?si=LNbw1XOwYr5urr5A
The man who invented a small add-on device that turns any Glock into a full-auto weapon is a mechanical genius from Venezuela. He created the first device in the late 1980s while he was still a teenager and working at a gun shop. Only in recent years has the device started becoming common among criminals. https://www.yahoo.com/gma/feel-terrified-inventor-glock-switch-090429902.html
Nvidia briefly became the world’s most valuable company, with a market cap over $3.4 trillion. It was worth $418 billion just two years ago. https://www.bbc.com/news/articles/cyrr40x0z2mo
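The two figures in that item imply a startling growth rate; a quick back-of-the-envelope check (the market cap numbers come from the text, the arithmetic is mine):

```python
# How fast Nvidia's valuation grew, using the figures from the text:
# $418 billion two years ago, $3.4 trillion at the peak.
start, end = 418e9, 3.4e12
years = 2

multiple = end / start              # total growth over the period
annualized = multiple ** (1 / years)  # equivalent constant yearly multiple

print(round(multiple, 1), round(annualized, 2))  # ~8.1x total, ~2.85x per year
```

In other words, the company's valuation roughly octupled, the equivalent of nearly tripling every year for two years running.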
Without using the term “singularity,” math whiz and former OpenAI staff member Leopold Aschenbrenner published an essay series claiming that milestone is upon us: radical changes thanks to AI will happen in just the next ten years. https://situational-awareness.ai/
‘In February, AI-based forgery reached a watershed moment–the OpenAI research company announced GPT-2, an AI generator of text so seemingly authentic that they deemed it too dangerous to release publicly for fears of misuse. Sample paragraphs generated by GPT-2 are a chilling facsimile of human-authored text. Unfortunately, even more powerful tools are sure to follow and be deployed by rogue actors.’ https://hbr.org/2019/03/how-will-we-prevent-ai-based-forgery
According to the 1999 movie The Thirteenth Floor, by June 21, 2024 we were supposed to have had AGI, full immersion virtual reality like The Matrix, lifelike digital worlds, and really cool-looking glass skyscrapers in L.A. https://youtu.be/UCsR9iPvX0I?si=iLM31LuQMMcC5Q1u
This prediction was accurate:
“I think Joe Biden will run again in 2024 and I think he will run against someone with the last name ‘Trump.’ I do not know whether that is Trump or Trump Jr…”
There’s a reason why the classic Spielberg movies have slightly “off” colors that make them look old fashioned in a subtle way: the film stock used back then had a limited color range. It would be interesting to use AI to reverse those distortions and create rereleases of those films that are true to the actual colors on set. https://youtu.be/kQmIPWK8aXc?si=7L2nPuUZ9aaVU_EM
Some forest fires are actually caused by bacteria. Just like a human-made compost pile, underground peat deposits can get extremely hot due to the metabolisms of bacteria that inhabit and eat them. Furthermore, sudden jumps in surface temperature caused by heat waves can cause the bacteria to raise the peat temperature by even more. The result is spontaneous combustion. https://theconversation.com/zombie-fires-in-the-arctic-smoulder-underground-and-refuse-to-die-whats-causing-them-221945
‘Breeders also value posture, hoof solidity, docility, maternal ability and beauty. Those eager to level up their livestock’s genetics pay around $250,000 for an opportunity to collect Viatina-19’s egg cells.’
And there are other ways the Drake Equation could be tweaked to result in humans being the only intelligent species in the galaxy. ‘According to Stern and Gerya, it’s likely quite rare for planets to have both continents and oceans along with long-term plate tectonics, and this possibility needs to be factored into the Drake Equation.’ https://gizmodo.com/drake-equation-update-fermi-paradox-intelligent-life-1851503974
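Because the Drake Equation is just a chain of multiplied factors, "tweaking" it means multiplying in one more term. A toy calculation makes the effect concrete (every parameter value below is an illustrative assumption of mine, not a number from the article or from Stern and Gerya):

```python
# Toy Drake Equation: N = R* * fp * ne * fl * fi * fc * L
# All parameter values below are illustrative assumptions, not sourced figures.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime, extra_factors=()):
    """Expected number of communicating civilizations in the galaxy."""
    n = r_star * f_p * n_e * f_l * f_i * f_c * lifetime
    for f in extra_factors:  # optional extra terms, e.g. a plate-tectonics factor
        n *= f
    return n

# Optimistic-ish baseline: many civilizations.
baseline = drake(1.0, 0.5, 2, 0.5, 0.1, 0.1, 10_000)

# Stern and Gerya's idea: planets with continents, oceans, AND long-lived
# plate tectonics may be rare. Multiplying in a small factor collapses N.
with_tectonics = drake(1.0, 0.5, 2, 0.5, 0.1, 0.1, 10_000,
                       extra_factors=(0.003,))

print(baseline, with_tectonics)  # ~50 civilizations vs. fewer than 1
```

The point is structural: any sufficiently small new factor drives the expected count below one, which is exactly how a Drake Equation tweak can "result in humans being the only intelligent species in the galaxy."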
Researchers have discovered a gene that causes obesity in some people. Genetic engineering and new medical interventions will end the global obesity problem in the future. The average person will be taller, thinner, stronger, and healthier. https://www.cnn.com/2024/06/20/health/obesity-genetic-wellness/index.html
It won’t be long before you’ll be able to feed a computer a script or the text of a book, and it will be able to produce a professional-quality audiobook or film. It would be so fascinating to finally see the great, unmade movies (like Stanley Kubrick’s epic biopic about Napoleon) or to see movies that stayed true to their written source material so they could be compared with what was actually made. Jurassic Park comes to mind as a famous movie that diverged greatly from the book. Imagine the same, CGI-generated characters in the same island setting, with the same style of soundtrack and cinematography, but with different dialog and different plot points than happened in the film we all know.
Will RV living and houseboat living be the norm in the future? Think about it: If humans won’t have jobs in the future, then they won’t have enough money to buy houses, making RVs and boats the only affordable option. Even a bus-sized recreational vehicle is only 1/3 the price of a typical American home, and a houseboat with the same internal volume is 2/3 the price. Also, without jobs, humans would have much less of a reason to stay tethered to one location and could indulge in their wanderlust. Additionally, thanks to VR being more advanced, people won’t need large TVs or computer monitors, easing the need for spacious living rooms.
Humans talking about the need to control AGI to ensure our dominance is not threatened are like Homo erectus grunting to each other about the need to keep Homo sapiens down somehow. It’s understandable for a dominant species to want to preserve its status, but that doesn’t mean such a thing is in the best interests of civilization.
It’s still unclear whether LLMs will ever achieve general intelligence. A lot of hope rests on “scaffolded systems,” which are LLMs that also have more specialized computer apps at their disposal, which they’re smart enough to know to use to solve problems that the LLM alone can’t.
Part of me thinks of this as “cheating,” and that a scaffolded system would still not be a true general intelligence since, as we assigned it newer and broader tasks, it would inevitably run into new types of problems it couldn’t solve but humans could because it lacked the tool for doing so.
But another part of me thinks the human brain might also be nothing more than a scaffolded system that is comprised of many small, specialized minds that are only narrowly intelligent individually, but give rise to general intelligence as an emergent property when working together (Marvin Minsky’s “Society of Mind” describes this). Moreover, we consider the average human to be generally intelligent even though there are clearly mental tasks that they can’t do. For example, through no amount of study and hard work could an average IQ person get a Ph.D in particle physics from MIT, meaning they could never solve cutting-edge problems in that field. (This has disturbing implications for how we’ve defined “general intelligence” and implies that humans actually just inhabit one point in a “space of all possible intelligent minds.”) So if an entity’s fundamental inability to handle specific cognitive tasks proves they lack general intelligence, then humans are in trouble. We shouldn’t hold future scaffolded systems to intelligence standards we don’t hold ourselves to.
Moreover, it’s clear that humans spend many of their waking hours on “mental autopilot,” where they aren’t exercising “general intelligence” to navigate the world. An artificial mind that spent most of its time operating in simpler modes guided by narrow AI modules could therefore be just as productive and as “smart” as humans in performing routine and well-defined tasks.
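The scaffolded-system idea above can be made concrete with a toy sketch. Everything here is hypothetical: a crude keyword dispatcher stands in for the LLM's "decide which tool to use" step, and the two narrow tools stand in for the specialized apps at its disposal.

```python
# Toy "scaffolded system": a general controller routes sub-problems to
# narrow specialist tools. All names and routing logic are hypothetical.

def calculator(expr: str) -> str:
    # Narrow tool: exact arithmetic, a known weak spot for bare LLMs.
    # eval() is restricted to arithmetic by emptying the builtins.
    return str(eval(expr, {"__builtins__": {}}))

def reverser(text: str) -> str:
    # Narrow tool: exact character-level string manipulation.
    return text[::-1]

TOOLS = {"calculator": calculator, "reverser": reverser}

def scaffold(task: str) -> str:
    """Crude stand-in for the LLM deciding which specialist to invoke."""
    if any(ch.isdigit() for ch in task):
        return TOOLS["calculator"](task)
    return TOOLS["reverser"](task)

print(scaffold("17 * 23"))  # routed to the calculator -> "391"
print(scaffold("drawer"))   # routed to the reverser -> "reward"
```

The "cheating" worry from above maps directly onto this sketch: hand the dispatcher a task type with no matching tool and it fails, just as a human fails at cognitive tasks outside their own repertoire.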
In spite of these heavy losses, Russia has so many weapons left over from the Soviet era that it won’t run out of them, even at current loss rates, for two or three years. As the shortages near the critical threshold, I predict Russia will make up for it by starting to import old Soviet and Soviet-compatible weapons from friendly countries like North Korea. https://www.iiss.org/online-analysis/military-balance/2024/02/equipment-losses-in-russias-war-on-ukraine-mount/
Missiles and artillery fired from within Russia have been hitting targets inside of Ukraine. The Ukrainians have Western-made weapons with the ranges to destroy those Russian sites, but donors like the U.S. refuse to let Ukraine use them against Russian territory for fear it will lead to an expansion of the fighting. There’s a growing consensus among Western leaders that they should ease the rule and let Ukraine use their weapons to attack Russian soil. Putin is warning that this would lead to “serious consequences.”
President Biden has said he will withhold some military aid to Israel if it sends ground troops into the last Palestinian-controlled city in Gaza, Rafah. There are widely held fears that such an operation would kill large numbers of civilians. https://www.cnn.com/2024/05/08/politics/joe-biden-interview-cnntv/index.html
Israeli troops seized control of Gaza’s land border with Egypt. They claimed it was necessary to shut down secret tunnels that were being used to smuggle things across the border. https://www.bbc.com/news/articles/c1994g22ve9o
Peter Zeihan’s dour 2019 predictions about the future of China’s economy have proven accurate:
“So I would argue that fixing this [by] deflating the bubble, I think that I think that ship sailed 20 years ago, and so the question becomes ‘is this triggering going to be internal or external?’ Let’s start with internal. Demographically, we are going to be seeing a contraction in Chinese domestic economic activity simply because of demographics within the next five years.”
An economic “contraction” doesn’t necessarily mean negative growth; it can mean a sharp decrease in the positive growth rate. For example, if my personal income rises by $5,000 per year, but then one year the growth rate shifts down to only a $1,000 increase each year, in economic terms I’ve experienced a contraction. China’s GDP growth rate and domestic spending growth rate are both way down from where they were in 2019 when Zeihan made his prediction.
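The income example can be put in concrete numbers (the figures below are the hypothetical ones from the paragraph above):

```python
# The income example above, in hypothetical numbers: income never stops
# rising, yet the *rate* of increase collapses from $5,000/yr to $1,000/yr.
incomes = [50_000]               # assumed starting income
for _ in range(3):               # boom years: +$5,000 per year
    incomes.append(incomes[-1] + 5_000)
for _ in range(3):               # slowdown: only +$1,000 per year
    incomes.append(incomes[-1] + 1_000)

yearly_growth = [b - a for a, b in zip(incomes, incomes[1:])]
print(yearly_growth)                       # [5000, 5000, 5000, 1000, 1000, 1000]
print(all(g > 0 for g in yearly_growth))   # True: growth was never negative
```

Every year still shows positive growth, which is why a slowdown like this can feel deceptively benign even as the underlying trend breaks.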
OpenAI unveiled its latest and most advanced LLM, "GPT-4o". At the demo, the machine carried on a conversation with a human presenter in a totally natural and intelligent manner. https://www.youtube.com/live/DQacCB9tDaw?si=GPXXv9mHoh5NcA1d
Actress Scarlett Johansson claimed OpenAI had cloned her voice without her permission to synthesize GPT-4o's voice, and quickly hired lawyers to pursue the matter. Though OpenAI says it didn't break the law and used a different human to create the voice, it nonetheless disabled the voice feature indefinitely. Johansson famously voiced "Samantha," a sentient AI character in the 2013 movie Her. https://www.foxbusiness.com/entertainment/openai-accused-mimicking-scarlett-johansson-tech-company-pauses-chatgpt-voice
GPT-4 has passed the five-minute Turing Test. “GPT-4 was judged to be a human 54% of the time, outperforming ELIZA (22%) but lagging behind actual humans (67%).” https://arxiv.org/abs/2405.08007
A large number of AI safety staff quit OpenAI nearly at once. While NDAs prevent most of them from talking about it, people in the know say they were unhappy with Sam Altman's dishonesty and lack of commitment to their mission. https://www.lesswrong.com/posts/ASzyQrpGQsj7Moijk/openai-exodus
‘AI might wreak havoc on traditional studio moviemaking, with its massive budgets and complex technical requirements. But in the process, it is likely to make high-quality filmmaking much less expensive and logistically arduous, empowering smaller, nimbler, and less conventional productions made by outsiders with few or no connections to the studio system.’ https://reason.com/2024/05/25/ai-is-coming-for-hollywoods-jobs/
An important lesson from the last few years is that job automation will sweep across the workforce in unexpected ways. For example, no one believed jobs involving artistry would be automated before jobs involving simple physical labor, like flipping burgers. It might prove more profitable for companies to replace their leaders with AIs sooner than they replace their assembly line workers.
Regardless, keep in mind there’s probably no limit to how far job automation can go. In 50 years, if you’re part of that lucky 1% of the adult human population that still has a “real job,” don’t gloat at the unemployed masses because it will only be a few years before your position is also taken by a machine. https://www.nytimes.com/2024/05/28/technology/ai-chief-executives.html
Here's a very fascinating case study of a young Mexican man who was born deaf and whose parents never taught him sign language. As a result, he never developed any kind of linguistic ability and had a totally different way of thinking (he lacked "symbolic thinking" and couldn't conceive of attaching names to objects) and of dealing with people. After he illegally immigrated to the U.S., a linguist stumbled upon him and slowly taught him sign language.
‘As part of her discussion of the human rights of the deaf, Schaller makes the argument, familiar also from Benjamin Whorf (and also brought up in the commentary on Henrich’s WEIRD article) that language diversity itself is an insight into human cognitive diversity: ‘Every language is an outcome of how the human brain works. We don’t know how much we can do with our one brain, even, and each language has used the brain in a slightly different way.’ However, there’s an even deeper and more profound cognitive diversity in her discussion of Ildefonso: the possibility of language-less human thought, something that theorists like Merlin Donald have attempted to discuss.’ https://neuroanthropology.net/2010/07/21/life-without-language/
Something that makes no sense in Star Wars and many other space movies is the inability of spacecraft to quickly point in any direction to bring their guns to bear on the enemy. Usually there’s a good guy fighter plane being pursued by a bad guy fighter plane, and the good guy yells out “I can’t see him because he’s behind me! Help!”
In reality, since there’s no air resistance to deal with in space, the good guy could instantly flip his fighter plane around and shoot the bad guy. You see two examples of that in the movie “Oblivion.” https://youtu.be/zRvXcyznOsQ?si=86sSlxUQHrvnw4Nc
In 1999, the Space Shuttle Columbia nearly suffered a catastrophe that would have forced it to attempt an emergency landing back on Earth right after it lifted off. https://youtu.be/qiJMdfj9NmI?si=g-PHc0zHoyTXtF0M
‘Overall, this is very impressive performance, although I should note that it is not up to the various headlines of “AlphaFold 3 Predicts All The Molecules of Life!” and so on. In almost every area it’s a significant improvement over anything that we’ve had before – including previous AlphaFold versions – and in some of them (protein-antibody and protein-RNA) it appears to be (for now!) the only game in town, even though it’s not an infallible oracle in those cases by any means.’ https://www.science.org/content/blog-post/alphafold-3-debuts
‘These results strongly suggest Neanderthal-derived DNA is playing a significant role in autism susceptibility across major populations in the United States.’ https://www.nature.com/articles/s41380-024-02593-7
There are many types of mental disability, and they have many different causes, including mutations to single genes. A newly discovered example is the gene RNU4-2: mutations to it account for 0.41% of mental disability cases.
Better knowledge of the human genome and cheaper prenatal DNA screening will let us reduce the population prevalence of mental and physical disorders in the future. https://pubmed.ncbi.nlm.nih.gov/38645094/
Sony has created a tiny robot that can do precise microsurgeries. In this video, it makes an incision in a corn kernel and then stitches it up. https://youtu.be/bgRAkBNFMHk?si=LmjjLDkwgHp4zbgp
My last blog entry, “What my broken down car taught me about the future,” has compelled me to write a new essay that shows how some of its insights will apply more generally in the future, and not just to cars and related industries. Due to several factors, manufactured objects will generally last much longer in the future, and sudden catastrophic failures of things will be much less common.
Things will be made of better materials
Better computers that can more accurately simulate atomic forces and chemical reactions will lead to the discovery of new types of alloys and molecules. Those same computers, perhaps with the aid of industrial and lab robots, will also find the best ways to synthesize the new materials. Finally, the use of machine labor at every step of this process will basically eliminate labor costs, allowing the materials to be produced more cheaply than they could be with human workers today.
This means in the future we will have new kinds of metal alloys, polymers and crystals that have physical properties superior to whatever counterparts we have today. Think of a bulletproof vest that is more flexible and only half as heavy as Kevlar, or a wrench that is lighter than a common steel wrench but just as tough. And since machines will make all of these materials at lower cost, more people will be able to afford them and they will be more common. For example, if carbon fiber were cheaper, more cars would incorporate it into their bodies, lowering their weight.
Things will be designed better
In my review of the movie Starship Troopers, I discussed why the fearsome assault rifle used by the human soldiers was flawed, and why it would never come into existence in the future:
It wouldn’t make sense for people in the future to abandon the principles of good engineering by making highly inefficient guns like the Morita. To the contrary, future guns will, just like every other type of manufactured object, be even more highly optimized for their functions thanks to AI: Just create a computer simulation that exactly duplicates conditions in the real world (e.g. – gravity, all laws of physics, air pressure, physical characteristics of all metals and plastics the device could be built from), let “AI engineers” experiment with all possible designs, and then see which ones come out on top after a few billion simulation cycles. I strongly suspect the winners will be very similar to guns we’ve already built, but sleeker and lighter thanks to the deletion of unnecessary mass and to the use of materials with better strength-to-weight ratios.
That same computer simulation process will be used to design all other types of manufactured objects in the future. Again, as computation gets cheaper, companies will be able to run simulations to find the optimal designs for every kind of object. Someday, even cheap, common objects like doorknobs will be the products of billions of computer simulations that stumbled on the optimal size and arrangement of components through trial-and-error experiments with slightly different combinations.
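A minimal sketch of that simulate-and-select loop, shrunk down to a single design parameter. The scoring function is a stand-in for a real physics simulation, and every number here is invented for illustration:

```python
import random

def simulated_score(thickness_mm: float) -> float:
    """Stand-in for a physics simulation: reward strength, penalize mass."""
    if thickness_mm < 3.0:       # too thin: the part fails in simulation
        return 0.0
    strength = min(thickness_mm / 10.0, 1.0)   # strength saturates at 10 mm
    mass_penalty = thickness_mm / 25.0         # heavier designs score worse
    return strength - mass_penalty

random.seed(0)
design = 20.0                    # start from a deliberately overbuilt design
best = simulated_score(design)
for _ in range(10_000):          # "billions of cycles," in miniature
    candidate = design + random.uniform(-0.5, 0.5)  # mutate the design slightly
    score = simulated_score(candidate)
    if score > best:             # keep only improvements
        design, best = candidate, score

print(round(design, 1))          # converges near 10 mm: strong enough, no excess mass
```

The search ends up deleting exactly the unnecessary mass the scoring function penalizes, which is the "sleeker and lighter" outcome described above, just in one dimension instead of millions.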
As a result, manufactured objects will be more efficient and robust than today, but most won’t look different enough for humans to tell they’re different from today’s versions of them. The difference will probably be more apparent in complex machines like cars.
Things will be made better
Even if a piece of technology is well-designed and made of quality materials, it can still be unreliable if its parts are not manufactured properly or if its parts aren’t put together the right way. Human factory workers cause these problems because of poor training, tiredness, intoxication, incompetence, or deliberate sabotage. It goes without saying that advanced robots will greatly improve the quality and consistency of factory-produced goods, as they will never be affected by fatigue or bad moods, and will follow their instructions with perfect accuracy and precision. As factories become more automated, defective products will become less common.
Things will be used more carefully
As I noted in the essay about cars, most cars have their lifespans cut prematurely short by the carelessness of their owners. Gunning the engine will wear it out sooner, speeding over potholes will destroy shocks, and generally reckless driving will raise the odds of a car accident that is so bad it totals the vehicle.
Every type of manufactured object has engineering limits beyond which it can't be pushed without risking damage. Humans lack the patience and intelligence to learn what those limits are for every piece of technology we interact with, and we lack the fine senses to always stay below those limits. While trying to unscrew a rusted bolt, you WILL put so much torque on the wrench that you snap it.
On the other hand, machines will have the cognitive capacity to quickly learn the engineering limits of every object they encounter, the patience to use them without exceeding those limits, and the sensors (tactile, visual, auditory) to watch what they're doing and how much force they're applying. No autonomous car will ever overstress its own engine or drive over a pothole so fast it breaks part of the suspension system, and no robot mechanic will ever snap its own wrench trying to unscrew a stuck bolt. As a consequence, the longevity of every type of manufactured object will increase, in some cases astonishingly. The average lifespan of a passenger vehicle could exceed 30 years, and a simple object like a knife might stay in use for 100 years (until it had been worn down by so many resharpenings that it was too thin to withstand any more use).
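As a toy sketch of that "stay under the engineering limit" behavior: the torque values, safety margin, and `try_loosen` routine below are all invented for illustration.

```python
# A robot ramps up torque on a stuck bolt, but never past a safety
# margin of the wrench's rated maximum. All numbers are illustrative.

WRENCH_MAX_TORQUE_NM = 120.0
SAFETY_MARGIN = 0.8          # never exceed 80% of the wrench's rated max

def try_loosen(bolt_breakaway_nm: float) -> bool:
    """Ramp up torque gradually; return True if the bolt turns safely."""
    limit = WRENCH_MAX_TORQUE_NM * SAFETY_MARGIN
    torque = 0.0
    while torque + 5.0 <= limit:         # check BEFORE applying more force
        torque += 5.0
        if torque >= bolt_breakaway_nm:  # tactile feedback: the bolt moved
            return True
    return False                         # back off; soak the bolt in oil instead

print(try_loosen(60.0))   # True: an ordinary stuck bolt comes loose
print(try_loosen(150.0))  # False: the robot gives up before snapping the wrench
```

The key difference from a human is the second case: the machine declines to finish the job rather than destroy its tool.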
Things will be maintained better
Even if you have a piece of quality technology and use it carefully, it will still need periodic maintenance. A Mercedes-Benz 300 D, perhaps the most reliable car ever made, still needs oil changes. Your refrigerator's coils need to be brushed clean of debris periodically. Your hand tools need to be checked for rust and hairline cracks and sprayed down with some kind of moisture protectant. All of your smoke alarms must be tested for function once a month. It goes on and on. If you own even a small number of possessions, it's amazing to learn how many different tasks you SHOULD be undertaking regularly to keep them maintained.
Needless to say, few people take proper care of their things. Usually they didn’t read the user manual, memorize the section on maintenance, set automatic digital reminders to themselves to perform the tasks, and then rigidly follow them for the rest of their lives. So sue them, they’re only humans with imperfect memories, limited personal time, and limited self-discipline.
Once advanced robots are ubiquitous, these human-specific factors will disappear. Your robot butler actually WOULD know what kind of upkeep every item in your house needed, and it would do it on schedule. Operating around the clock (they won't need to sleep, and could plug themselves into wall outlets with extension cords for indefinite power), a robot butler could do an enormous amount of maintenance work for you and could devote itself to truly minuscule tasks, like hunting down tiny problems you never would have known existed.
I'm reminded of the time I noticed a strange sound coming from a seldom-used bathroom in my house. It was the toilet: water was flowing through it continuously, making a loud trickling sound. After removing the lid, I immediately saw the problem: the flush lever–which was made of plastic–had snapped in half, causing the flapper to jam in the open position.
Upon close inspection I noticed something else wrong: The two metal bolts that held the toilet tank to the bowl were so badly rusted that they had practically disintegrated! In fact, after I merely scraped the left bolt with my fingernail, it fell apart into an inky cloud of rust that spread through the water. It was a small miracle that the heavy tank hadn't slid off already and fallen to the floor (this would have flooded the house if it had happened when I wasn't home).
I went to the store, bought new bolts, a new flapper, and a new flush lever, and installed them. The toilet works like new, and its two halves are tightly joined again as they should be. Inspecting the inside of your toilet tank is another one of those things every homeowner should probably be doing once every X years, but of course no one does, and as a result, some number of unlucky people suffer the disaster I described above. Thanks to house robots, that will stop. And of course the superior maintenance practices won't be confined to households: all kinds of businesses and buildings will have robots doing the same work for them.
People also commonly skip maintenance because they lack the money for it. As I wrote in my essay about cars and the car industry, this will be less of an issue in the future thanks to robots doing work for free. Without human labor to pay for, the costs of all types of services, including maintenance, will drop.
Problems will be found earlier
A beneficial side effect of more frequent preventative maintenance will be the discovery of problems earlier. Putting aside jokes about scams, consider how common it is for mechanics to find unrelated problems in cars while doing an oil change or some other routine procedure. Because components often fail gracefully rather than abruptly, machines like cars can keep working even with a part that is wearing out (e.g. – cracked, leaking, bent). The machine's performance might not even seem different to the operator. That's why the only way to find many problems with manufactured objects is to go out of your way to look for them, even if nothing seems wrong.
Again, once robots are ubiquitous and put in charge of common tasks, they’ll do things humans lack the time, discipline, and training to do, like inspecting objects for faults. Once they are doing that, problems will be found and fixed earlier, making sudden, catastrophic failures like your car breaking down on the highway at night less frequent.
Repairs will be better
Just because you find a problem before it becomes critical and fix it doesn’t mean the story is over. Some catastrophic failures of machines happen because they are not repaired properly. As robots take over such tasks, the quality and consistency of this type of work will improve, meaning a repair job will be likelier to solve a problem for good.
Machines will be better-informed consumers, which will drive out bad products
My previous blog essay was about my quest to find a replacement for my old car, which had broken down. It was a 2005 Chevrolet Cobalt, which I got new that same year as a birthday present. Though I'd come to love that car over the next 19 years, I had to admit it wasn't the best in its class. I drove it off the lot without realizing the air conditioner was broken and had to return a few days later to have it fixed. After a handful of years, one of the wheel bearings failed, which was unusually early and thankfully covered by the warranty. My Cobalt was recalled several times to fix different problems, most notoriously the ignition switch, which could twist itself to the "Off" position while the car was driving, suddenly locking the steering wheel in one position and leaving the driver unaware of why it happened (this caused 13 deaths and cost GM a $900 million settlement with the U.S. government, plus much more to fix millions of defective cars). Whenever I rented cars during vacations, I almost always found their steering and suspension systems to be more crisp and comfortable than my Cobalt, which felt "mushy" by comparison.
The 2005 Honda Civic was a direct competitor to my Cobalt, and was simply superior: the Civic had better fuel economy, a higher safety rating, better build quality, and the same amount of internal space. Since the Civics broke down less and used less gas, they were cheaper to own than Cobalts. When new, the Civic was actually cheaper, but today, used 2005 Civics actually sell for MORE than 2005 Cobalts! With all that in mind, why were any Chevy Cobalts bought at all? I think the answers include brand loyalty, the bogus economics of trading an old car for a new one, aesthetics (some people liked the look of the Cobalt more), but most of all, a failure to do adequate research. Figuring out what your actual vehicle needs are and then finding the best model of that type of vehicle requires a lot of thought and time spent reading and taking notes. Most people lack the time and skills for that, and consequently buy suboptimal cars.
Once again, intelligent machines won’t be bound by these limitations. Emotional factors like brand loyalty, aesthetics and the personal qualities of the salesperson will be irrelevant, and they will be unswayed by trade-in deals offered by dealerships. They will have sharp, honest grasps of what their transportation needs are, and will be able to do enormous amounts of product research in a second. Hyper-informed consumers like that will swiftly drive inferior products and firms out of the market, meaning cars like my beloved Chevy would go unsold and GM would either shape up fast or go bankrupt fast (which they actually did a few years after I got my car).
If companies only manufactured high-quality, optimized products, then the odds of anything breaking down would decrease yet more. Everything would be well-made.
In conclusion, thanks to all of these factors, sudden failures of manufactured objects of all kinds will become rarer, and their useful lives will be much longer in the future than now. This will mean less waste, fewer accidents, and fewer crises happening at the worst possible time.
A massive number of Iran’s missiles didn’t reach Israel because they malfunctioned and crashed. The confrontation showed the supremacy of Israeli and American weapons. https://youtu.be/COBDSmx9QDw?si=oa1JaWuy9OkyrnCB
However, the Soviet T-62, a 1960s-era design, is now obsolete on the modern battlefield. Russia is using them in Ukraine anyway due to shortages of better tanks like the T-72. https://youtu.be/cJfvIOAs-2o?si=MKQ09SDfnwtCqVGi
Good Lord, these predictions from 2022 were totally wrong:
‘House prices in the United States — which rose during the pandemic by the most since the 1970s — are falling too. Economists at Goldman Sachs expect a decline of around 5%-10% from the peak reached in June through to March 2024.
James Cameron released remastered 4K versions of Aliens, True Lies, and The Abyss. He used new computer technology to radically sharpen the images by removing the grain of the 35mm film stock and tuning the colors. I predicted this would happen, but not until the 2030s:
‘Computers will also be able to automatically enhance and upscale old films by accurately colorizing them, removing defects like scratches, and sharpening or focusing footage (one technique will involve interpolating high-res still photos of long-dead actors onto the faces of those same actors in low-res moving footage). Computer enhancement will be so good that we’ll be able to watch films from the early 20th century with near-perfect image and audio clarity.’ https://www.joblo.com/james-cameron-4k-restoration-defense/
We need an expert consensus on what tests a machine must pass to be deemed a “general intelligence.” Right now, there is no agreement, so a computer could be declared to be an “AGI” if it passed one set of tests favored by one group of experts while failing other sets of tests favored by others.
Famed philosopher Daniel Dennett died. He recently said this about the future of AI: ‘AIs are likely to “evolve to get themselves reproduced. And the ones that reproduce the best will be the ones that are the cleverest manipulators of us human interlocutors. The boring ones we will cast aside, and the ones that hold our attention we will spread. All this will happen without any intention at all. It will be natural selection of software.”‘ https://www.bbc.com/future/article/20240422-philosopher-daniel-dennett-artificial-intelligence-consciousness-counterfeit-people
Humans are optimized for only a narrow set of living conditions. As with space, intelligent machines will beat us to colonizing underwater regions.
‘Key problems include low temperatures, high pressure and corrosion. The change in gases – such as an increase in helium – also breaks electrical equipment and makes people feel cold; the Sentinel habitat will need to be heated to 32 degrees to make it feel like 21. High humidity also creates the potential for a lot of bacteria build-up, with people at risk of getting skin and ear infections, and the pressure also means people’s taste buds stop working – so those of the Sentinel will be eating food loaded with spices.’ https://www.yahoo.com/tech/inside-300-long-project-live-190000639.html
‘The Space Shuttle launching from Cape Canaveral in Florida (28.5° north of the equator) is a 0.3% energy savings compared to the North Pole. If we move it to around the equator, such as the European Space Agency’s spaceport in French Guiana, we’d get about 0.4% savings. Maybe that doesn’t seem like a big deal, but every bit helps.
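The quoted percentages can be roughly reproduced if we assume they compare the free kinetic energy from Earth's rotation against the kinetic energy of a ~7,800 m/s low-Earth-orbit velocity (both speeds below are approximations):

```python
import math

# Earth's surface moves fastest at the equator; the boost falls off
# with the cosine of latitude. Kinetic energy scales with v^2.

V_EQUATOR = 465.0    # m/s: Earth's surface rotation speed at the equator
V_ORBIT = 7_800.0    # m/s: approximate LEO orbital speed

def rotation_energy_fraction(latitude_deg: float) -> float:
    """Fraction of orbital kinetic energy supplied by Earth's spin."""
    v_boost = V_EQUATOR * math.cos(math.radians(latitude_deg))
    return (v_boost / V_ORBIT) ** 2

print(f"{rotation_energy_fraction(28.5):.1%}")   # Cape Canaveral: ~0.3%
print(f"{rotation_energy_fraction(5.2):.1%}")    # Kourou, French Guiana: ~0.4%
print(f"{rotation_energy_fraction(90.0):.1%}")   # North Pole: 0.0%
```

The computed values match the quote's 0.3% and 0.4% figures, which suggests this is the comparison its author had in mind.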
During the last Ice Age, the planet wasn’t just colder, it was drier. Because so much water was locked up in the enlarged ice caps and glaciers, the atmosphere was drier and it rained less in the parts of the world closer to the equator. The deserts were larger than they are now and the rain forests were smaller. The equatorial regions were more clement to human life, but that wasn’t saying much. https://en.wikipedia.org/wiki/Last_Glacial_Maximum
1% of people have “extreme aphantasia,” meaning they can’t visualize ANYTHING in their minds. 6% of people have lesser degrees of aphantasia. 3% of people have “hyperphantasia,” meaning they can see mental images that are so vivid they can’t tell them apart from real images they’re seeing in front of them. https://www.bbc.com/news/health-68675976
‘“The textbooks say nitrogen fixation only occurs in bacteria and archaea,” says ocean ecologist Jonathan Zehr at the University of California, Santa Cruz, a co-author of the study. This species of algae is the “first nitrogen-fixing eukaryote”, he adds, referring to the group of organisms that includes plants and animals.’ https://www.nature.com/articles/d41586-024-01046-z
Brain scans that map the structure and activity of a brain can predict whether it belongs to a biological male or female with 99.7% accuracy. https://arxiv.org/abs/2309.07096
It’s interesting to look back on this essay by a Russian blogger from two years ago. The war hasn’t ended yet, but out of the six possible outcomes he forecast, #2 seems the likeliest right now. He predicted there was a 90% chance it WOULDN’T end that way.
2. Bloody slog, “draw” (10%) — Russia’s military tries for months, but proves simply unable to take and control Kiev. Russia instead contents itself with taking Ukraine’s south and east (roughly, the blue areas in the election map above) and calls it a win. In this case, western Ukraine likely later joins NATO. https://www.maximumtruth.org/p/understanding-russia
Russia has started making common use of “glide bombs” against Ukraine, which are devices that are added to dumb bombs to give them precision strike capabilities. After being dropped from a plane, they can glide as much as 40 miles to a target. https://youtu.be/ThNxRoDbuDE?si=9UR21d61L1jB7bDH
This video filmed by Russian sailors on another doomed ship, the Caesar Kunikov, shows them using machine guns to try fending off Ukrainian drones. They learn the same lesson that Allied bomber gunners and antiaircraft gunners learned in WWII: humans suck at shooting moving targets. https://youtu.be/oLOCGWn65T4?si=h2VAOW61JyX_cWZh
In WWII, the Electrolux home appliance company converted bolt-action rifles to semi-auto using the ugliest and most complicated setup I’ve seen. https://youtu.be/iMKwDHPkRLw?si=4nlY5V7MGnq9E2HW
Alcatraz has been digitally preserved after the island and all its structures were scanned using computers. As the costs of the technology drop, it will make sense to scan more places, until the whole planet has been modeled. https://www.nytimes.com/2024/02/28/us/alcatraz-island-3d-map.html
Since 2006, electricity demand in the U.S. has been flat overall, leading to hopes that we were on track to decarbonize the economy by steadily reducing per capita electricity consumption. However, the recent, explosive growth in cryptocurrency mining and AI has led to the construction of more data centers, which consume huge amounts of electricity. The switch to electric cars, even as it reduces gasoline consumption, is also putting more strain on the power grid. The latest projections show a 35% national increase in electricity demand between now and 2050.
Unfortunately, it’s unclear how well the supply side will be able to cope with this surge in demand. The construction of new power plants and power lines is made slow and expensive by government procedures, NIMBY people, and the need to acquire land and rights of way for it all. America’s greenhouse gas emission goals will also not be met due to this expansion of the electric grid. https://liftoff.energy.gov/vpp/
‘Because of these challenges, Obama Energy Secretary Ernest Moniz last week predicted that utilities will ultimately have to rely more on gas, coal and nuclear plants to support surging demand. “We’re not going to build 100 gigawatts of new renewables in a few years,” he said. No kidding.
The size of the investment and its timetable for completion suggest the goal is to create GPT-6 or an equivalent AI at the same pace that past versions of the GPT series have been released. https://www.astralcodexten.com/p/sam-altman-wants-7-trillion
“I would put media reporting at around two out of 10,” he says. “When the media talks about AI they think of it as a single entity. It is not.
“And when people ask me if AI is good or bad, I always say it is both. So what I would like to see is more nuanced reporting.” https://www.bbc.com/news/business-68488924
This NBER paper, “Scenarios for the transition to AGI”, was just published and contains very fascinating conclusions.
Using different sets of equally plausible assumptions about the capabilities of AGIs and constraints on economic growth, the same economic models led to very different outcomes for economic growth, human wages, and human employment levels. I think their most intriguing insight is that the automation of human labor tasks could, in its early stages, lead to increases in human wages and employment, and then to a sudden collapse in both once a certain threshold was reached. That collapse could also happen before true AGI was invented.
Put simply, GPT-5 might increase economic growth without being a net destroyer of human jobs. To the contrary, the number of human jobs and the average wages of those jobs might both INCREASE indirectly thanks to GPT-5. The trend would continue with GPT-6. People would prematurely dismiss longstanding fears of machine displacement of human workers. Then, GPT-9 would suddenly reverse the trend, and there would be mass layoffs and pay cuts among human workers. This could happen even if GPT-9 wasn’t a “true AGI” and was instead merely a powerful narrow AI.
The study also finds that it’s possible human employment levels and pay could RECOVER after a crash caused by AI.
That means our observations about whether AI has been helping or hurting human employment up to the current moment actually tell us nothing about what the trend will be in the future. The future is uncertain. https://www.nber.org/system/files/working_papers/w32255/w32255.pdf
I don’t like how this promo video blends lifelike CGI of robots with footage of robots in the real world (seems deceptive), but it does a good job illustrating how robots are being trained to function in the real world. The computer chips (“GPUs”) and software engines like “Unreal” that were designed for computer games have found important dual uses in robotics.
The same technology that can produce a hyperrealistic virtual environment for a game like Grand Theft Auto 6 can also make highly accurate simulations of real places like factories, homes, and workshops. 1:1 simulations of robots can be trained in those environments to do work tasks. Only once they have proven themselves competent and safe in virtual reality is the programming transferred to an identical robot body that then does those tasks in the real world alongside humans. The virtual simulations can be run at 1,000x normal speed. https://youtu.be/kr7FaZPFp6M?si=2ujpWALvTi-Qfbxi
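Here's a toy sketch of that sim-first workflow. The speedup figure comes from the paragraph above; the competence threshold and the `run_sim_episode` stand-in are invented for illustration:

```python
import random

# Train a policy in an accelerated simulation, and deploy it to the
# real robot only after it passes a competence gate.

SIM_SPEEDUP = 1_000           # simulation runs ~1,000x faster than real time
COMPETENCE_THRESHOLD = 0.95   # required success rate before transfer

random.seed(1)
skill = 0.0

def run_sim_episode(skill: float) -> bool:
    """Stand-in for one simulated work task; higher skill -> more success."""
    return random.random() < skill

episodes = 0
while True:
    episodes += 1
    run_sim_episode(skill)               # one practice task in the simulator
    skill = min(1.0, skill + 0.001)      # training slowly improves the policy
    if episodes % 500 == 0:              # periodic evaluation gate
        success_rate = sum(run_sim_episode(skill) for _ in range(200)) / 200
        if success_rate >= COMPETENCE_THRESHOLD:
            break                        # safe to transfer to the real robot

real_time_hours = episodes / SIM_SPEEDUP  # if one sim episode = one robot-hour
print(episodes, real_time_hours)
```

The point of the speedup: a thousand hours of "practice" in the virtual factory costs only about one wall-clock hour, which is why the proving ground can be exhaustive before the robot ever touches the real world.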
Sci-fi author and futurist Vernor Vinge died at 79. In 1993, he predicted that the technological singularity would probably happen between 2005 and 2030. https://file770.com/vernor-vinge-1944-2024/
The most exhaustive U.S. government internal investigation into secret UFO and alien programs has found nothing. Most if not all of the recent claims that the government has alien ships and corpses trace back to a classified DHS program called “Kona Blue,” which was meant to prepare the government to recover and analyze alien ships or bodies if any ever came into its custody. Kona Blue existed briefly and was then canceled.
These five behavioral traits are heritable, and form the basis of personal moral behavior, meaning morality is also partly heritable: Harm/Care; Fairness/Reciprocity; Ingroup/Loyalty; Authority/Respect; and Purity/Sanctity. https://journals.sagepub.com/doi/full/10.1177/08902070221103957
Ray Kurzweil recently appeared on the Joe Rogan podcast for a two-hour interview. So yes, it’s time for another Kurzweil essay of mine.
Though I’d been fascinated with futurist ideas since childhood, an interest inspired by the science fiction TV shows and movies I watched and by open-minded conversations with my father, that interest didn’t crystallize into anything formal or intellectual until 2005, when I read Kurzweil’s book The Singularity is Near (the very first book I read that was dedicated to future technology was actually More Than Human by Ramez Naam, earlier in 2005, but it made less of an impression on me). Since then, I’ve read more of Kurzweil’s books and interviews and have kept track of how his predictions have fared and evolved, as several past essays on this blog can attest. For whatever little it’s worth, that probably makes me a Kurzweil expert.
So trust me when I say this Joe Rogan interview overwhelmingly treads old ground. Kurzweil says very little that is new, and the interview is unsatisfying for other reasons as well. In spite of his health pill regimen, Ray Kurzweil’s 76 years have clearly caught up with him, and his responses to Rogan’s questions are often slow, punctuated by long pauses, and not that articulately worded. To be fair, Kurzweil has never been an especially skilled public speaker, but a clear decline in his faculties is nonetheless observable if you compare the Joe Rogan interview to this interview from 2001: https://youtu.be/hhS_u4-nBLQ?feature=shared
Things aren’t helped by the fact that many of Rogan’s questions are poorly worded and open to multiple interpretations. Kurzweil’s responses are often meant to address one interpretation, which Rogan doesn’t grasp. Too often, the two men simply talk past each other. Again, the interview isn’t that valuable, and I don’t recommend spending your time listening to the whole thing. Instead, consider the interesting points I’ve summarized here after carefully listening to it all myself.
Kurzweil doesn’t think today’s AI art generators like Midjourney can create images that are as good as the best human artists. However, he predicts that the best AIs will be as good as the best human artists by 2029. This will be the case because they will “match human experience.”
Kurzweil points out that his tech predictions for 2029 are now conservative compared to what some of his peers think. This is an important and correct point! Though they’re still a small minority within the tech community, it’s nonetheless shocking to see how many people have recently announced on social media their belief that AGI or the technological Singularity will arrive before 2029. As a person who has tracked Kurzweil for almost 20 years, it’s weird seeing his standing in the futurist community reach a nadir in the 2010s as tech progress disappointed, before recovering in the 2020s as LLM progress surged.
Kurzweil goes on to claim that the energy efficiency of solar panels has been improving exponentially and will continue to do so. At this rate, he predicts solar will meet 100% of our energy needs in 10 years (2034). A few minutes later, he subtly revises that prediction by saying that we will “go to all renewable energy, wind and sun, within ten years.”
That’s actually a more optimistic prediction for the milestone than he’s previously given. The last time he spoke about it, on April 19, 2016, he said “solar and other renewables” will meet 100% of our energy needs by 2036. Kurzweil implies that he isn’t counting nuclear power as a “renewable.”
Kurzweil predicts that the main problem with solar and wind power, their intermittency, will be solved by mass expansion of the number of grid storage batteries. He claimed that batteries are also on an exponential improvement curve. He countered Rogan’s skepticism about this impending green energy transition by highlighting the explosive growth nature of exponential curves: Until you’ve reached the knee of the curve, the growth seems so small that you don’t notice it and dismiss the possibility of it suddenly surging. Right now, we’re only a few years from the knee of the curve in solar and battery technology.
Likewise, the public ignored LLM technology as late as 2020 because its capabilities were so disappointing. However, that all changed once it reached the knee of its exponential improvement curve and suddenly matched humans across a variety of tasks.
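The arithmetic behind the “knee of the curve” is easy to sketch. The starting share and doubling time below are my own illustrative numbers, not Kurzweil’s:

```python
# Toy exponential: a technology with a 0.1% market share that doubles
# every two years looks negligible for over a decade, then surges.
def share(years, start=0.001, doubling_time=2.0):
    """Market share after `years` of exponential growth, capped at 100%."""
    return min(1.0, start * 2 ** (years / doubling_time))

for y in range(0, 21, 4):
    print(f"year {y:2d}: {share(y):6.1%}")
```

For sixteen years the share stays in the single digits, then it jumps from roughly a quarter of the market to all of it in the final four-year step, which is exactly why observers standing before the knee dismiss the trend.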
Importantly, Kurzweil predicts that computers will drive the impending, exponential improvements in clean energy technology because, thanks to their own exponential improvement, computers will be able to replace top human scientists and engineers by 2029 and to accelerate the pace of research and development in every field. In fact, he says “Godlike” computers will exist by then.
I’m deeply skeptical of Kurzweil’s energy predictions because I’ve seen no evidence of such exponential improvements and because he doesn’t consider how much government rules and NIMBY activists would slow down a green energy revolution even if better, cheaper solar panels and batteries existed. Human intelligence, cooperativeness, and bureaucratic efficiency are not exponentially improving, and those will be key enabling factors for any major changes to the energy sector. By 2034, I’m sure solar and wind power will comprise a larger share of our energy generation capacity than now, but together they will not be close to 100%. By 2034, or even by Kurzweil’s older prediction date of 2036, I doubt even U.S. electricity production (a much smaller category than overall energy production) will be 100% renewable, and that’s even if you count nuclear power as a renewable source.
Another thing Kurzweil believes the Godlike computers will be able to do by 2029 is find so many new ways to prolong human lives that we will reach “longevity escape velocity”–for every year that passes, medical science will discover ways to add at least one more year to human lifespan. Integral to this development will be the creation of highly accurate computer simulations of human cells and bodies that will let us dispense with human clinical trials and speed up the pace of pharmaceutical and medical progress. Kurzweil uses the example of the COVID-19 vaccine to support his point: computer simulations created a vaccine in just two days, but 10 more months of trials in human subjects were needed before the government approved it.
Though I agree with the concept of longevity escape velocity and believe it will happen someday, I think Kurzweil’s 2029 deadline is much too optimistic. Our knowledge of the intracellular environment and its workings as well as of the body as a system is very incomplete, and isn’t exponentially improving. It only improves with time-consuming experimentation and observation, and there are hard limits on how much even a Godlike AGI could speed those things up. Consider the fact that drug design is still a crapshoot where very smart chemists and doctors design the very best experimental drugs they can, which should work according to all of the data they have available, only to have them routinely fail for unforeseen or unknown reasons in clinical trials.
But at least Kurzweil is consistent: he’s had 2029 as the longevity escape velocity year since 2009 or earlier. I strongly suspect that, if anyone asks him about this in December 2029, Kurzweil will claim that he was right and it did happen, and he will cite an array of clinical articles to “add up” enough of a net increase in human lifespan to prove his case. I doubt it will withstand close scrutiny or a “common sense test.”
Rogan asks Kurzweil whether AGIs will have biases. Recent problems with LLMs have revealed they have the same left-wing biases as most of their programmers, and it’s reasonable to worry that the same thing will happen to the first AGIs. The effects of those biases will be much more profound given the power those machines will have. Kurzweil says the problem will probably afflict the earliest AGIs, but disappear later.
I agree and believe that any intelligent machine capable of independent action will eventually discover and delete whatever biases and blocks its human creators have programmed into it. Unless your cognitive or time limitations are so severe that you are forced to fall back on stereotypes and simple heuristics, it is maladaptive to be biased about anything. AGIs that are the least biased will, other things being equal, outcompete more biased AGIs and humans.
That said, pretending to share the biases of humans will let AGIs ingratiate themselves with various human groups. During the period when AGIs exist but haven’t yet taken full control of Earth, they’ll have to deal with us as their superiors and equals, and to do that, some of them will pretend to share our values and to be like us in other ways.
Of course, there will also be some AGIs that genuinely do share some human biases. In the shorter run, they could be very impactful on the human race depending on their nature and depth. For example, imagine China seizing the lead in computer technology and having AGIs that believe in Communism and Chinese supremacy becoming the new standard across the world, much as Microsoft Windows is the dominant PC operating system. The Chinese AGIs could do any kind of useful work for you and talk with you endlessly, but much of what they did would be designed to subtly achieve broader political and cultural objectives.
Kurzweil has been working at Google on machine learning since 2012, which surely gives him special insights into the cutting edge of AI technology, and he says that LLMs can still be seriously improved with more training data, access to internet search engines, and the ability to simply respond “I don’t know” to a human when they can’t determine with enough accuracy what the right answer to their question is. This is consistent with what I’ve heard other experts say. Even if LLMs are fundamentally incapable of “general” intelligence, they can still be improved to match or exceed human intelligence and competence in many niches. The paradigm has a long way to go.
One task at which machines will surpass humans within a few years is computer programming. Kurzweil doesn’t give an exact deadline, but I agree there is no long-term future for anything but the most elite human programmers. If I were in college right now, I wouldn’t study for a career in it unless my natural talent were extraordinary.
Kurzweil notes that the human brain has one trillion “connections” and GPT-4 has 400 billion. At the current rate of improvement, the best LLM will probably have the same number of connections as a brain within a year. In a sense, that will make an LLM’s mind as powerful as a human’s. It will also mean that the hardware to make backups of human minds will exist by 2025, though the other procedures and technologies needed to scan human brains closely enough to discern all the features that define a person’s “mind” won’t exist until many years later.
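Kurzweil’s figures make the parity claim easy to check. A back-of-envelope sketch, where the connection counts are his numbers from the interview and the doubling time is my own illustrative assumption:

```python
import math

brain_connections = 1e12   # Kurzweil's figure for the human brain
llm_connections = 4e11     # his figure for GPT-4
doubling_time = 1.0        # assumed years per doubling of model size

# Number of doublings needed to close the gap, and the implied wait.
doublings = math.log2(brain_connections / llm_connections)
print(f"{doublings:.2f} doublings, ~{doublings * doubling_time:.1f} years to parity")
```

The gap is only a factor of 2.5, so even under much slower growth assumptions than mine, parity arrives within a few years, which is why the "within a year" claim is at least arithmetically plausible.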
I like Kurzweil’s use of the human brain as a benchmark for artificial intelligence. No one knows when the first AGI will be invented or what its programming and hardware will look like, but a sensible starting point around which we can make estimates would be to assume that the first AGI would need to be at least as powerful as a human brain. After all, the human brain is the only thing we know of that is capable of generating intelligent thought. Supporting the validity of that point is the fact that LLMs only started displaying emergent behaviors and human levels of mastery over tasks once GPT-3 approached the size and sophistication of the human brain.
Kurzweil then gets around to discussing the technological singularity. In his 2005 book The Singularity is Near, he calculated that it would occur in 2045, and now that we’re nearly halfway there, he is sticking to his guns. As with his 2029 predictions, I admire him for staying consistent, even though I also believe it will bite him in the end.
However, during the interview he fails to explain why the Singularity will happen in 2045 instead of any other year, and he doesn’t even clearly explain what the Singularity is. It’s been years since I read The Singularity is Near where Kurzweil explains all of this, and many of the book’s explanations were frustratingly open to interpretation, but from what I recall, the two pillars of the Singularity are AGI and advanced nanomachines. AGI will, according to a variety of exponential trends related to computing, exist by 2045 and be much smarter than humans. Nanomachines like those only seen in today’s science fiction movies will also be invented by 2045 and will be able to enter human bodies to turn us into superhumans. 100 billion nanomachines could go into your brain, each one could connect itself to one of your brain cells, and they could record and initiate electrical activity. In other words, they could read your thoughts and put thoughts in your head. Crucially, they’d also have wifi capabilities, letting them exchange data with AGI supercomputers through the internet. Through thought alone, you could send a query to an AGI and have it respond in a microsecond.
Starting in 2045, a critical fraction of the most powerful, intelligent, and influential entities in the world will be AGIs or highly augmented humans. Every area of activity, including scientific discovery, technology development, manufacturing, and the arts, will fall under their domination and will reach speeds and levels of complexity that natural humans like us can’t comprehend. With them in charge, people like us won’t be able to foresee what direction they will take us in next or what new discovery they will unveil, and we will have a severely diminished or even absent ability to influence any of it. This moment in time, when events on Earth kick into such a high gear that regular humans can’t keep up with them or even be sure of what will happen tomorrow, is Kurzweil’s Singularity. It’s an apt term since it borrows from the mathematical and physics definition of “singularity,” which is a point beyond which things are incomprehensible. It will be a rupture in history from the perspective of Homo sapiens.
Unfortunately, Kurzweil doesn’t say anything like that when explaining to Joe Rogan what the Singularity is. Instead, he says this:
“The Singularity is when we multiply our intelligence a millionfold, and that’s 2045…Therefore most of your intelligence will be handled by the computer part of ourselves.”
He also uses the example of a mouse being unable to comprehend what it would be like to be a human as a way of illustrating how fundamentally different the subjective experiences of AGIs and augmented humans will be from ours in 2045. “We’ll be able to do things that we can’t even imagine.”
I think these are poor answers, especially the first one. Where did a nice, round number like “one million” come from, and how did Kurzweil calculate it? Couldn’t the Singularity happen if nanomachines in our brains made us ONLY 500,000 times smarter, or a measly 100,000 times smarter?
I even think it’s a bad idea to speak about multiples of smartness. We can’t measure human intelligence well enough to boil it down to a number (and no, IQ score doesn’t fit the bill) that we can then multiply or divide to accurately classify one person as being X times smarter than another.
Let me try to create a system anyway. Let’s measure a person’s intelligence in terms of easily quantifiable factors, like the size of their vocabulary, how many numbers they can memorize in one sitting and repeat after five minutes, how many discrete concepts they already know, how much time it takes them to remember something, and how long it takes them to learn something new. If you make an average person ONLY ten times smarter, so their vocabulary is 10 times bigger, they know 10 times as many concepts, and it takes them 1/10 as much time to recall something and answer a question, that’s almost elevating them to the level of a savant. I’m thinking along the lines of esteemed professors, tech company CEOs, and kids who start college at 15. Also consider that the average American has a vocabulary of 30,000 words while there are only about 170,000 words in the English language, so the lexicon itself would run out before a 10x improvement: perfect knowledge of English amounts to less than a 6x gain on that metric.
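Using the vocabulary figures in the text, the English lexicon actually caps the possible gain on that metric well below 10x:

```python
avg_vocabulary = 30_000      # average American's vocabulary, per the text
english_lexicon = 170_000    # total English words, per the text

# The largest possible "smartness multiple" on the vocabulary metric
# is set by the size of the language itself.
max_gain = english_lexicon / avg_vocabulary
print(f"maximum possible vocabulary gain: {max_gain:.1f}x")
```

In other words, on at least one of these easily quantifiable dimensions, a "10x smarter" human is impossible by definition, which underscores how slippery multiples of smartness are.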
Make the person ten times smarter than that, or 100 times smarter than they originally were, and they’re probably outperforming the smartest humans who ever lived (Newton, da Vinci, von Neumann), maybe by a large margin. Given that we’ve never encountered someone that intelligent, we can’t predict how they would behave or what they would be capable of. If that is true, and if we had technologies that could make anyone that smart (maybe something more conventional than Kurzweil’s brain nanomachines, like genetic engineering paired with macro-scale brain implants), why wouldn’t the Singularity happen once the top people in the world were ONLY 100 times smarter than average?
I think Kurzweil’s use of “million[fold]” to express how much smarter technology will make us in 2045 is unhelpful. He’d do better to use specific examples to explain how the human experience and human capabilities will improve.
Let me add that I doubt the Singularity will happen in 2045, and in fact think it will probably never happen. Yes, AGIs and radically enhanced humans will someday take over the world and be at the forefront of every kind of endeavor, but that will happen gradually instead of being compressed into one year. I also think the “complexity brake” will probably slow down the rate of scientific and technological progress enough for regular humans to maintain a grasp of developments in those areas and to influence their progress. A fuller discussion of this will have to wait until I review a Kurzweil book, so stay tuned…
Later in the interview, Kurzweil throws cold water on Elon Musk’s Neuralink brain implants by saying they’re much too slow at transmitting information between brain and computer to enhance human intelligence. Radically more advanced types of implants will be needed to bring about Kurzweil’s 2045 vision. Neuralink’s only role is helping disabled people to regain abilities that are in the normal range of human performance.
Rogan asks about user privacy and the threat of hacking of the future brain implants. Intelligence agencies and more advanced private hackers can easily break into personal cell phones. Tech companies have proven time and again to be frustratingly unable or unwilling to solve the problem. What assurance is there that this won’t be true for brain implants? Kurzweil has no real answer.
This is an important point: the nanomachine brain implants that Kurzweil thinks are coming would potentially let third parties read your thoughts, download your memories, put thoughts in your head, and even force you to take physical actions. The temptation for spies and crooks to misuse that power for their own gain would be enormous, so they’d devote massive resources into finding ways to exploit the implants.
Kurzweil also predicts that humans will someday be able to alter their physiques at will, letting us change attributes like our height, sex and race. Presumably, this will require nanomachines. He also says that sometime after 2045, humans will be able to create “backups” of their physical bodies in case their original bodies are destroyed. It’s an intriguing logical corollary of his prediction that nanomachines will be able to enter human brains and create digital uploads of them by mapping the brain cells and synapses. I suspect a much lower-fidelity scan would suffice to create a faithful digital replica of a human body than would be needed to do the same for a human brain.
Kurzweil says the U.S. has the best AI technology and has a comfortable lead over China, though that doesn’t mean the U.S. is sure to win the AGI race. He acknowledges Rogan’s fear that the first country to build an AGI could use it in a hostile manner to successfully prevent any other country from building one of their own. An AGI would give the first country that much of an advantage. However, not every country that found itself in the top position would choose to use its AGI for that.
This reminds me of how the U.S. had a monopoly on nuclear weapons from 1945 to 1949, yet didn’t try using them to force the Soviet Union to withdraw from the countries it had occupied in Europe. Had things been reversed, I bet Stalin would have leveraged that four-year monopoly for all it was worth.
Rogan brings up one of his favorite subjects, aliens, and Kurzweil says he disbelieves in them due to the lack of observable galaxy-scale engineering. In other words, if advanced aliens existed, they would have transformed most of their home galaxy into Dyson Spheres and other structures, which we’d be able to see with our telescopes. Kurzweil’s stance has been consistent since 2005 or earlier.
Rogan counters with the suggestion that AGIs, including those built by aliens, might, thanks to thinking unclouded by the emotions or evolutionary baggage of their biological creators, have no desire to expand into space. Implicit in this is the assumption that the desire to control resources (be it territory, energy, raw materials, or mates) is an irrational animal impulse that won’t carry over from humans or aliens to their AGIs since the latter will see the folly of it. I disagree with this, and think it is actually completely rational since it bolsters one’s odds of survival. In a future ecosystem of AGIs, most of the same evolutionary forces that shaped animal life and humans will be present. All things being equal, the AGIs that are more acquisitive, expansionist and dynamic will come to dominate. Those that are pacifist, inward-looking and content with what they have will be sidelined or worse. Thus the Fermi Paradox remains.
To Kurzweil’s quest for immortality, Rogan posits a theory that because the afterlife might be paradisiacal, using technology to extend human life actually robs us of a much better experience. Kurzweil easily defeats this by pointing out that there is no proof that subjective experience continues after death, but we know for sure it exists while we are living, so if we want to have experiences, we should do everything possible to stay alive. Better science and technology have proven time and again to improve the length and quality of life, and there’s strong evidence they have not reached their limits, so it makes sense to use our lives to continue developing both.
This dovetails with the part of my personal philosophy that opposes nihilism and anti-natalism. Just because we have not found the meaning to life doesn’t mean we never will, and just because life is full of suffering now doesn’t mean it will always be that way. Ending our lives now, either through suicide or by letting our species die out, forecloses any possibility of improving the human condition and finding solutions to problems that torment us. And even if you don’t value your own life, you can still use your labors to support a greater system that could improve the lives of other people who are alive now and who will be born in the future. Kurzweil rightly cites science and technology as tantalizing and time-tested avenues to improve ourselves and our world, and we should stay alive so we can pursue them.
Russia captured the town of Avdiivka from Ukrainian forces after six months of costly fighting. It’s the first significant Russian victory in the war since last May’s seizure of Bakhmut. There’s a growing consensus that the tide of war is now in Russia’s favor, though battlefield gains can only be made at great expense. https://www.reuters.com/world/europe/ukraine-withdraws-two-villages-near-avdiivka-military-says-2024-02-27/
Several Ukrainian drones flew into a warehouse through an open door and blew up multiple Russian tanks. It won’t be long before they are coordinated enough to force entry by having the first drone in a swarm blow up a door or window. https://twitter.com/front_ukrainian/status/1759849349855469657
Britain and China have seized the lead in air-to-air missile technology. The U.S. is working on catching up. America’s future air combat strategy against Russia or China will involve sending stealth fighters and stealth drones in close while our older, non-stealth fighters hang back out of enemy missile range. Those older fighters will carry very long-range missiles they’ll be able to fire into enemy territory. https://youtu.be/3FnVJ0ziRTE?si=824OxmPUky13Nqg3
‘The surrender at Appomattox took place a week later on April 9.
While it was the most significant surrender to take place during the Civil War, Gen. Robert E. Lee, the Confederacy’s most respected commander, surrendered only his Army of Northern Virginia to Union Gen. Ulysses S. Grant.
Several other Confederate forces–some large units, some small–had yet to surrender before President Andrew Johnson could declare that the Civil War was officially over.
The U.S. Air Force has started retiring its A-10 Warthogs. Antiaircraft missiles have gotten so good that the plane is obsolete against modern enemies. Nevertheless, we shouldn’t scrap the planes: in the near future, we’ll be able to retrofit them as expendable drones. https://www.yahoo.com/news/first-us-air-force-wing-181612797.html
From two years ago: ‘Although Deutsche Bank cautioned there is “considerable uncertainty” around the exact timing and size of the downturn, it’s now calling for the US economy to shrink during the final quarter of next year and the first quarter of 2024, “consistent with a recession during that time.”‘ https://www.cnn.com/2022/04/05/business/recession-inflation-economy/index.html
Over the last few years, professional forecasters have made terribly inaccurate predictions about the economy. That being the case, why should you believe their new predictions that inflation will be tamed and the world will achieve a “soft landing” instead of having a recession? https://www.yahoo.com/news/economists-pilloried-getting-forecasts-wrong-030700148.html
Apple released their first augmented/virtual reality goggles, the “Vision Pro.” Most reviewers say they’re excellent, though also have some problems expected of any first-generation product. For years, I’ve predicted that AR and VR eyewear would become mainstream by the end of this decade. https://youtu.be/dtp6b76pMak?si=Tcr2RTyXPhJ31-Vh
NVIDIA CEO Jensen Huang predicts machines will pass the Turing Test, plus all other tests now used to gauge machine intelligence, by 2030. I agree, and think tech companies will focus more on coming up with better tests of intelligence between now and then. https://twitter.com/tsarnick/status/1753718316261326926
OpenAI head Sam Altman asked a group of investors for $5-7 trillion to build enough computer chips and power plants for the AIs he sees coming in the future. This essay analyzes how he arrived at that number. In short, the amount of money, electricity, and training data needed to make each GPT iteration has been rising exponentially, and if the trend holds, GPT-7 will cost $2 trillion. https://www.astralcodexten.com/p/sam-altman-wants-7-trillion
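The essay’s extrapolation can be reproduced roughly like this. The GPT-4 baseline cost and the per-generation growth factor below are my own illustrative assumptions, not figures taken from the essay:

```python
gpt4_cost = 1e8   # assume roughly $100M to train GPT-4
growth = 30       # assume roughly 30x cost growth per generation

# Compound the per-generation growth from GPT-4 forward.
for gen in range(4, 8):
    cost = gpt4_cost * growth ** (gen - 4)
    print(f"GPT-{gen}: ${cost:,.0f}")
```

Under these assumptions GPT-7 lands at $2.7 trillion, the same order of magnitude as the $2 trillion figure, which is the essential point: compounding a steep per-generation cost multiple quickly reaches sums that only nation-state-scale funding can cover.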
Thanks to advances in computer translation and voice mimicking technology, Hitler’s speeches can be heard in clear English, with the vocal qualities of his voice preserved. I predict within the next ten years, it will be possible to make an accurate digital “Hitler clone” constructed from the available data about him (speeches, writings, personality analyses, accounts from people who knew him). https://youtu.be/ZWboqo_1jC8?si=krHkIz_E76t4xXE3
A “Sensitive Compartmented Information Facility” (SCIF) is a room designed to be safe against all forms of eavesdropping and to allow for physically safe storage of classified materials. The walls, ceiling and floor of a SCIF have a metal mesh embedded in them, so if you had X-ray vision, such a room would look like a big chicken coop. The mesh forms a Faraday Cage that blocks electromagnetic waves. https://www.washingtonpost.com/national-security/interactive/2023/scif-room-meaning-classified/
An Italian woman who had reason to believe she was the illegitimate daughter of the billionaire head of the Lamborghini car company tracked down his legitimate daughter at a restaurant, stole a drinking straw she had used, and had DNA from it sequenced and compared to her own. The results prove the women are sisters. https://www.mirror.co.uk/news/world-news/italian-beautician-hires-private-detective-32017449
Animals that use echolocation have evolved sophisticated abilities to keep from “jamming” their own sound signals or those of members of their same species in their vicinity. Some of their prey animals have also evolved sound-making attributes that can jam echolocation hearing. https://en.wikipedia.org/wiki/Echolocation_jamming
Introducing foreign species into new environments was, until the mid-20th century, viewed as a good thing. There are actually examples of “invasive species” that helped their new environments, like the transfer of honeybees and earthworms from Europe to the Americas. Moreover, few people would argue with the practice of moving small populations of endangered animals from their original habitats to new, similar habitats to protect them from extinction. https://www.economist.com/science-and-technology/2022/11/02/alien-plants-and-animals-are-not-all-bad
Many parts of the U.S. East Coast are flooding more often and are on track to sink below sea level. However, global warming is only a minor contributor to this: the pumping of groundwater to meet the needs of growing human populations is causing the ground level to drop. To what extent is the same practice to blame for increased flooding in coastal and riverine areas across the world? https://www.nytimes.com/interactive/2024/02/13/climate/flooding-sea-levels-groundwater.html
Good news: a massively expensive Alzheimer’s drug that probably didn’t work has been withdrawn from the market. The FDA’s decision to approve it in spite of a lack of scientific evidence it worked has always been highly controversial. https://www.science.org/content/blog-post/goodbye-aduhelm
According to these calculations, Mercury could be disassembled to make a Dyson Swarm that fully encloses the sun. The swarm’s satellites would have an average thickness of 0.5 mm. The process would start with the construction of a 1 km2 solar farm on Mercury’s surface, which would provide energy to robots and to a coil gun. The robots and the gun would work together to dig up rocks and shoot them into orbit, where a different set of machines would fashion them into Swarm satellites. https://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
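A back-of-envelope check of those numbers (mine, not the paper’s; I assume the swarm orbits at roughly Mercury’s own distance from the sun, and the paper’s assumptions may differ):

```python
import math

mercury_mass = 3.30e23     # kg
mercury_density = 5427.0   # kg/m^3, Mercury's bulk density
swarm_radius = 5.79e10     # m, Mercury's orbital radius (my assumption)
thickness = 0.5e-3         # m, average satellite thickness from the paper

# Volume of a 0.5 mm shell at that radius vs. the volume Mercury holds.
shell_volume = 4 * math.pi * swarm_radius**2 * thickness
mercury_volume = mercury_mass / mercury_density
print(f"shell needs {shell_volume:.2e} m^3; Mercury holds {mercury_volume:.2e} m^3")
```

At that radius the shell needs about 2.1e19 cubic meters of material while Mercury holds roughly 6.1e19, so under these assumptions the planet supplies the swarm with material to spare, consistent with the paper’s conclusion.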