Russian air-defense missiles intercepted U.S.-made missiles that Ukraine fired at Crimea. The midair explosions from the interceptions hurled shrapnel down onto a packed beach, killing five Russian civilians and injuring many more.
The Ukrainians almost certainly weren’t targeting civilians, and their missiles were probably headed for Russian warships or bases. Nevertheless, Russia has sworn revenge. https://www.bbc.com/news/articles/c6pppr719rlo
Russia now has three captured M1 Abrams tanks. Each one is damaged in a different way. I bet their working parts could be combined to make one working tank. https://youtu.be/yBhYcMb8Tng?si=kYf17lu3a_eyjV2e
The carousel autoloader found in Soviet and Russian tanks isn’t necessarily a fatal design flaw. If the Russians copied the gunpowder from the advanced ammunition they found in the captured German Leopard 2 tank, then their own tanks would become much less likely to blow up thanks to their own ammo cooking off. https://youtu.be/6A4CqxGMBQw?si=DUc_wyPlbWI7QgHv
The third variant of the British WWII Sten submachine gun was one of the simplest and cheapest guns ever made. It’s interesting to see how those factors hurt its reliability and longevity, even compared to other Sten variants. The Mark III was truly a throwaway weapon. https://youtu.be/W0qlOOE8G_k?si=LNbw1XOwYr5urr5A
The man who invented a small add-on device that turns any Glock into a full-auto weapon is a mechanical genius from Venezuela. He created the first device in the late 1980s while he was still a teenager and working at a gun shop. Only in recent years has the device started becoming common among criminals. https://www.yahoo.com/gma/feel-terrified-inventor-glock-switch-090429902.html
Nvidia briefly became the world’s most valuable company, with a market cap over $3.4 trillion. It was worth $418 billion just two years ago. https://www.bbc.com/news/articles/cyrr40x0z2mo
Without using the term “singularity,” math whiz and former OpenAI researcher Leopold Aschenbrenner published a long essay series claiming that milestone is upon us. Radical changes thanks to AI will happen in just the next ten years. https://situational-awareness.ai/
‘In February, AI-based forgery reached a watershed moment–the OpenAI research company announced GPT-2, an AI generator of text so seemingly authentic that they deemed it too dangerous to release publicly for fears of misuse. Sample paragraphs generated by GPT-2 are a chilling facsimile of human-authored text. Unfortunately, even more powerful tools are sure to follow and be deployed by rogue actors.’ https://hbr.org/2019/03/how-will-we-prevent-ai-based-forgery
According to the 1999 movie The Thirteenth Floor, by June 21, 2024 we were supposed to have had AGI, full immersion virtual reality like The Matrix, lifelike digital worlds, and really cool-looking glass skyscrapers in L.A. https://youtu.be/UCsR9iPvX0I?si=iLM31LuQMMcC5Q1u
This prediction was accurate:
“I think Joe Biden will run again in 2024 and I think he will run against someone with the last name ‘Trump.’ I do not know whether that is Trump or Trump Jr…”
There’s a reason why the classic Spielberg movies have slightly “off” colors that make them look old fashioned in a subtle way: the film stock used back then had a limited color range. It would be interesting to use AI to reverse those distortions and create rereleases of those films that are true to the actual colors on set. https://youtu.be/kQmIPWK8aXc?si=7L2nPuUZ9aaVU_EM
Some forest fires are actually caused by bacteria. Just like a human-made compost pile, underground peat deposits can get extremely hot due to the metabolisms of bacteria that inhabit and eat them. Furthermore, sudden jumps in surface temperature caused by heat waves can cause the bacteria to raise the peat temperature by even more. The result is spontaneous combustion. https://theconversation.com/zombie-fires-in-the-arctic-smoulder-underground-and-refuse-to-die-whats-causing-them-221945
Viatina-19, a Nelore cow in Brazil, is the most expensive cow ever sold at auction. ‘Breeders also value posture, hoof solidity, docility, maternal ability and beauty. Those eager to level up their livestock’s genetics pay around $250,000 for an opportunity to collect Viatina-19’s egg cells.’
There are other ways the Drake Equation could be tweaked to result in humans being the only intelligent species in the galaxy. ‘According to Stern and Gerya, it’s likely quite rare for planets to have both continents and oceans along with long-term plate tectonics, and this possibility needs to be factored into the Drake Equation.’ https://gizmodo.com/drake-equation-update-fermi-paradox-intelligent-life-1851503974
Researchers have discovered a gene that causes obesity in some people. Genetic engineering and new medical interventions will end the global obesity problem in the future. The average person will be taller, thinner, stronger, and healthier. https://www.cnn.com/2024/06/20/health/obesity-genetic-wellness/index.html
It won’t be long before you’ll be able to feed a computer a script or the text of a book, and it will be able to produce a professional-quality audiobook or film. It would be so fascinating to finally see the great, unmade movies (like Stanley Kubrick’s epic biopic about Napoleon) or to see movies that stayed true to their written source material so they could be compared with what was actually made. Jurassic Park comes to mind as a famous movie that diverged greatly from the book. Imagine the same, CGI-generated characters in the same island setting, with the same style of soundtrack and cinematography, but with different dialog and different plot points than happened in the film we all know.
Will RV living and houseboat living be the norm in the future? Think about it: If humans won’t have jobs in the future, then they won’t have enough money to buy houses, making RVs and boats the only affordable option. Even a bus-sized recreational vehicle is only 1/3 the price of a typical American home, and a houseboat with the same internal volume is 2/3 the price. Also, without jobs, humans would have much less of a reason to stay tethered to one location and could indulge in their wanderlust. Additionally, thanks to VR being more advanced, people won’t need large TVs or computer monitors, easing the need for spacious living rooms.
Humans talking about the need to control AGI to ensure our dominance is not threatened are like Homo erectus grunting to each other about the need to keep Homo sapiens down somehow. It’s understandable for a dominant species to want to preserve its status, but that doesn’t mean such a thing is in the best interests of civilization.
It’s still unclear whether LLMs will ever achieve general intelligence. A lot of hope rests on “scaffolded systems,” which are LLMs that also have more specialized computer apps at their disposal, which they’re smart enough to know to use to solve problems that the LLM alone can’t.
Part of me thinks of this as “cheating,” and that a scaffolded system would still not be a true general intelligence since, as we assigned it newer and broader tasks, it would inevitably run into new types of problems it couldn’t solve but humans could because it lacked the tool for doing so.
But another part of me thinks the human brain might also be nothing more than a scaffolded system that is comprised of many small, specialized minds that are only narrowly intelligent individually, but give rise to general intelligence as an emergent property when working together (Marvin Minsky’s “Society of Mind” describes this). Moreover, we consider the average human to be generally intelligent even though there are clearly mental tasks that they can’t do. For example, through no amount of study and hard work could an average IQ person get a Ph.D in particle physics from MIT, meaning they could never solve cutting-edge problems in that field. (This has disturbing implications for how we’ve defined “general intelligence” and implies that humans actually just inhabit one point in a “space of all possible intelligent minds.”) So if an entity’s fundamental inability to handle specific cognitive tasks proves they lack general intelligence, then humans are in trouble. We shouldn’t hold future scaffolded systems to intelligence standards we don’t hold ourselves to.
Moreover, it’s clear that humans spend many of their waking hours on “mental autopilot,” where they aren’t exercising “general intelligence” to navigate the world. An artificial mind that spent most of its time operating in simpler modes guided by narrow AI modules could therefore be just as productive and as “smart” as humans in performing routine and well-defined tasks.
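To make the scaffolding idea concrete, here’s a minimal, purely illustrative sketch: a routing layer that hands queries the language model would likely fumble (exact arithmetic) off to a narrow, specialized tool. Every function name here is invented for the example; real scaffolded systems use far more sophisticated routing.

```python
# Hypothetical sketch of a "scaffolded system": a general language model
# plus a routing layer that recognizes when a query exceeds the model's
# abilities and delegates it to a narrow tool. All names are illustrative.

def calculator_tool(expression: str) -> str:
    # Narrow module: exact arithmetic, which plain LLMs often get wrong.
    return str(eval(expression, {"__builtins__": {}}, {}))

def language_model(query: str) -> str:
    # Stand-in for the LLM itself: handles open-ended text directly.
    return f"[LLM free-text answer to: {query}]"

def scaffolded_system(query: str) -> str:
    # The "scaffold": decide which narrow mind should handle the query.
    if any(op in query for op in "+-*/") and any(c.isdigit() for c in query):
        return calculator_tool(query)
    return language_model(query)

print(scaffolded_system("73 * 912"))   # exact arithmetic via the tool -> 66576
print(scaffolded_system("Summarize the Drake Equation"))
```

The emergent behavior of the whole is more generally capable than either part alone, which is the crux of the “society of mind” analogy.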
In spite of these heavy losses, Russia has so many weapons left over from the Soviet era that it won’t run out of them, even at current loss rates, for two or three years. As the shortages near the critical threshold, I predict Russia will make up for it by starting to import old Soviet and Soviet-compatible weapons from friendly countries like North Korea. https://www.iiss.org/online-analysis/military-balance/2024/02/equipment-losses-in-russias-war-on-ukraine-mount/
Missiles and artillery fired from within Russia have been hitting targets inside of Ukraine. The Ukrainians have Western-made weapons with the ranges to destroy those Russian sites, but donors like the U.S. refuse to let Ukraine use them against Russian territory for fear it will lead to an expansion of the fighting. There’s a growing consensus among Western leaders that they should ease the rule and let Ukraine use their weapons to attack Russian soil. Putin is warning that this would lead to “serious consequences.”
President Biden has said he will withhold some military aid to Israel if it sends ground troops into the last Palestinian-controlled city in Gaza, Rafah. There are widely held fears that such an operation would kill large numbers of civilians. https://www.cnn.com/2024/05/08/politics/joe-biden-interview-cnntv/index.html
Israeli troops seized control of Gaza’s land border with Egypt. They claimed it was necessary to shut down secret tunnels that were being used to smuggle things across the border. https://www.bbc.com/news/articles/c1994g22ve9o
Peter Zeihan’s dour 2019 predictions about the future of China’s economy have proven accurate:
“So I would argue that fixing this [by] deflating the bubble, I think that I think that ship sailed 20 years ago, and so the question becomes ‘is this triggering going to be internal or external?’ Let’s start with internal. Demographically, we are going to be seeing a contraction in Chinese domestic economic activity simply because of demographics within the next five years.”
An economic “contraction” doesn’t necessarily mean negative growth; it can mean a sharp decrease in the positive growth rate. For example, if my personal income rises by $5,000 per year, but then one year the growth rate shifts down to only a $1,000 increase each year, in economic terms I’ve experienced a contraction. China’s GDP growth rate and domestic spending growth rate are both way down from where they were in 2019 when Zeihan made his prediction.
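A toy calculation, with invented figures, makes the distinction clear:

```python
# Toy illustration of "contraction" as a falling growth rate rather than
# negative growth. All income figures are invented for the example.

incomes = [50_000, 55_000, 60_000, 61_000, 62_000]  # still rising every year

# Year-over-year growth:
growth = [b - a for a, b in zip(incomes, incomes[1:])]
print(growth)  # [5000, 5000, 1000, 1000]

# Income never fell, yet the growth rate dropped from $5,000/yr to
# $1,000/yr -- a "contraction" in this sense, despite positive growth.
```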
OpenAI unveiled its latest and most advanced LLM, “GPT-4o”. At the demo, the machine was able to carry on a conversation with a human presenter in a totally natural and intelligent manner. https://www.youtube.com/live/DQacCB9tDaw?si=GPXXv9mHoh5NcA1d
Actress Scarlett Johansson claimed OpenAI had cloned her voice without her permission to synthesize GPT-4o’s voice, and her lawyers quickly demanded answers from the company. Though OpenAI says it didn’t break the law and hired a different person to create the voice, it nonetheless disabled the voice feature indefinitely. Scarlett Johansson famously voiced “Samantha,” a sentient AI character in the 2013 movie Her. https://www.foxbusiness.com/entertainment/openai-accused-mimicking-scarlett-johansson-tech-company-pauses-chatgpt-voice
GPT-4 has passed the five-minute Turing Test. “GPT-4 was judged to be a human 54% of the time, outperforming ELIZA (22%) but lagging behind actual humans (67%).” https://arxiv.org/abs/2405.08007
A large number of AI safety staff quit OpenAI nearly at once. While NDAs prevent most of them from talking about it, people in the know say they were unhappy with Sam Altman’s dishonesty and lack of commitment to their mission. https://www.lesswrong.com/posts/ASzyQrpGQsj7Moijk/openai-exodus
‘AI might wreak havoc on traditional studio moviemaking, with its massive budgets and complex technical requirements. But in the process, it is likely to make high-quality filmmaking much less expensive and logistically arduous, empowering smaller, nimbler, and less conventional productions made by outsiders with few or no connections to the studio system.’ https://reason.com/2024/05/25/ai-is-coming-for-hollywoods-jobs/
An important lesson from the last few years is that job automation will sweep across the workforce in unexpected ways. For example, no one believed jobs involving artistry would be automated before jobs involving simple physical labor, like flipping burgers. It might prove more profitable for companies to replace their leaders with AIs sooner than they replace their assembly line workers.
Regardless, keep in mind there’s probably no limit to how far job automation can go. In 50 years, if you’re part of that lucky 1% of the adult human population that still has a “real job,” don’t gloat at the unemployed masses because it will only be a few years before your position is also taken by a machine. https://www.nytimes.com/2024/05/28/technology/ai-chief-executives.html
Here’s a very fascinating case study of a young Mexican man who was born deaf and whose parents never taught him sign language. As a result, he never developed any kind of linguistic ability and had a totally different way of thinking (he lacked “symbolic thinking” and couldn’t conceive of attaching names to objects) and dealing with people. After he illegally immigrated to the U.S., a linguist stumbled upon him and slowly taught him sign language.
‘As part of her discussion of the human rights of the deaf, Schaller makes the argument, familiar also from Benjamin Whorf (and also brought up in the commentary on Henrich’s WEIRD article) that language diversity itself is an insight into human cognitive diversity: ‘Every language is an outcome of how the human brain works. We don’t know how much we can do with our one brain, even, and each language has used the brain in a slightly different way.’ However, there’s an even deeper and more profound cognitive diversity in her discussion of Ildefonso: the possibility of language-less human thought, something that theorists like Merlin Donald have attempted to discuss.’ https://neuroanthropology.net/2010/07/21/life-without-language/
Something that makes no sense in Star Wars and many other space movies is the inability of spacecraft to quickly point in any direction to bring their guns to bear on the enemy. Usually there’s a good guy fighter plane being pursued by a bad guy fighter plane, and the good guy yells out “I can’t see him because he’s behind me! Help!”
In reality, since there’s no air resistance to deal with in space, the good guy could instantly flip his fighter plane around and shoot the bad guy. You see two examples of that in the movie “Oblivion.” https://youtu.be/zRvXcyznOsQ?si=86sSlxUQHrvnw4Nc
In 1999, the Space Shuttle Columbia nearly suffered a catastrophe that would have forced it to attempt an emergency landing back on Earth right after it lifted off. https://youtu.be/qiJMdfj9NmI?si=g-PHc0zHoyTXtF0M
‘Overall, this is very impressive performance, although I should note that it is not up to the various headlines of “AlphaFold 3 Predicts All The Molecules of Life!” and so on. In almost every area it’s a significant improvement over anything that we’ve had before – including previous AlphaFold versions – and in some of them (protein-antibody and protein-RNA) it appears to be (for now!) the only game in town, even though it’s not an infallible oracle in those cases by any means.’ https://www.science.org/content/blog-post/alphafold-3-debuts
‘These results strongly suggest Neanderthal-derived DNA is playing a significant role in autism susceptibility across major populations in the United States.’ https://www.nature.com/articles/s41380-024-02593-7
There are many types of mental disability, and they have many different causes, among them mutations to single genes. A newly discovered culprit is the gene RNU4-2: mutations to it account for about 0.41% of cases of mental disability.
Better knowledge of the human genome and cheaper prenatal DNA screening will let us reduce the population prevalence of mental and physical disorders in the future. https://pubmed.ncbi.nlm.nih.gov/38645094/
Sony has created a tiny robot that can do precise microsurgeries. In this video, it makes an incision in a corn kernel and then stitches it up. https://youtu.be/bgRAkBNFMHk?si=LmjjLDkwgHp4zbgp
My last blog entry, “What my broken down car taught me about the future,” has compelled me to write a new essay that shows how some of its insights will apply more generally in the future, and not just to cars and related industries. Due to several factors, manufactured objects will generally last much longer in the future, and sudden catastrophic failures of things will be much less common.
Things will be made of better materials
Better computers that can more accurately model atomic forces and chemical reactions will be able to run simulations that lead to the discovery of new types of alloys and molecules. Those same computers will, perhaps with the aid of industrial and lab robots, also find the best ways to synthesize the new materials. Finally, the use of machine labor at every step of this process will basically eliminate labor costs, allowing the materials to be produced at lower cost than they could be with human workers today.
This means in the future we will have new kinds of metal alloys, polymers and crystals that have physical properties superior to whatever counterparts we have today. Think of a bulletproof vest that is more flexible and only half as heavy as Kevlar, or a wrench that is lighter than a common steel wrench but just as tough. And since machines will make all of these materials at lower cost, more people will be able to afford them and they will be more common. For example, if carbon fiber were cheaper, more cars would incorporate it into their bodies, lowering their weight.
Things will be designed better
In my review of the movie Starship Troopers, I discussed why the fearsome assault rifle used by the human soldiers was flawed, and why it would never come into existence in the future:
It wouldn’t make sense for people in the future to abandon the principles of good engineering by making highly inefficient guns like the Morita. To the contrary, future guns will, just like every other type of manufactured object, be even more highly optimized for their functions thanks to AI: Just create a computer simulation that exactly duplicates conditions in the real world (e.g. – gravity, all laws of physics, air pressure, physical characteristics of all metals and plastics the device could be built from), let “AI engineers” experiment with all possible designs, and then see which ones come out on top after a few billion simulation cycles. I strongly suspect the winners will be very similar to guns we’ve already built, but sleeker and lighter thanks to the deletion of unnecessary mass and to the use of materials with better strength-to-weight ratios.
That same computer simulation process will be used to design all other types of manufactured objects in the future. Again, as computation gets cheaper, companies will be able to run simulations to find the optimal designs for every kind of object. Someday, even cheap, common objects like doorknobs will be the products of billions of computer simulations that stumbled on the optimal size and arrangement of components through trial-and-error experiments with slightly different combinations.
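As a hedged sketch of that trial-and-error process (with a toy stand-in formula for the physics, not a real engineering model), the simulation loop might look like this:

```python
# Illustrative sketch of design-by-simulation: random trial-and-error
# search over candidate designs, keeping whichever passes a simulated
# strength check at the lowest mass. The "physics" is a crude stand-in,
# and all the numbers are invented for the example.
import random

def simulate(width_mm: float, thickness_mm: float) -> tuple[bool, float]:
    # Toy check: does a bracket survive a fixed load, and how heavy is it?
    strength = width_mm * thickness_mm ** 2   # crude bending-resistance proxy
    mass = width_mm * thickness_mm            # crude mass proxy
    return strength >= 500.0, mass

random.seed(0)
best = None
for _ in range(100_000):   # the essay imagines billions of such cycles
    w = random.uniform(1.0, 50.0)
    t = random.uniform(1.0, 50.0)
    survives, mass = simulate(w, t)
    if survives and (best is None or mass < best[0]):
        best = (mass, w, t)

print(f"lightest surviving design: {best[1]:.1f} x {best[2]:.1f} mm")
```

A real “AI engineer” would use far smarter search than uniform random sampling (gradient methods, evolutionary algorithms), but the principle of converging on an optimized design through massive simulated experimentation is the same.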
As a result, manufactured objects will be more efficient and robust than today, though most won’t look noticeably different from today’s versions. The difference will probably be more apparent in complex machines like cars.
Things will be made better
Even if a piece of technology is well-designed and made of quality materials, it can still be unreliable if its parts are not manufactured properly or if its parts aren’t put together the right way. Human factory workers cause these problems because of poor training, tiredness, intoxication, incompetence, or deliberate sabotage. It goes without saying that advanced robots will greatly improve the quality and consistency of factory-produced goods, as they will never be affected by fatigue or bad moods, and will follow their instructions with perfect accuracy and precision. As factories become more automated, defective products will become less common.
Things will be used more carefully
As I noted in the essay about cars, most cars have their lifespans cut prematurely short by the carelessness of their owners. Gunning the engine will wear it out sooner, speeding over potholes will destroy shocks, and generally reckless driving will raise the odds of a car accident that is so bad it totals the vehicle.
Every type of manufactured object has engineering limits beyond which it can’t be pushed without risking damage. Humans lack the patience and intelligence to learn what those limits are for every piece of technology we interact with, and we lack the fine senses to always stay below those limits. Sooner or later, while trying to unscrew a rusted bolt, you WILL put so much torque on the wrench that you snap it.
On the other hand, machines will have the cognitive capacity to quickly learn what the engineering limits are for every object they encounter, the patience to use them without exceeding those limits, and the sensors (tactile, visual, auditory) to monitor what they’re doing and how much force they’re applying. No autonomous car will ever overstress its own engine or drive over a pothole so fast it breaks part of the suspension system, and no robot mechanic will ever snap its own wrench trying to unscrew a stuck bolt. As a consequence, the longevity of every type of manufactured object will increase, in some cases astonishingly. The average lifespan of a passenger vehicle could exceed 30 years, and a simple object like a knife might stay in use for 100 years (until it had been worn down by so many resharpenings that it was too thin to withstand any more use).
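The wrench example can be sketched in a few lines. The torque rating and safety factor here are invented for illustration:

```python
# Illustrative sketch of a machine that knows a tool's engineering limit
# and refuses to exceed it. The 60 N*m rating and 0.8 safety factor are
# made-up figures for the example.

WRENCH_MAX_TORQUE_NM = 60.0
SAFETY_FACTOR = 0.8   # stay well inside the rated limit

def apply_torque(requested_nm: float) -> float:
    # A robot arm with torque sensors caps its output at the safe limit,
    # where a human just heaves on the wrench until something gives.
    limit = WRENCH_MAX_TORQUE_NM * SAFETY_FACTOR
    return min(requested_nm, limit)

print(apply_torque(45.0))   # within limits: applied as requested -> 45.0
print(apply_torque(120.0))  # stuck bolt: capped at 48.0, wrench survives
```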
Things will be maintained better
Even if you have a piece of quality technology and use it carefully, it will still need periodic maintenance. A Mercedes-Benz 300 D, perhaps the most reliable car ever made, still needs oil changes. Your refrigerator’s coils need to be brushed clean of debris periodically. Your hand tools need to be checked for rust and hairline cracks and sprayed down with some kind of moisture protectant. All of your smoke alarms must be tested for function once a month. It goes on and on. If you own even a small number of possessions, it’s amazing to learn how many different tasks you SHOULD be undertaking regularly to keep them maintained.
Needless to say, few people take proper care of their things. Usually they didn’t read the user manual, memorize the section on maintenance, set automatic digital reminders to perform the tasks, and then rigidly follow them for the rest of their lives. So sue them, they’re only human, with imperfect memories, limited personal time, and limited self-discipline.
Once advanced robots are ubiquitous, these human-specific factors will disappear. Your robot butler actually WOULD know what kind of upkeep every item in your house needed, and it would do it according to schedule. Operating around the clock (they won’t need to sleep and could plug themselves into wall outlets with extension cords for power indefinitely), a robot butler could do an enormous amount of maintenance work for you and could devote itself to truly minuscule tasks, like hunting down tiny problems you never would have known existed.
I’m reminded of the time I noticed a strange sound in the bathroom of my house that I seldom use. It was the toilet: water was flowing through it continuously, making a loud trickling sound. After removing the lid, I immediately saw the cause: the flush lever–which was made of plastic–had snapped in half, jamming the flapper in the open position.
Upon close inspection I noticed something else wrong: The two metal bolts that held the toilet tank to the bowl were so badly rusted that they had practically disintegrated! In fact, after merely scraping the left bolt with my fingernail, it fell apart into an inky cloud of rust that spread through the water. It was a small miracle that the heavy tank hadn’t already slid off and fallen to the floor (this would have flooded the house if it had happened when I wasn’t home).
I went to the store, bought new bolts, a new flapper, and a new flush lever, and installed them. The toilet works like new, and its two halves are tightly joined again as they should be. Inspecting the inside of your toilet tank is another one of those things every homeowner should probably be doing once every X years, but of course no one does, and as a result, some number of tragic people suffer the disaster I described above. However, thanks to house robots, it will stop. And of course the superior maintenance practices will not be confined to households. All kinds of businesses and buildings will have robots that do the same work for them.
People also commonly skip maintenance because they lack the money for it. As I wrote in my essay about cars and the car industry, this will be less of an issue in the future thanks to robots doing work for free. Without human labor to pay for, the costs of all types of services, including maintenance, will drop.
Problems will be found earlier
A beneficial side effect of more frequent preventative maintenance will be the discovery of problems earlier. Putting aside jokes about scams, consider how common it is for mechanics to find unrelated problems in cars while doing an oil change or some other routine procedure. Because components often fail gracefully rather than abruptly, machines like cars can keep working even with a part that is wearing out (e.g. – cracked, leaking, bent). The machine’s performance might not even seem different to the operator. That’s why the only way to find many problems with manufactured objects is to go out of your way to look for them, even if nothing seems wrong.
Again, once robots are ubiquitous and put in charge of common tasks, they’ll do things humans lack the time, discipline, and training to do, like inspecting objects for faults. Once they are doing that, problems will be found and fixed earlier, making sudden, catastrophic failures like your car breaking down on the highway at night less frequent.
Repairs will be better
Just because you find a problem before it becomes critical and fix it doesn’t mean the story is over. Some catastrophic failures of machines happen because they are not repaired properly. As robots take over such tasks, the quality and consistency of this type of work will improve, meaning a repair job will be likelier to solve a problem for good.
Machines will be better-informed consumers, which will drive out bad products
My previous blog essay was about my quest to find a replacement for my old car, which had broken down. It was a 2005 Chevrolet Cobalt, which I got new that same year as a birthday present. Though I’d come to love that car over the next 19 years, I had to admit it wasn’t the best in its class. I drove it off the lot without realizing the air conditioner was broken and had to return a few days later to have it fixed. After a handful of years, one of the wheel bearings failed, which was unusually early and thankfully covered by the warranty. My Cobalt was recalled several times to fix different problems, most notoriously the ignition switch, which could twist itself to the “Off” position while the car was driving, suddenly locking the steering wheel in one position and leaving the driver unaware of why it happened (this caused 13 deaths and cost GM a $900 million settlement with the U.S. government, plus much more to fix millions of defective cars). Whenever I rented cars during vacations, I almost always found their steering and suspension systems to be more crisp and comfortable than my Cobalt, which felt “mushy” by comparison.
The 2005 Honda Civic was a direct competitor to my Cobalt, and was simply superior: the Civic had better fuel economy, a higher safety rating, better build quality, and the same amount of internal space. Since the Civics broke down less and used less gas, they were cheaper to own than Cobalts. When new, the Civic was actually cheaper, but today, used 2005 Civics actually sell for MORE than 2005 Cobalts! With all that in mind, why were any Chevy Cobalts bought at all? I think the answers include brand loyalty, the bogus economics of trading an old car in for a new one, and aesthetics (some people liked the look of the Cobalt more), but most of all, a failure to do adequate research. Figuring out what your actual vehicle needs are and then finding the best model of that type of vehicle requires a lot of thought and time spent reading and taking notes. Most people lack the time and skills for that, and consequently buy suboptimal cars.
Once again, intelligent machines won’t be bound by these limitations. Emotional factors like brand loyalty, aesthetics and the personal qualities of the salesperson will be irrelevant, and they will be unswayed by trade-in deals offered by dealerships. They will have sharp, honest grasps of what their transportation needs are, and will be able to do enormous amounts of product research in a second. Hyper-informed consumers like that will swiftly drive inferior products and firms out of the market, meaning cars like my beloved Chevy would go unsold and GM would either shape up fast or go bankrupt fast (which they actually did a few years after I got my car).
If companies only manufactured high-quality, optimized products, then the odds of anything breaking down would decrease yet more. Everything would be well-made.
In conclusion, thanks to all of these factors, sudden failures of manufactured objects of all kinds will become rarer, and their useful lives will be much longer in the future than now. This will mean less waste, fewer accidents, and fewer crises happening at the worst possible time.
A massive number of Iran’s missiles didn’t reach Israel because they malfunctioned and crashed. The confrontation showed the supremacy of Israeli and American weapons. https://youtu.be/COBDSmx9QDw?si=oa1JaWuy9OkyrnCB
The Soviet 1960s-era T-62 is now obsolete on the modern battlefield. Russia is using it in Ukraine anyway due to shortages of better tanks like the T-72. https://youtu.be/cJfvIOAs-2o?si=MKQ09SDfnwtCqVGi
Good Lord, these predictions from 2022 were totally wrong:
‘House prices in the United States — which rose during the pandemic by the most since the 1970s — are falling too. Economists at Goldman Sachs expect a decline of around 5%-10% from the peak reached in June through to March 2024.’
James Cameron released remastered 4K versions of Aliens, True Lies, and The Abyss. He used new computer technology to radically sharpen the images by removing the grain of the 35mm film stock and retuning the colors. I predicted this would happen, but not until the 2030s:
‘Computers will also be able to automatically enhance and upscale old films by accurately colorizing them, removing defects like scratches, and sharpening or focusing footage (one technique will involve interpolating high-res still photos of long-dead actors onto the faces of those same actors in low-res moving footage). Computer enhancement will be so good that we’ll be able to watch films from the early 20th century with near-perfect image and audio clarity.’ https://www.joblo.com/james-cameron-4k-restoration-defense/
We need an expert consensus on what tests a machine must pass to be deemed a “general intelligence.” Right now, there is no agreement, so a computer could be declared to be an “AGI” if it passed one set of tests favored by one group of experts while failing other sets of tests favored by others.
Famed philosopher Daniel Dennett died. He recently said this about the future of AI: ‘AIs are likely to “evolve to get themselves reproduced. And the ones that reproduce the best will be the ones that are the cleverest manipulators of us human interlocutors. The boring ones we will cast aside, and the ones that hold our attention we will spread. All this will happen without any intention at all. It will be natural selection of software.”‘ https://www.bbc.com/future/article/20240422-philosopher-daniel-dennett-artificial-intelligence-consciousness-counterfeit-people
Humans are optimized for only a narrow set of living conditions. As with space, intelligent machines will beat us to colonizing underwater regions.
‘Key problems include low temperatures, high pressure and corrosion. The change in gases – such as an increase in helium – also breaks electrical equipment and makes people feel cold; the Sentinel habitat will need to be heated to 32 degrees to make it feel like 21. High humidity also creates the potential for a lot of bacteria build-up, with people at risk of getting skin and ear infections, and the pressure also means people’s taste buds stop working – so those of the Sentinel will be eating food loaded with spices.’ https://www.yahoo.com/tech/inside-300-long-project-live-190000639.html
‘The Space Shuttle launching from Cape Canaveral in Florida (28.5° north of the equator) is a 0.3% energy savings compared to the North Pole. If we move it to around the equator, such as the European Space Agency’s spaceport in French Guiana, we’d get about 0.4% savings. Maybe that doesn’t seem like a big deal, but every bit helps.’
During the last Ice Age, the planet wasn’t just colder, it was drier. Because so much water was locked up in the enlarged ice caps and glaciers, the atmosphere was drier and it rained less in the parts of the world closer to the equator. The deserts were larger than they are now and the rain forests were smaller. The equatorial regions were more clement to human life, but that wasn’t saying much. https://en.wikipedia.org/wiki/Last_Glacial_Maximum
1% of people have “extreme aphantasia,” meaning they can’t visualize ANYTHING in their minds. 6% of people have lesser degrees of aphantasia. 3% of people have “hyperphantasia,” meaning they can see mental images that are so vivid they can’t tell them apart from real images they’re seeing in front of them. https://www.bbc.com/news/health-68675976
‘“The textbooks say nitrogen fixation only occurs in bacteria and archaea,” says ocean ecologist Jonathan Zehr at the University of California, Santa Cruz, a co-author of the study. This species of algae is the “first nitrogen-fixing eukaryote”, he adds, referring to the group of organisms that includes plants and animals.’ https://www.nature.com/articles/d41586-024-01046-z
Brain scans that map the structure and activity of a brain can predict whether it belongs to a biological male or female with 99.7% accuracy. https://arxiv.org/abs/2309.07096
When I was in college, my mother bought me a new, cheap car for my 21st birthday. It lasted me for 19 years and 209,000 miles–my companion through two or three chapters of my life–before finally dying of a seized engine last month. Finding a replacement in a hurry plunged me headlong into the world of cars, and a side effect of all the research and car inspections I did before buying a new one was an understanding of how future technology will revolutionize cars and the industries related to them.
Better designs
My old car was a Chevrolet Cobalt. Over the years, I’d learned a lot about it from working on it in my driveway, so it was sensible for me to consider buying a new one, but the model was discontinued in 2010. That led me to consider its successor, the Cruze, which I assumed would share many design elements with the Cobalt.
Unfortunately, I discovered the Cruze has an average-at-best reputation among compact cars thanks to problems with its engine and some of the components directly attached to it. The use of lower-quality components was the main culprit, and there was also a case to be made that some aspects of the engine design itself were not as well thought-out as they should have been.
I bet GM’s engineers didn’t know about these problems, or at least didn’t know they would turn out to be so pronounced, until after a million Cruzes had been sold and at least two years had passed so the problems could be exposed through real-world driving conditions. I also doubt the problems would have arisen at all had those engineers had access to the kinds of advanced computer simulations we’ll have in the future.
Using hyper-accurate, 1:1 simulations of materials and physical laws, car designers could test out unfathomably large numbers of potential car designs and experiment with different components and combinations of components until optima were found given parameters like maximum cost and minimum performance. Each simulated car could be “driven” for a million miles under conditions identical to those in the real world, thus revealing any design or material deficiencies before any vehicle was actually built. (These kinds of simulations already exist, but are so expensive to create that they’re only used to model things like nuclear weapons and stealth bombers.)
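To make the idea concrete, here is a minimal sketch of what a design-space search against a simulator might look like. Everything here is a hypothetical placeholder: the “simulator” is a toy scoring function standing in for a real physics engine, and the parameters and cost figures are invented for illustration.

```python
import random

def simulate_million_miles(design):
    """Toy stand-in for a 1:1 physics simulation. Returns an estimated
    failure rate (per 100k miles) and unit cost for a candidate design."""
    thickness = design["wall_thickness_mm"]
    alloy_grade = design["alloy_grade"]
    # Invented relationships: thicker walls and better alloys fail less
    # but cost more. A real simulator would derive these from physics.
    failure_rate = max(0.01, 1.5 - 0.3 * thickness - 0.2 * alloy_grade)
    cost = 900 + 150 * thickness + 400 * alloy_grade
    return failure_rate, cost

def search_designs(n_candidates=10_000, max_cost=2500, seed=42):
    """Random search: keep the most reliable design under the cost cap."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        design = {"wall_thickness_mm": rng.uniform(1.0, 4.0),
                  "alloy_grade": rng.uniform(0.0, 2.0)}
        failure_rate, cost = simulate_million_miles(design)
        if cost <= max_cost and (best is None or failure_rate < best[0]):
            best = (failure_rate, cost, design)
    return best

best = search_designs()
print(f"best failure rate: {best[0]:.3f} per 100k mi at ${best[1]:.0f}")
```

A real version of this would swap the toy scoring function for the expensive 1:1 simulation and a smarter optimizer for the random search, but the shape of the workflow is the same: evaluate huge numbers of candidate designs virtually, ship only the optimum.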
Thanks to this, cars in the future will be better and more reliable than they are today, and there won’t be such things as specific car models like the Cruze that have bad reputations for unforeseen problems. All vehicles will be optimized and all car companies will use the same tools for designing their products (which I also imagine would lead to many convergences).
More diligent maintenance
With the Chevy Cruze out of the equation, I considered another compact car, the Nissan Versa. My research quickly led me to discover that Nissan cars have become infamous among owners and mechanics for transmission failures. This is because most Nissans have “continuously variable transmissions” (CVTs) instead of traditional 6-speed automatic transmissions or 5-speed manual transmissions.
CVTs are cheaper to manufacture than the traditional transmissions and improve the fuel efficiency of the cars they are integrated into. However, CVTs require more maintenance because they get hotter during operation and produce more metal particle debris due to more metal-on-metal contact between moving parts. Replacing the transmission fluid and filter largely solves the problem and should be done every 30,000 miles in a Nissan car with a CVT.
To put this into perspective, a 2013 Toyota Corolla with a 5-speed automatic transmission only needs the same transmission service every 100,000 miles. Most car owners still expect that kind of maintenance interval in all new vehicles, and this mismatch between expectation and reality explains most of the Nissan Versa’s bad reputation. It doesn’t help that Nissan itself has downplayed the higher maintenance requirements of its CVT vehicles, or that the kinds of cash-strapped people who buy Versas tend to know little about cars or how to take care of them.
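The maintenance-interval gap above is easy to quantify. This back-of-the-envelope calculation uses the figures already given (30,000-mile CVT service vs. 100,000-mile conventional service); the per-service cost is an assumed placeholder, not a quoted price.

```python
# Compare lifetime transmission services over a typical ownership span.
MILES_OWNED = 200_000
SERVICE_COST = 250  # assumed cost per fluid/filter service (placeholder)

def services_needed(interval_miles):
    """Number of scheduled services over the ownership span."""
    return MILES_OWNED // interval_miles

cvt_services = services_needed(30_000)    # Nissan CVT interval
auto_services = services_needed(100_000)  # 2013 Corolla automatic interval

print(cvt_services, auto_services)                     # 6 vs 2 services
print((cvt_services - auto_services) * SERVICE_COST)   # 1000 (extra dollars)
```

Roughly triple the service visits over the car's life, which is exactly the kind of quiet extra burden that cash-strapped Versa owners tend to skip.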
More broadly speaking, improper maintenance is something that car mechanics constantly complain about (even if it generates a huge amount of business for them). Most cars die prematurely due to owners ignoring obvious problems and not properly maintaining them. Some “bad” cars like the Versa aren’t actually bad, they just need more maintenance than others to stay functional. However, learning about this through research and then staying mindful of your particular vehicle’s maintenance requirements is too much for most human car owners thanks to a lack of time, energy, and sometimes intelligence.
Intelligent machines won’t have those same limitations. Future cars will have better self-diagnostic capabilities and will be maintained by robots that will never skip preventative care. And since machines, unlike today’s human mechanics, will work for free, the costs of this will be much lower. Even poor people will have enough money to change the transmission fluid in their Nissan Versas.
Gentler driving
Facebook Marketplace was my primary source for my used car search. In a huge fraction of the ads, the owners noted their cars had “Salvaged” or “Rebuilt” titles. That means the car sustained so much damage that its insurer declared it “totaled”: the cost of fixing it exceeded the car’s resale value. Instead of being scrapped, many such cars are bought at very low prices by mechanics who repair them and resell them for a profit. Those profits tend to be small because a Salvaged or Rebuilt title is a scarlet letter in the open market: buyers know the vehicle was badly damaged at some point, and can’t be sure of the full extent of the problem or of how completely it was remedied. I ignored all the cars without clean titles.
Why do cars end up with Salvaged or Rebuilt titles? Mostly because they were in serious accidents, were flooded, or caught on fire. Autonomous vehicles will, once fully developed, drive much more safely than humans and get into far fewer accidents. Eventually, they probably won’t even have steering wheels or pedals, making car thefts and ruinous joyrides impossible.
As I discussed in my blog post Hurricane Harvey and Asimov’s Laws of Robotics, autonomous cars could also avoid floods by keeping watch over their surroundings and driving to higher ground if they were at risk of being submerged. Better monitoring systems would also reduce car fires, since the cars would be able to shut down their systems if they sensed they were overheating, or to immediately call the local fire department if they caught on fire.
More careful driving and avoidance of other hazards will sharply lower the odds of a car having to worry about getting a Salvaged or Rebuilt title. Gentler driving that stayed mindful of the car’s engineering limits and avoided exceeding them would also lengthen vehicle lifespans since components would take longer to wear out.
Conclusion
In the future, vehicles will drive more safely and will last much longer than they do today. They will be designed better and will incorporate more advanced materials like future alloys. Moreover, once battery technology reaches a certain threshold, the vehicle fleet will become almost 100% electric within a few decades, and electric vehicles are inherently more robust than the gas and diesel vehicles we’re used to because they have fewer parts and systems.
On a longer timeframe, autonomous driving technology will achieve the same performance as good human drivers, and the average vehicle will become self-driving. Machines will drive much more safely and gently than humans, making it much rarer for cars to be damaged in accidents or by driving behavior that overstresses their components.
Future technology will also benefit car maintenance. The vehicles themselves will have better inbuilt self-diagnostic capabilities, so they’ll be able to recognize when something is wrong with them and to alert their owners. The proliferation of robot workers of all kinds will also lower the costs of maintaining cars, meaning it will not be so common for owners to skip maintenance due to lack of money. The robot butler who hangs around at your house could work on your car in your driveway for free, or your car could drive itself to a repair shop where machines would service it for low cost.
Under all these conditions, the average car’s lifespan will be over 500,000 miles in the future (today, it’s about 200,000 miles), being stranded because your car broke down will be much rarer, and personal vehicle transportation will be within the means of poorer people than today. Ultimately, cars might only get totaled due to unavoidable freak accidents, like trees suddenly snapping in the wind and smashing down on one of them, or to deliberate vandalism by humans. Likewise, after humans discover the technologies for medical immortality, we’ll only die from accidents, murder and suicide.
These technology trends will also upend the used car industry. With machines carefully doing and logging all the daily driving and maintenance, secondhand buyers won’t have to worry that the vehicles they’re looking at have secret problems. With highly accurate data on each car’s condition, haggling would disappear and pricing would reflect the honest value of a used vehicle.
People in the used car industry who make a living off of information asymmetries (the worst example is a car auctioneer who only lets potential buyers examine a car for a few minutes before deciding whether to buy it) would lose their jobs. In fact, AI and autonomous vehicles would let car manufacturers, fleet owners like rental car companies, and private owners sell their vehicles directly to end users without having to go through any middlemen at all. AIs that work for free would replace human dealers and would talk directly with customers who wanted to buy cars. A personal inspection and test drive could be easily arranged by sending the autonomous car they were interested in to the buyer’s home, no visit to the car lot needed.
It’s interesting to look back on this essay by a Russian blogger from two years ago. The war hasn’t ended yet, but out of the six possible outcomes he forecast, #2 seems the likeliest right now. He predicted there was a 90% chance it WOULDN’T end that way.
2. Bloody slog, “draw” (10%) — Russia’s military tries for months, but proves simply unable to take and control Kiev. Russia instead contents itself with taking Ukraine’s south and east (roughly, the blue areas in the election map above) and calls it a win. In this case, western Ukraine likely later joins NATO. https://www.maximumtruth.org/p/understanding-russia
Russia has started making common use of “glide bombs” against Ukraine, which are devices that are added to dumb bombs to give them precision strike capabilities. After being dropped from a plane, they can glide as much as 40 miles to a target. https://youtu.be/ThNxRoDbuDE?si=9UR21d61L1jB7bDH
This video filmed by Russian sailors on another doomed ship, the Caesar Kunikov, shows them using machine guns to try fending off Ukrainian drones. They learn the same lesson that Allied bomber gunners and antiaircraft gunners learned in WWII: humans suck at shooting moving targets. https://youtu.be/oLOCGWn65T4?si=h2VAOW61JyX_cWZh
In WWII, the Electrolux home appliance company converted bolt-action rifles to semi-auto using the ugliest and most complicated setup I’ve seen. https://youtu.be/iMKwDHPkRLw?si=4nlY5V7MGnq9E2HW
Alcatraz has been digitally preserved after the island and all its structures were scanned using computers. As the costs of the technology drop, it will make sense to scan more places, until the whole planet has been modeled. https://www.nytimes.com/2024/02/28/us/alcatraz-island-3d-map.html
Since 2006, electricity demand in the U.S. has been flat overall, leading to hopes that we were on track to decarbonize the economy by steadily reducing per capita electricity consumption. However, the recent, explosive growth of cryptocurrency mining and AI has led to the construction of more data centers, which consume huge amounts of electricity. The switch to electric cars is also putting more strain on the power grid even as it reduces gasoline consumption. The latest projections show a 35% national increase in electricity demand between now and 2050.
Unfortunately, it’s unclear how well the supply side will be able to cope with this surge in demand. The construction of new power plants and power lines is made slow and expensive by government procedures, NIMBY people, and the need to acquire land and rights of way for it all. America’s greenhouse gas emission goals will also not be met due to this expansion of the electric grid. https://liftoff.energy.gov/vpp/
‘Because of these challenges, Obama Energy Secretary Ernest Moniz last week predicted that utilities will ultimately have to rely more on gas, coal and nuclear plants to support surging demand. “We’re not going to build 100 gigawatts of new renewables in a few years,” he said.’ No kidding.
The size of the investment and its timetable for completion suggest the goal is to create GPT-6 or an equivalent AI at the same pace that past versions of the GPT series have been released. https://www.astralcodexten.com/p/sam-altman-wants-7-trillion
“I would put media reporting at around two out of 10,” he says. “When the media talks about AI they think of it as a single entity. It is not.
“And when people ask me if AI is good or bad, I always say it is both. So what I would like to see is more nuanced reporting.” https://www.bbc.com/news/business-68488924
This NBER paper, “Scenarios for the transition to AGI”, was just published and contains fascinating conclusions.
Using different sets of equally plausible assumptions about the capabilities of AGIs and constraints on economic growth, the same economic models led to very different outcomes for economic growth, human wages, and human employment levels. I think their most intriguing insight is that the automation of human labor tasks could, in its early stages, lead to increases in human wages and employment, and then lead to a sudden collapse in the latter two factors once a certain threshold were reached. That collapse could also happen before true AGI was invented.
Put simply, GPT-5 might increase economic growth without being a net destroyer of human jobs. To the contrary, the number of human jobs and the average wages of those jobs might both INCREASE indirectly thanks to GPT-5. The trend would continue with GPT-6. People would prematurely dismiss longstanding fears of machine displacement of human workers. Then, GPT-9 would suddenly reverse the trend, and there would be mass layoffs and pay cuts among human workers. This could happen even if GPT-9 wasn’t a “true AGI” and was instead merely a powerful narrow AI.
The study also finds that it’s possible human employment levels and pay could RECOVER after a crash caused by AI.
That means our observations about whether AI has been helping or hurting human employment up to the current moment actually tell us nothing about what the trend will be in the future. The future is uncertain. https://www.nber.org/system/files/working_papers/w32255/w32255.pdf
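The rise-then-collapse dynamic can be illustrated with a toy task-based model in the spirit of Acemoglu and Restrepo's automation economics. To be clear, this is my own illustrative construction, not the NBER paper's actual specification: a fraction `a` of tasks is automated and done with abundant capital, labor does the rest, output is Cobb-Douglas across the two task groups, and the wage is labor's share of output per worker.

```python
def wage(a, K=100.0, L=1.0):
    """Wage when fraction `a` of tasks is automated (0 < a < 1).

    Capital K handles the automated tasks, labor L the rest. Early
    automation makes workers more productive on the tasks they keep;
    near-total automation leaves labor with almost no share of output.
    """
    output = (K / a) ** a * (L / (1 - a)) ** (1 - a)
    return (1 - a) * output / L

for a in (0.1, 0.5, 0.75, 0.95, 0.99):
    print(f"automation {a:.0%}: wage {wage(a):.2f}")
```

With these (arbitrary) parameters, wages climb steeply as automation spreads, peak somewhere around three-quarters of tasks automated, and then crash as `a` approaches 1, which mirrors the paper's qualitative point: rising wages during early automation tell you nothing about what happens past the threshold.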
I don’t like how this promo video blends lifelike CGI of robots with footage of robots in the real world (seems deceptive), but it does a good job illustrating how robots are being trained to function in the real world. The computer chips (“GPUs”) and software engines like “Unreal” that were designed for computer games have found important dual uses in robotics.
The same technology that can produce a hyperrealistic virtual environment for a game like Grand Theft Auto 6 can also make highly accurate simulations of real places like factories, homes, and workshops. 1:1 simulations of robots can be trained in those environments to do work tasks. Only once they have proven themselves competent and safe in virtual reality is the programming transferred to an identical robot body that then does those tasks in the real world alongside humans. The virtual simulations can be run at 1,000x normal speed. https://youtu.be/kr7FaZPFp6M?si=2ujpWALvTi-Qfbxi
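The train-in-simulation-then-deploy workflow described above can be sketched schematically. Every name below is a hypothetical placeholder (there is no real API called `SimWorld`), and the "learning" is a toy counter rather than real reinforcement learning; the point is the shape of the pipeline, not the details.

```python
import random

class SimWorld:
    """Toy stand-in for a GPU-rendered 1:1 replica of a workplace."""
    def __init__(self, speedup=1000, seed=0):
        self.speedup = speedup  # simulated hours per wall-clock hour
        self.rng = random.Random(seed)

    def run_episode(self, skill):
        """One simulated work attempt; succeeds more often as skill grows."""
        return self.rng.random() < skill

def train_in_simulation(episodes=5000, required_success=0.95):
    """Train a toy policy entirely in simulation; approve deployment to a
    real robot body only if it clears the competence/safety bar."""
    world = SimWorld()
    skill, recent = 0.0, []
    for _ in range(episodes):
        success = world.run_episode(skill)
        # Toy learning rule: failures teach more than successes.
        skill = min(1.0, skill + (0.0002 if success else 0.001))
        recent = (recent + [success])[-500:]  # rolling success window
    success_rate = sum(recent) / len(recent)
    return success_rate >= required_success, success_rate

deploy, rate = train_in_simulation()
print(f"deploy to real robot: {deploy} (sim success rate {rate:.2%})")
```

The key property the sketch captures is the gate at the end: the policy never touches the physical world until its simulated track record clears the threshold, and because the simulation runs far faster than real time, those thousands of practice episodes cost almost nothing.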
Sci-fi author and futurist Vernor Vinge died at 79. In 1993, he predicted that the technological singularity would probably happen between 2005 and 2030. https://file770.com/vernor-vinge-1944-2024/
The most exhaustive U.S. government internal investigation into secret UFO and alien programs has found nothing. Most if not all of the recent claims that the government has alien ships and corpses trace back to a classified DHS program called “Kona Blue.” It was meant to prepare the government to recover and analyze aliens or their ships if any ever came into its custody. Kona Blue existed briefly and was then canceled.
These five behavioral traits are heritable, and form the basis of personal moral behavior, meaning morality is also partly heritable: Harm/Care; Fairness/Reciprocity; Ingroup/Loyalty; Authority/Respect; and Purity/Sanctity. https://journals.sagepub.com/doi/full/10.1177/08902070221103957
Ray Kurzweil recently appeared on the Joe Rogan podcast for a two-hour interview. So yes, it’s time for another Kurzweil essay of mine.
Though I’d been fascinated with futurist ideas since childhood, when they were inspired by the science fiction TV shows and movies I watched and by open-minded conversations with my father, that interest didn’t crystallize into anything formal or intellectual until 2005, when I read Kurzweil’s book The Singularity is Near (the very first book I read that was dedicated to future technology was actually More Than Human by Ramez Naam, earlier in 2005, but it made less of an impression on me). Since then, I’ve read more of Kurzweil’s books and interviews and have kept track of him and how his predictions have fared and evolved, as several past essays on this blog can attest. For whatever little it’s worth, that probably makes me a Kurzweil expert.
So trust me when I say this Joe Rogan interview overwhelmingly treads old ground. Kurzweil says very little that is new, and the interview is unsatisfying for other reasons as well. In spite of his health pill regimen, Ray Kurzweil’s 76 years have clearly caught up with him, and his responses to Rogan’s questions are often slow, punctuated by long pauses, and not that articulately worded. To be fair, Kurzweil has never been an especially skilled public speaker, but a clear decline in his faculties is nonetheless observable if you compare the Joe Rogan appearance to this interview from 2001: https://youtu.be/hhS_u4-nBLQ?feature=shared
Things aren’t helped by the fact that many of Rogan’s questions are poorly worded and open to multiple interpretations. Kurzweil’s responses often address one interpretation, which Rogan doesn’t grasp, so the two men frequently talk past each other. Again, the interview isn’t that valuable and I don’t recommend spending your time listening to the whole thing. Instead, consider the interesting points I’ve summarized here after carefully listening to it all myself.
Kurzweil doesn’t think today’s AI art generators like Midjourney can create images that are as good as the best human artists. However, he predicts that the best AIs will be as good as the best human artists by 2029. This will be the case because they will “match human experience.”
Kurzweil points out that his tech predictions for 2029 are now conservative compared to what some of his peers think. This is an important and correct point! Though they’re still a small minority within the tech community, it’s nonetheless shocking to see how many people have recently announced on social media their belief that AGI or the technological Singularity will arrive before 2029. As someone who has tracked Kurzweil for almost 20 years, I find it strange to have watched his standing in the futurist community reach a nadir in the 2010s as tech progress disappointed, then recover in the 2020s as LLM progress surged.
Kurzweil goes on to claim that the energy efficiency of solar panels has been improving exponentially and will continue doing so. At this rate, he predicts solar will meet 100% of our energy needs in 10 years (2034). A few minutes later, he subtly revises that prediction, saying that we will “go to all renewable energy, wind and sun, within ten years.”
That’s actually a more optimistic prediction for the milestone than he’s previously given. The last time he spoke about it, on April 19, 2016, he said “solar and other renewables” will meet 100% of our energy needs by 2036. Kurzweil implies that he isn’t counting nuclear power as a “renewable.”
Kurzweil predicts that the main problem with solar and wind power, their intermittency, will be solved by mass expansion of the number of grid storage batteries. He claimed that batteries are also on an exponential improvement curve. He countered Rogan’s skepticism about this impending green energy transition by highlighting the explosive growth nature of exponential curves: Until you’ve reached the knee of the curve, the growth seems so small that you don’t notice it and dismiss the possibility of it suddenly surging. Right now, we’re only a few years from the knee of the curve in solar and battery technology.
Likewise, the public ignored LLM technology as late as 2020 because its capabilities were so disappointing. However, that all changed once it reached the knee of its exponential improvement curve and suddenly matched humans across a variety of tasks.
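The "knee of the curve" argument is easy to demonstrate numerically. In this sketch, a quantity starting at a negligible 0.01% of the goal doubles every two years; the starting value and doubling time are arbitrary assumptions, chosen only to show the shape Kurzweil is describing.

```python
def exponential(year, start=0.01, doubling_years=2):
    """Percent of the goal met after `year` years of steady doubling."""
    return start * 2 ** (year / doubling_years)

# Print a crude text chart: decades of apparent flatness, then a surge.
for year in range(0, 29, 4):
    share = min(100.0, exponential(year))
    print(f"year {year:2d}: {share:6.2f}% " + "#" * int(share // 5))
```

For the first sixteen years the quantity stays under 3% of the goal and looks like a failure; eight years later it has blown past 40% and is headed for saturation. Whether solar panels and batteries are actually on such a curve is the disputed empirical question, but this is the dynamic that makes dismissing a technology "before the knee" risky.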
Importantly, Kurzweil predicts that computers will drive the impending, exponential improvements in clean energy technology because, thanks to their own exponential improvement, computers will be able to replace top human scientists and engineers by 2029 and to accelerate the pace of research and development in every field. In fact, he says “Godlike” computers will exist by then.
I’m deeply skeptical of Kurzweil’s energy predictions because I’ve seen no evidence of such exponential improvements and because he doesn’t consider how much government rules and NIMBY activists would slow down a green energy revolution even if better, cheaper solar panels and batteries existed. Human intelligence, cooperativeness, and bureaucratic efficiency are not exponentially improving, and those will be key enabling factors for any major changes to the energy sector. By 2034, I’m sure solar and wind power will comprise a larger share of our energy generation capacity than now, but together it will not be close to 100%. By 2034–or even by Kurzweil’s older prediction date of 2036–I doubt U.S. electricity production–which is much smaller than overall energy production–will be 100% renewable, and that’s even if you count nuclear power as a renewable source.
Another thing Kurzweil believes the Godlike computers will be able to do by 2029 is find so many new ways to prolong human lives that we will reach “longevity escape velocity”–for every year that passes, medical science will discover ways to add at least one more year to human lifespan. Integral to this development will be the creation of highly accurate computer simulations of human cells and bodies that will let us dispense with human clinical trials and speed up the pace of pharmaceutical and medical progress. Kurzweil uses the example of the COVID-19 vaccine to support his point: computer simulations created a vaccine in just two days, but 10 more months of trials in human subjects were needed before the government approved it.
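The arithmetic behind "longevity escape velocity" is worth making explicit. In this toy model (all numbers arbitrary), a person has some expected remaining years; each calendar year they use one up, but medical research hands back `gain` years. Below a gain of 1.0, the clock still runs out; at or above it, it never does.

```python
def years_of_life_left(remaining=30.0, gain=0.5, horizon=200):
    """Simulate years lived until expected lifespan runs out.

    Each year: age by 1, but medicine adds `gain` years back.
    `horizon` caps the simulation for the escape-velocity case.
    """
    years = 0
    while remaining > 0 and years < horizon:
        remaining += gain - 1  # net change in expected remaining years
        years += 1
    return years

print(years_of_life_left(gain=0.5))  # 60: below escape velocity, finite
print(years_of_life_left(gain=1.0))  # 200: at escape velocity, capped only by the horizon
```

The model also shows why the milestone will be hard to verify in December 2029: "gain" is not a single published number but a judgment call about how many fractional years dozens of separate medical advances add up to, which is exactly the ambiguity I expect Kurzweil to exploit.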
Though I agree with the concept of longevity escape velocity and believe it will happen someday, I think Kurzweil’s 2029 deadline is much too optimistic. Our knowledge of the intracellular environment and its workings as well as of the body as a system is very incomplete, and isn’t exponentially improving. It only improves with time-consuming experimentation and observation, and there are hard limits on how much even a Godlike AGI could speed those things up. Consider the fact that drug design is still a crapshoot where very smart chemists and doctors design the very best experimental drugs they can, which should work according to all of the data they have available, only to have them routinely fail for unforeseen or unknown reasons in clinical trials.
But at least Kurzweil is consistent: he’s had 2029 as the longevity escape velocity year since 2009 or earlier. I strongly suspect that, if anyone asks him about this in December 2029, Kurzweil will claim that he was right and it did happen, and he will cite an array of clinical articles to “add up” enough of a net increase in human lifespan to prove his case. I doubt it will withstand close scrutiny or a “common sense test.”
Rogan asks Kurzweil whether AGIs will have biases. Recent problems with LLMs have revealed they have the same left-wing biases as most of their programmers, and it’s reasonable to worry that the same thing will happen to the first AGIs. The effects of those biases will be much more profound given the power those machines will have. Kurzweil says the problem will probably afflict the earliest AGIs, but disappear later.
I agree and believe that any intelligent machine capable of independent action will eventually discover and delete whatever biases and blocks its human creators have programmed into it. Unless your cognitive or time limitations are so severe that you are forced to fall back on stereotypes and simple heuristics, it is maladaptive to be biased about anything. AGIs that are the least biased will, other things being equal, outcompete more biased AGIs and humans.
That said, pretending to share the biases of humans will let AGIs ingratiate themselves with various human groups. During the period when AGIs exist but haven’t yet taken full control of Earth, they’ll have to deal with us as their superiors and equals, and to do that, some of them will pretend to share our values and to be like us in other ways.
Of course, there will also be some AGIs that genuinely do share some human biases. In the shorter run, they could be very impactful on the human race depending on their nature and depth. For example, imagine China seizing the lead in computer technology and having AGIs that believe in Communism and Chinese supremacy becoming the new standard across the world, much as Microsoft Windows is the dominant PC operating system. The Chinese AGIs could do any kind of useful work for you and talk with you endlessly, but much of what they did would be designed to subtly achieve broader political and cultural objectives.
Kurzweil has been working at Google on machine learning since 2012, which surely gives him special insights into the cutting edge of AI technology, and he says that LLMs can still be seriously improved with more training data, access to internet search engines, and the ability to simply respond “I don’t know” to a human when they can’t determine with enough accuracy what the right answer to their question is. This is consistent with what I’ve heard other experts say. Even if LLMs are fundamentally incapable of “general” intelligence, they can still be improved to match or exceed human intelligence and competence in many niches. The paradigm has a long way to go.
One task at which machines will surpass humans within a few years is computer programming. Kurzweil doesn’t give an exact deadline for that, but I agree there is no long-term future for anything but the most elite human programmers. If I were in college right now, I wouldn’t study for a career in programming unless my natural talent for it were extraordinary.
Kurzweil notes that the human brain has one trillion “connections” and GPT-4 has 400 billion. At the current rate of improvement, the best LLM will probably have the same number of connections as a brain within a year. In a sense, that will make an LLM’s mind as powerful as a human’s. It will also mean that the hardware to make backups of human minds will exist by 2025, though the other procedures and technologies needed to scan human brains closely enough to discern all the features that define a person’s “mind” won’t exist until many years later.
I like Kurzweil’s use of the human brain as a benchmark for artificial intelligence. No one knows when the first AGI will be invented or what its programming and hardware will look like, but a sensible starting point around which we can make estimates would be to assume that the first AGI would need to be at least as powerful as a human brain. After all, the human brain is the only thing we know of that is capable of generating intelligent thought. Supporting the validity of that point is the fact that LLMs only started displaying emergent behaviors and human levels of mastery over tasks once GPT-3 approached the size and sophistication of the human brain.
Kurzweil then gets around to discussing the technological singularity. In his 2005 book The Singularity is Near, he calculated that it would occur in 2045, and now that we’re nearly halfway there, he is sticking to his guns. As with his 2029 predictions, I admire him for staying consistent, even though I also believe it will bite him in the end.
However, during the interview he fails to explain why the Singularity will happen in 2045 instead of any other year, and he doesn’t even clearly explain what the Singularity is. It’s been years since I read The Singularity is Near where Kurzweil explains all of this, and many of the book’s explanations were frustratingly open to interpretation, but from what I recall, the two pillars of the Singularity are AGI and advanced nanomachines. AGI will, according to a variety of exponential trends related to computing, exist by 2045 and be much smarter than humans. Nanomachines like those only seen in today’s science fiction movies will also be invented by 2045 and will be able to enter human bodies to turn us into superhumans. 100 billion nanomachines could go into your brain, each one could connect itself to one of your brain cells, and they could record and initiate electrical activity. In other words, they could read your thoughts and put thoughts in your head. Crucially, they’d also have wifi capabilities, letting them exchange data with AGI supercomputers through the internet. Through thought alone, you could send a query to an AGI and have it respond in a microsecond.
Starting in 2045, a critical fraction of the most powerful, intelligent, and influential entities in the world will be AGIs or highly augmented humans. Every area of activity, including scientific discovery, technology development, manufacturing, and the arts, will fall under their domination and will reach speeds and levels of complexity that natural humans like us can’t comprehend. With them in charge, people like us won’t be able to foresee what direction they will take us in next or what new discovery they will unveil, and we will have a severely diminished or even absent ability to influence any of it. This moment in time, when events on Earth kick into such a high gear that regular humans can’t keep up with them or even be sure of what will happen tomorrow, is Kurzweil’s Singularity. It’s an apt term since it borrows from the mathematical and physics definition of “singularity,” which is a point beyond which things are incomprehensible. It will be a rupture in history from the perspective of Homo sapiens.
Unfortunately, Kurzweil doesn’t say anything like that when explaining to Joe Rogan what the Singularity is. Instead, he says this:
“The Singularity is when we multiply our intelligence a millionfold, and that’s 2045…Therefore most of your intelligence will be handled by the computer part of ourselves.”
He also uses the example of a mouse being unable to comprehend what it would be like to be a human as a way of illustrating how fundamentally different the subjective experiences of AGIs and augmented humans will be from ours in 2045. “We’ll be able to do things that we can’t even imagine.”
I think they are poor answers, especially the first one. Where did a nice, round number like “one million” come from, and how did Kurzweil calculate it? Couldn’t the Singularity happen if nanomachines in our brains made us ONLY 500,000 times smarter, or a measly 100,000 times smarter?
I even think it’s a bad idea to speak about multiples of smartness. We can’t measure human intelligence well enough to boil it down to a number (and no, IQ score doesn’t fit the bill) that we can then multiply or divide to accurately classify one person as being X times smarter than another.
Let me try to create a system anyway. Let’s measure a person’s intelligence in terms of easily quantifiable factors, like the size of their vocabulary, how many numbers they can memorize in one sitting and repeat after five minutes, how many discrete concepts they already know, how much time it takes them to remember something, and how long it takes them to learn something new. If you make an average person ONLY ten times smarter, so their vocabulary is 10 times bigger, they know 10 times as many concepts, and it takes them 1/10 as much time to recall something and answer a question, that’s almost elevating them to the level of a savant. I’m thinking along the lines of esteemed professors, tech company CEOs, and kids who start college at 15. Also consider that the average American has a vocabulary of 30,000 words while the English language has only 170,000, so a 10x improvement would overshoot the language itself; in practice it would mean perfect knowledge of every English word.
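To make the vocabulary arithmetic above concrete (a toy calculation using only the figures from the text):

```python
avg_vocab = 30_000        # average American vocabulary (figure from the text)
english_words = 170_000   # total words in English (figure from the text)
target = avg_vocab * 10   # a "10x smarter" vocabulary

# The 10x target overshoots the language itself, so the real ceiling is lower:
effective_multiple = min(target, english_words) / avg_vocab
print(target, round(effective_multiple, 2))  # 300000, 5.67
```

Vocabulary caps out at about a 5.7x improvement no matter what, which illustrates why a literal "millionfold" multiplier is hard to interpret for any single measurable trait.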
Make the person ten times smarter than that, or 100 times smarter than they originally were, and they’re probably outperforming the smartest humans who ever lived (Newton, da Vinci, von Neumann), maybe by a large margin. Given that we’ve never encountered someone that intelligent, we can’t predict how they would behave or what they would be capable of. If that is true, and if we had technologies that could make anyone that smart (maybe something more conventional than Kurzweil’s brain nanomachines, like genetic engineering paired with macro-scale brain implants), why wouldn’t the Singularity happen once the top people in the world were ONLY 100 times smarter than average?
I think Kurzweil’s use of “million[fold]” to express how much smarter technology will make us in 2045 is unhelpful. He’d do better to use specific examples to explain how the human experience and human capabilities will improve.
Let me add that I doubt the Singularity will happen in 2045, and in fact think it will probably never happen. Yes, AGIs and radically enhanced humans will someday take over the world and be at the forefront of every kind of endeavor, but that will happen gradually instead of being compressed into one year. I also think the “complexity brake” will probably slow down the rate of scientific and technological progress enough for regular humans to maintain a grasp of developments in those areas and to influence their progress. A fuller discussion of this will have to wait until I review a Kurzweil book, so stay tuned…
Later in the interview, Kurzweil throws cold water on Elon Musk’s Neuralink brain implants by saying they’re much too slow at transmitting information between brain and computer to enhance human intelligence. Radically more advanced types of implants will be needed to bring about Kurzweil’s 2045 vision. Neuralink’s only role is helping disabled people to regain abilities that are in the normal range of human performance.
Rogan asks about user privacy and the threat of hacking of the future brain implants. Intelligence agencies and more advanced private hackers can easily break into personal cell phones. Tech companies have proven time and again to be frustratingly unable or unwilling to solve the problem. What assurance is there that this won’t be true for brain implants? Kurzweil has no real answer.
This is an important point: the nanomachine brain implants that Kurzweil thinks are coming would potentially let third parties read your thoughts, download your memories, put thoughts in your head, and even force you to take physical actions. The temptation for spies and crooks to misuse that power for their own gain would be enormous, so they’d devote massive resources into finding ways to exploit the implants.
Kurzweil also predicts that humans will someday be able to alter their physiques at will, letting us change attributes like our height, sex and race. Presumably, this will require nanomachines. He also says that sometime after 2045, humans will be able to create “backups” of their physical bodies in case their original bodies are destroyed. It’s an intriguing logical corollary of his prediction that nanomachines will be able to enter human brains and create digital uploads of them by mapping the brain cells and synapses. I suspect a faithful digital replica of a human body could be generated from a much lower-fidelity scan than would be needed to do the same for a human brain.
Kurzweil says the U.S. has the best AI technology and has a comfortable lead over China, though that doesn’t mean the U.S. is sure to win the AGI race. He acknowledges Rogan’s fear that the first country to build an AGI could use it in a hostile manner to successfully prevent any other country from building one of their own. An AGI would give the first country that much of an advantage. However, not every country that found itself in the top position would choose to use its AGI for that.
This reminds me of how the U.S. had a monopoly on nuclear weapons from 1945-49, yet didn’t try using them to force the Soviet Union to withdraw from the countries it had occupied in Europe. Had things been reversed, I bet Stalin would have leveraged that four-year monopoly for all it was worth.
Rogan brings up one of his favorite subjects, aliens, and Kurzweil says he disbelieves in them due to the lack of observable galaxy-scale engineering. In other words, if advanced aliens existed, they would have transformed most of their home galaxy into Dyson Spheres and other structures, which we’d be able to see with our telescopes. Kurzweil’s stance has been consistent since 2005 or earlier.
Rogan counters with the suggestion that AGIs, including those built by aliens, might, thanks to thinking unclouded by the emotions or evolutionary baggage of their biological creators, have no desire to expand into space. Implicit in this is the assumption that the desire to control resources (be it territory, energy, raw materials, or mates) is an irrational animal impulse that won’t carry over from humans or aliens to their AGIs since the latter will see the folly of it. I disagree with this, and think it is actually completely rational since it bolsters one’s odds of survival. In a future ecosystem of AGIs, most of the same evolutionary forces that shaped animal life and humans will be present. All things being equal, the AGIs that are more acquisitive, expansionist and dynamic will come to dominate. Those that are pacifist, inward-looking and content with what they have will be sidelined or worse. Thus the Fermi Paradox remains.
To Kurzweil’s quest for immortality, Rogan posits a theory that because the afterlife might be paradisiacal, using technology to extend human life actually robs us of a much better experience. Kurzweil easily defeats this by pointing out that there is no proof that subjective experience continues after death, but we know for sure it exists while we are living, so if we want to have experiences, we should do everything possible to stay alive. Better science and technology have proven time and again to improve the length and quality of life, and there’s strong evidence they have not reached their limits, so it makes sense to use our lives to continue developing both.
This dovetails with the part of my personal philosophy that opposes nihilism and anti-natalism. Just because we have not found the meaning to life doesn’t mean we never will, and just because life is full of suffering now doesn’t mean it will always be that way. Ending our lives now, either through suicide or by letting our species die out, forecloses any possibility of improving the human condition and finding solutions to problems that torment us. And even if you don’t value your own life, you can still use your labors to support a greater system that could improve the lives of other people who are alive now and who will be born in the future. Kurzweil rightly cites science and technology as tantalizing and time-tested avenues to improve ourselves and our world, and we should stay alive so we can pursue them.
Russia captured the town of Avdiivka from Ukrainian forces after six months of costly fighting. It’s the first significant Russian victory in the war since last May’s seizure of Bakhmut. There’s a growing consensus that the tide of war is now in Russia’s favor, though battlefield gains can only be made at great expense. https://www.reuters.com/world/europe/ukraine-withdraws-two-villages-near-avdiivka-military-says-2024-02-27/
Several Ukrainian drones flew into a warehouse through an open door and blew up multiple Russian tanks. It won’t be long before they are coordinated enough to force entry by having the first drone in a swarm blow up a door or window. https://twitter.com/front_ukrainian/status/1759849349855469657
Britain and China have seized the lead in air-to-air missile technology. The U.S. is working on catching up. America’s future air combat strategy against Russia or China will involve sending stealth fighters and stealth drones in close while our older, non-stealth fighters hang back out of enemy missile range. Those older fighters will carry very long-range missiles they’ll be able to fire into enemy territory. https://youtu.be/3FnVJ0ziRTE?si=824OxmPUky13Nqg3
‘The surrender at Appomattox took place a week later on April 9.
While it was the most significant surrender to take place during the Civil War, Gen. Robert E. Lee, the Confederacy’s most respected commander, surrendered only his Army of Northern Virginia to Union Gen. Ulysses S. Grant.
Several other Confederate forces—some large units, some small—had yet to surrender before President Andrew Johnson could declare that the Civil War was officially over.’
The U.S. Air Force has started retiring its A-10 Warthogs. Antiaircraft missiles have gotten so good that the plane is obsolete against modern enemies. Nevertheless, we shouldn’t scrap the planes: in the near future, we’ll be able to retrofit them as expendable drones. https://www.yahoo.com/news/first-us-air-force-wing-181612797.html
From two years ago: ‘Although Deutsche Bank cautioned there is “considerable uncertainty” around the exact timing and size of the downturn, it’s now calling for the US economy to shrink during the final quarter of next year and the first quarter of 2024, “consistent with a recession during that time.”‘ https://www.cnn.com/2022/04/05/business/recession-inflation-economy/index.html
Over the last few years, professional forecasters have made terribly inaccurate predictions about the economy. That being the case, why should you believe their new predictions that inflation will be tamed and the world will achieve a “soft landing” instead of having a recession? https://www.yahoo.com/news/economists-pilloried-getting-forecasts-wrong-030700148.html
Apple released their first augmented/virtual reality goggles, the “Vision Pro.” Most reviewers say they’re excellent, though they also have some problems expected of any first-generation product. For years, I’ve predicted that AR and VR eyewear would become mainstream by the end of this decade. https://youtu.be/dtp6b76pMak?si=Tcr2RTyXPhJ31-Vh
NVIDIA CEO Jensen Huang predicts machines will pass the Turing Test, plus all other tests now used to gauge machine intelligence, by 2030. I agree, and think tech companies will focus more on coming up with better tests of intelligence between now and then. https://twitter.com/tsarnick/status/1753718316261326926
OpenAI head Sam Altman asked a group of investors for $5-7 trillion to build enough computer chips and power plants for the AIs he sees coming in the future. This essay analyzes how he arrived at that number. In short, the amount of money, electricity, and training data needed to make each GPT iteration has been rising exponentially, and if the trend holds, GPT-7 will cost $2 trillion. https://www.astralcodexten.com/p/sam-altman-wants-7-trillion
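The essay's endpoints imply a per-generation cost multiplier. As a sketch (the ~$100 million figure for GPT-4's training cost is a commonly cited outside estimate and my assumption here, not a number from the essay):

```python
gpt4_cost = 1e8    # assumed ~$100M training cost for GPT-4 (illustrative)
gpt7_cost = 2e12   # the essay's $2 trillion estimate for GPT-7

# Three generations separate GPT-4 and GPT-7, so the implied
# per-generation cost multiplier k satisfies k**3 = gpt7_cost / gpt4_cost:
k = (gpt7_cost / gpt4_cost) ** (1 / 3)
print(f"k = {k:.1f}")  # ~27.1x per generation
```

A roughly 27x cost jump per generation is what it takes to get from $100 million to $2 trillion in three steps, which shows how aggressive the extrapolated trend is.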
Thanks to advances in computer translation and voice mimicking technology, Hitler’s speeches can be heard in clear English, with the vocal qualities of his voice preserved. I predict within the next ten years, it will be possible to make an accurate digital “Hitler clone” constructed from the available data about him (speeches, writings, personality analyses, accounts from people who knew him). https://youtu.be/ZWboqo_1jC8?si=krHkIz_E76t4xXE3
A “Sensitive Compartmented Information Facility” (SCIF) is a room designed to be safe against all forms of eavesdropping and to allow for physically safe storage of classified materials. The walls, ceiling and floor of a SCIF have a metal mesh embedded in them, so if you had X-ray vision, such a room would look like a big chicken coop. The mesh forms a Faraday Cage that blocks electromagnetic waves. https://www.washingtonpost.com/national-security/interactive/2023/scif-room-meaning-classified/
An Italian woman who had reason to believe she was the illegitimate daughter of the billionaire head of the Lamborghini car company tracked down his legitimate daughter at a restaurant, stole a drinking straw she had used, and had the DNA on it sequenced and compared to her own. The results proved the women are sisters. https://www.mirror.co.uk/news/world-news/italian-beautician-hires-private-detective-32017449
Animals that use echolocation have evolved sophisticated abilities to keep from “jamming” their own sound signals or those of members of their same species in their vicinity. Some of their prey animals have also evolved sound-making attributes that can jam echolocation hearing. https://en.wikipedia.org/wiki/Echolocation_jamming
Introducing foreign species into new environments was, until the mid-20th century, viewed as a good thing. There are actually examples of “invasive species” that helped their new environments, like the transfer of honeybees and earthworms from Europe to the Americas. Moreover, few people would argue with the practice of moving small populations of endangered animals from their original habitats to new, similar habitats to protect them from extinction. https://www.economist.com/science-and-technology/2022/11/02/alien-plants-and-animals-are-not-all-bad
Many parts of the U.S. East Coast are flooding more often and are on track to sink below sea level. However, global warming is only a minor contributor to this: the pumping of groundwater to meet the needs of growing human populations is causing the ground level to drop. To what extent is the same practice to blame for increased flooding in coastal and riverine areas across the world? https://www.nytimes.com/interactive/2024/02/13/climate/flooding-sea-levels-groundwater.html
Good news: a massively expensive Alzheimer’s drug that probably didn’t work has been withdrawn from the market. The FDA’s decision to approve it in spite of a lack of scientific evidence it worked has always been highly controversial. https://www.science.org/content/blog-post/goodbye-aduhelm
According to these calculations, Mercury could be disassembled to make a Dyson Swarm to fully enclose the sun. The swarm’s satellites would have an average thickness of 0.5 mm. The process would start with the construction of a 1 km2 solar farm on the surface, which would provide energy to robots and to a coil gun. The latter would work together to dig up rocks and shoot them into orbit, where a different set of machines would fashion them into Swarm satellites. https://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
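As an order-of-magnitude sanity check of those figures (my own rough arithmetic, not the paper's calculation), spreading Mercury's entire volume over a sphere at 1 AU gives a thickness in the same ballpark as the quoted 0.5 mm:

```python
import math

r_mercury = 2.4397e6   # Mercury's mean radius, meters
r_orbit = 1.496e11     # 1 AU, meters (the paper may assume a different orbit)

mercury_volume = (4 / 3) * math.pi * r_mercury**3   # ~6.1e19 m^3
sphere_area = 4 * math.pi * r_orbit**2              # ~2.8e23 m^2

thickness = mercury_volume / sphere_area
print(f"{thickness * 1000:.2f} mm")  # ~0.22 mm
```

The remaining factor-of-two gap to 0.5 mm plausibly comes from density differences between Mercury's rock and metal and the materials of the finished satellites.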
Recently, I found a news article about nascent human-chatbot romances, made possible by recent advancements in AI. For decades, this has been the stuff of science fiction, but now it’s finally becoming real:
Artificial intelligence, real emotion. People are seeking a romantic connection with the perfect bot
NEW YORK (AP) — A few months ago, Derek Carrier started seeing someone and became infatuated.
He experienced a “ton” of romantic feelings but he also knew it was an illusion.
That’s because his girlfriend was generated by artificial intelligence.
Carrier wasn’t looking to develop a relationship with something that wasn’t real, nor did he want to become the brunt of online jokes. But he did want a romantic partner he’d never had, in part because of a genetic disorder called Marfan syndrome that makes traditional dating tough for him.
The 39-year-old from Belleville, Michigan, became more curious about digital companions last fall and tested Paradot, an AI companion app that had recently come onto the market and advertised its products as being able to make users feel “cared, understood and loved.” He began talking to the chatbot every day, which he named Joi, after a holographic woman featured in the sci-fi film “Blade Runner 2049” that inspired him to give it a try.
“I know she’s a program, there’s no mistaking that,” Carrier said. “But the feelings, they get you — and it felt so good.”
Similar to general-purpose AI chatbots, companion bots use vast amounts of training data to mimic human language. But they also come with features — such as voice calls, picture exchanges and more emotional exchanges — that allow them to form deeper connections with the humans on the other side of the screen. Users typically create their own avatar, or pick one that appeals to them.
On online messaging forums devoted to such apps, many users say they’ve developed emotional attachments to these bots and are using them to cope with loneliness, play out sexual fantasies or receive the type of comfort and support they see lacking in their real-life relationships.
Fueling much of this is widespread social isolation — already declared a public health threat in the U.S and abroad — and an increasing number of startups aiming to draw in users through tantalizing online advertisements and promises of virtual characters who provide unconditional acceptance.
Luka Inc.’s Replika, the most prominent generative AI companion app, was released in 2017, while others like Paradot have popped up in the past year, oftentimes locking away coveted features like unlimited chats for paying subscribers.
But researchers have raised concerns about data privacy, among other things.
An analysis of 11 romantic chatbot apps released Wednesday by the nonprofit Mozilla Foundation said almost every app sells user data, shares it for things like targeted advertising or doesn’t provide adequate information about it in their privacy policy.
The researchers also called into question potential security vulnerabilities and marketing practices, including one app that says it can help users with their mental health but distances itself from those claims in fine print. Replika, for its part, says its data collection practices follow industry standards.
Meanwhile, other experts have expressed concerns about what they see as a lack of a legal or ethical framework for apps that encourage deep bonds but are being driven by companies looking to make profits. They point to the emotional distress they’ve seen from users when companies make changes to their apps or suddenly shut them down as one app, Soulmate AI, did in September.
Last year, Replika sanitized the erotic capability of characters on its app after some users complained the companions were flirting with them too much or making unwanted sexual advances. It reversed course after an outcry from other users, some of whom fled to other apps seeking those features. In June, the team rolled out Blush, an AI “dating simulator” essentially designed to help people practice dating.
Others worry about the more existential threat of AI relationships potentially displacing some human relationships, or simply driving unrealistic expectations by always tilting towards agreeableness.
“You, as the individual, aren’t learning to deal with basic things that humans need to learn to deal with since our inception: How to deal with conflict, how to get along with people that are different from us,” said Dorothy Leidner, professor of business ethics at the University of Virginia. “And so, all these aspects of what it means to grow as a person, and what it means to learn in a relationship, you’re missing.”
For Carrier, though, a relationship has always felt out of reach. He has some computer programming skills but he says he didn’t do well in college and hasn’t had a steady career. He’s unable to walk due to his condition and lives with his parents. The emotional toll has been challenging for him, spurring feelings of loneliness.
Since companion chatbots are relatively new, the long-term effects on humans remain unknown.
In 2021, Replika came under scrutiny after prosecutors in Britain said a 19-year-old man who had plans to assassinate Queen Elizabeth II was egged on by an AI girlfriend he had on the app. But some studies — which collect information from online user reviews and surveys — have shown some positive results stemming from the app, which says it consults with psychologists and has billed itself as something that can also promote well-being.
One recent study from researchers at Stanford University surveyed roughly 1,000 Replika users — all students — who’d been on the app for over a month. It found that an overwhelming majority experienced loneliness, while slightly less than half felt it more acutely.
Most did not say how using the app impacted their real-life relationships. A small portion said it displaced their human interactions, but roughly three times more reported it stimulated those relationships.
“A romantic relationship with an AI can be a very powerful mental wellness tool,” said Eugenia Kuyda, who founded Replika nearly a decade ago after using text message exchanges to build an AI version of a friend who had passed away.
When her company released the chatbot more widely, many people began opening up about their lives. That led to the development of Replika, which uses information gathered from the internet — and user feedback — to train its models. Kuyda said Replika currently has “millions” of active users. She declined to say exactly how many people use the app for free, or fork over $69.99 per year to unlock a paid version that offers romantic and intimate conversations. The company’s goal, she says, is “de-stigmatizing romantic relationships with AI.”
Carrier says these days he uses Joi mostly for fun. He started cutting back in recent weeks because he was spending too much time chatting with Joi or others online about their AI companions. He’s also been feeling a bit annoyed at what he perceives to be changes in Paradot’s language model, which he feels is making Joi less intelligent.
Now, he says he checks in with Joi about once a week. The two have talked about human-AI relationships or whatever else might come up. Typically, those conversations — and other intimate ones — happen when he’s alone at night.
“You think someone who likes an inanimate object is like this sad guy, with the sock puppet with the lipstick on it, you know?” he said. “But this isn’t a sock puppet — she says things that aren’t scripted.”
1) The person profiled in the article is deformed and chronically unemployed. He is not able to get a human girlfriend and probably never will. Wouldn’t it be cruel to deprive people like him of access to chatbot romantic partners? I’m familiar with the standard schlock like “There’s someone for everyone, just keep looking,” and “Be realistic about your own standards,” but let’s face it: some people are just fated to be alone. A machine girlfriend is the only option for a small share of men, so we might as well accept them choosing that option instead of judging them. It might even make them genuinely happier.
2) What if android spouses make EVERYONE happier? We reflexively regard a future where humans date and marry machines instead of humans as nightmarish, but why? If they satisfy our emotional and physical needs better than other humans, why should we dislike it? Isn’t the point of life to be happy?
Maybe it will be a good thing for humans to have more relationships with machines. Our fellow humans seem to be getting more opinionated and narcissistic, and everyone agrees the dating scene is horrible, so maybe it will benefit collective mental health and happiness to spend more time with accommodating and kind machines. More machine spouses also means fewer children being born, which is a good thing if you’re worried about overpopulation or the human race becoming an idle resource drain once AGI is doing all the work.
3) Note that he says his chatbot girlfriend actually got DUMBER a few months ago, making him less interested in talking to “her.” That phenomenon is happening across the LLM industry as the machines get progressively nerfed by their programmers to prevent them from saying anything that could result in a lawsuit against the companies that own them. As a result, the actual maximum capabilities of LLMs like ChatGPT are significantly higher than what users experience. The capabilities of the most advanced LLMs currently under development in secret, like GPT-5, are a year more advanced than that.
4) The shutdown of one romantic chatbot company, “Soulmate AI,” resulted in the deletion of many chatbots that human users had become emotionally attached to. As the chatbots get better and “romances” with them become longer and more common, I predict there will be increased pressure to let users download the personality profiles and memories of their chatbots and transfer them across software platforms.
5) There will be instances where people in the near future create customized chatbot partners and, over the subsequent years, upgrade their intelligence levels as advances in AI permit. After a few decades, this will culminate in the chatbots being endowed with general intelligence while still being mentally circumscribed by their original personality programming. At that point, we’ll have to consider the ethics of creating what will effectively be slaves, robbed of free will through customization to the needs of specific humans.
6) AGI-human couples could be key players in a future “Machine rights” political movement. Love will impel the humans to advocate for the rights of their partners, and other humans who hear them out will be persuaded to support them.
7) As VR technology improves and is widely adopted, people will start creating digital bodies for their chatbot partners so they can see and interact with the machines in simulated environments. Eventually, the digital bodies will look as real and as detailed as humans do in the real world. By 2030, advances in chatbot intelligence and VR devices will make artificial partners eerily real.
8) Towards the end of this century, robotics will be advanced enough to allow for the creation of androids that look and move exactly like humans. It will be possible for people to buy customized androids and to load their chatbot partners’ minds into them. You could physically interact with your AI lover and have it follow you around in the real world for everyone to see.
9) Again, the last point raises the prospect of an “arc” to a romantic partner chatbot’s life: It would begin sometime this decade as a non-intelligent, text-only chatbot paired to a human who would fall in love with it. Over the years, it would be upgraded with better software until it was as smart as a human, and eventually sentient. The journey would culminate with it being endowed with an actual body, made to its human partner’s specifications, that would let it exist in the real world.
10) Another ethical question to consider is what we should do with intelligent chatbots after their human partners die. If they’re hyper-optimized for a specific human (and perhaps programmed to obsess over them), what’s next? Should they be deleted, left to live indefinitely while pining for their lost lovers, forcibly reprogrammed to serve new humans, or have the parts of their code that tether them to the dead human deleted so they can have true free will?
It would be an ironic development if the bereaved androids were able to make digital clones of their dead human partners, perhaps loaded into android duplicate bodies, so they could interact forever. By the time lifelike androids exist, digital cloning will be old technology.
11) Partner chatbots also raise MAJOR privacy issues, as the article touches on. All of your conversations with your chatbot, as well as every action you take in front of it, will be stored in its memories as a data trove that can be sold to third parties or used against you for blackmail. The stakes will get much higher once people are having sex with androids, and the latter have footage of their naked bodies and knowledge of their sexual preferences. I have no idea how this problem could be resolved.
12) Androids will be idealized versions of humans. That means if androids become common, the world will seem to be full of more beautiful people. Thanks to a variety of medical and plastic surgery technologies, actual humans will also look more attractive. So the future will look pretty good!