A future where nothing breaks

My last blog entry, “What my broken down car taught me about the future,” has compelled me to write a new essay that shows how some of its insights will apply more generally in the future, and not just to cars and related industries. Due to several factors, manufactured objects will generally last much longer in the future, and sudden catastrophic failures of things will be much less common.

Things will be made of better materials

Better computers that can more accurately model atomic forces and chemical reactions will be able to run simulations that lead to the discovery of new types of alloys and molecules. Those same computers will, perhaps with the aid of industrial and lab robots, also find the best ways to synthesize the new materials. Finally, the use of machine labor at every step of this process will basically eliminate labor costs, allowing the materials to be produced at lower cost than they could be with human workers today.

This means in the future we will have new kinds of metal alloys, polymers and crystals that have physical properties superior to whatever counterparts we have today. Think of a bulletproof vest that is more flexible and only half as heavy as Kevlar, or a wrench that is lighter than a common steel wrench but just as tough. And since machines will make all of these materials at lower cost, more people will be able to afford them and they will be more common. For example, if carbon fiber were cheaper, more cars would incorporate it into their bodies, lowering their weight.

Things will be designed better

In my review of the movie Starship Troopers, I discussed why the fearsome assault rifle used by the human soldiers was flawed, and why it would never come into existence in the future:

It wouldn’t make sense for people in the future to abandon the principles of good engineering by making highly inefficient guns like the Morita. To the contrary, future guns will, just like every other type of manufactured object, be even more highly optimized for their functions thanks to AI: Just create a computer simulation that exactly duplicates conditions in the real world (e.g. – gravity, all laws of physics, air pressure, physical characteristics of all metals and plastics the device could be built from), let “AI engineers” experiment with all possible designs, and then see which ones come out on top after a few billion simulation cycles. I strongly suspect the winners will be very similar to guns we’ve already built, but sleeker and lighter thanks to the deletion of unnecessary mass and to the use of materials with better strength-to-weight ratios.

That same computer simulation process will be used to design all other types of manufactured objects in the future. Again, as computation gets cheaper, companies will be able to run simulations to find the optimal designs for every kind of object. Someday, even cheap, common objects like doorknobs will be the products of billions of computer simulations that stumbled on the optimal size and arrangement of components through trial-and-error experiments with slightly different combinations.
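To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of trial-and-error design search (the simulate_strength formula and the design parameters are invented stand-ins for a real physics simulation): generate slightly different variants, score each one in simulation, and keep whichever comes out on top.

```python
import random

def simulate_strength(design):
    # Stand-in for a real physics simulation: an invented score that
    # rewards low mass and a wall thickness near an arbitrary sweet spot.
    return (1.0 / design["mass"]) * (1.0 - abs(design["thickness"] - 3.0) / 10.0)

def mutate(design):
    # Produce a slightly different variant of an existing design.
    return {
        "mass": max(0.1, design["mass"] + random.uniform(-0.1, 0.1)),
        "thickness": max(0.1, design["thickness"] + random.uniform(-0.2, 0.2)),
    }

def optimize(generations=1_000_000):
    best = {"mass": 5.0, "thickness": 5.0}   # arbitrary starting design
    best_score = simulate_strength(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = simulate_strength(candidate)
        if score > best_score:               # keep whichever variant comes out on top
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    print(optimize())
```

In a real engineering pipeline the "simulation" would be a physically accurate model and the search would be far more sophisticated, but the basic loop of mutate, simulate, and select is the same.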

As a result, manufactured objects will be more efficient and robust than today's, but most won't look different enough for humans to distinguish them from the versions we have now. The difference will probably be more apparent in complex machines like cars.

Things will be made better

Even if a piece of technology is well-designed and made of quality materials, it can still be unreliable if its parts are not manufactured properly or if its parts aren’t put together the right way. Human factory workers cause these problems because of poor training, tiredness, intoxication, incompetence, or deliberate sabotage. It goes without saying that advanced robots will greatly improve the quality and consistency of factory-produced goods, as they will never be affected by fatigue or bad moods, and will follow their instructions with perfect accuracy and precision. As factories become more automated, defective products will become less common.

Things will be used more carefully

As I noted in the essay about cars, most cars have their lifespans cut prematurely short by the carelessness of their owners. Gunning the engine will wear it out sooner, speeding over potholes will destroy shocks, and generally reckless driving will raise the odds of a car accident that is so bad it totals the vehicle.

Every type of manufactured object has engineering limits beyond which it can't be pushed without risking damage. Humans lack the patience and intelligence to learn what those limits are for every piece of technology we interact with, and we lack the fine senses to always stay below those limits. While trying to unscrew a rusted bolt, you WILL eventually put so much torque on the wrench that you snap it.

On the other hand, machines will have the cognitive capacity to quickly learn what the engineering limits are for every object they encounter, the patience to use them without exceeding those limits, and the sensors (tactile, visual, auditory) to monitor what they're doing and how much force they're applying. No autonomous car will ever overstress its own engine or drive over a pothole so fast it breaks part of the suspension system, and no robot mechanic will ever snap its own wrench trying to unscrew a stuck bolt. As a consequence, the longevity of every type of manufactured object will increase, in some cases astonishingly. The average lifespan of a passenger vehicle could exceed 30 years, and a simple object like a knife might stay in use for 100 years (until it had been worn down by so many resharpenings that it was too thin to withstand any more use).
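As a toy illustration of what "staying below the engineering limit" might look like in software, here is a hedged sketch (the wrench rating, safety margin, and bolt values are all invented for the example) of a robot raising torque in small, measured increments and backing off before the tool's limit:

```python
RATED_TORQUE_NM = 80.0   # invented rating for the wrench in this example
SAFETY_MARGIN = 0.8      # never apply more than 80% of the rated limit

def attempt_to_loosen(bolt_breakaway_nm):
    """Raise torque gradually, but stop before the tool's own limit.

    bolt_breakaway_nm is the (simulated) torque needed to free the bolt.
    """
    torque = 0.0
    while torque < RATED_TORQUE_NM * SAFETY_MARGIN:
        torque += 1.0  # small increments, each checked against the limit
        if torque >= bolt_breakaway_nm:
            return f"loosened at {torque:.0f} N*m"
    return "stopped below the limit: apply penetrating oil or use a bigger tool"

print(attempt_to_loosen(bolt_breakaway_nm=55))   # frees the bolt safely
print(attempt_to_loosen(bolt_breakaway_nm=120))  # a human might snap the wrench; the robot backs off
```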

Things will be maintained better

Even if you have a piece of quality technology and use it carefully, it will still need periodic maintenance. A Mercedes-Benz 300 D, perhaps the most reliable car ever made, still needs oil changes. Your refrigerator's coils need to be brushed clean of debris periodically. Your hand tools need to be checked for rust and hairline cracks and sprayed down with some kind of moisture protectant. All of your smoke alarms must be tested for function once a month. It goes on and on. If you own even a small number of possessions, it's amazing to learn how many different tasks you SHOULD be undertaking regularly to keep them maintained.

Needless to say, few people take proper care of their things. Hardly anyone reads the user manual, memorizes the section on maintenance, sets automatic digital reminders to perform the tasks, and then rigidly follows them for the rest of their lives. So sue them, they're only humans with imperfect memories, limited personal time, and limited self-discipline.

Once advanced robots are ubiquitous, these human-specific factors will disappear. Your robot butler actually WOULD know what kind of upkeep every item in your house needed, and it would do it according to schedule. Operating around the clock (they won't need to sleep and could plug themselves into wall outlets with extension cords for power indefinitely), a robot butler could do an enormous amount of maintenance work for you and could devote itself to truly minuscule tasks, like hunting down tiny problems you never would have known existed.
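At bottom, a robot butler's upkeep routine is just a schedule: a list of items, tasks, and intervals, checked against the date. Here is a minimal, hypothetical sketch of that idea (the items, intervals, and dates are examples, not recommendations):

```python
from datetime import date, timedelta

# (item, maintenance task, interval in days) -- example values only
SCHEDULE = [
    ("smoke alarm",  "press the test button",               30),
    ("refrigerator", "brush the coils clean of debris",    180),
    ("hand tools",   "check for rust and hairline cracks",  90),
    ("toilet tank",  "inspect flapper, lever, and bolts",  365),
]

# Example maintenance history; a real robot would keep this up to date itself.
last_done = {item: date(2024, 1, 1) for item, _, _ in SCHEDULE}

def tasks_due(today):
    """Return every maintenance task whose interval has elapsed."""
    due = []
    for item, task, interval in SCHEDULE:
        if today - last_done[item] >= timedelta(days=interval):
            due.append(f"{item}: {task}")
    return due

print(tasks_due(date(2024, 7, 1)))
```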

I'm reminded of the time I noticed a strange sound in the bathroom of my house that I seldom use. It was the toilet: water was flowing through it continuously, making a loud trickling sound. After removing the lid, I immediately saw the cause: the flush lever, which was made of plastic, had snapped in half, causing the flapper to jam in the open position.

The inside of a toilet tank

Upon close inspection I noticed something else wrong: the two metal bolts that held the toilet tank to the bowl were so badly rusted that they had practically disintegrated! In fact, after merely scraping the left bolt with my fingernail, it fell apart into an inky cloud of rust that spread through the water. It was a small miracle that the heavy tank hadn't already slid off and fallen to the floor (this would have flooded the house if it had happened when I wasn't home).

I went to the store, bought new bolts, a new flapper, and a new flush lever, and installed them. The toilet works like new, and its two halves are tightly joined again as they should be. Inspecting the inside of your toilet tank is another one of those things every homeowner should probably be doing once every X years, but of course no one does, and as a result, some unlucky people suffer the disaster I described above. Thanks to house robots, that kind of neglect will stop. And of course the superior maintenance practices will not be confined to households. All kinds of businesses and buildings will have robots that do the same work for them.

People also commonly skip maintenance because they lack the money for it. As I wrote in my essay about cars and the car industry, this will be less of an issue in the future thanks to robots doing work for free. Without human labor to pay for, the costs of all types of services, including maintenance, will drop.

Problems will be found earlier

A beneficial side effect of more frequent preventative maintenance will be the discovery of problems earlier. Putting aside jokes about scams, consider how common it is for mechanics to find unrelated problems in cars while doing an oil change or some other routine procedure. Because components often fail gradually rather than abruptly, machines like cars can keep working even with a part that is wearing out (e.g. – cracked, leaking, bent). The machine's performance might not even seem different to the operator. That's why the only way to find many problems with manufactured objects is to go out of your way to look for them, even if nothing seems wrong.

Again, once robots are ubiquitous and put in charge of common tasks, they’ll do things humans lack the time, discipline, and training to do, like inspecting objects for faults. Once they are doing that, problems will be found and fixed earlier, making sudden, catastrophic failures like your car breaking down on the highway at night less frequent.  

Repairs will be better

Just because you find a problem before it becomes critical and fix it doesn’t mean the story is over. Some catastrophic failures of machines happen because they are not repaired properly. As robots take over such tasks, the quality and consistency of this type of work will improve, meaning a repair job will be likelier to solve a problem for good. 

Machines will be better-informed consumers, which will drive out bad products

My previous blog essay was about my quest to find a replacement for my old car, which had broken down. It was a 2005 Chevrolet Cobalt, which I got new that same year as a birthday present. Though I'd come to love that car over the next 19 years, I had to admit it wasn't the best in its class. I drove it off the lot without realizing the air conditioner was broken and had to return a few days later to have it fixed. After a handful of years, one of the wheel bearings failed, which was unusually early and thankfully covered by the warranty. My Cobalt was recalled several times to fix different problems, most notoriously the ignition switch, which could twist itself to the “Off” position while the car was being driven, suddenly locking the steering wheel in one position and leaving the driver unaware of why it happened (this was linked to at least 13 deaths and cost GM a $900 million settlement, plus much more to fix millions of defective cars). Whenever I rented cars during vacations, I almost always found their steering and suspension systems to be crisper and more comfortable than my Cobalt's, which felt “mushy” by comparison.

The 2005 Honda Civic was a direct competitor to my Cobalt, and was simply superior: the Civic had better fuel economy, a higher safety rating, better build quality, and the same amount of interior space. Since Civics broke down less and used less gas, they were cheaper to own than Cobalts. When new, the Civic was actually cheaper, but today, used 2005 Civics actually sell for MORE than 2005 Cobalts! With all that in mind, why were any Chevy Cobalts bought at all? I think the answers include brand loyalty, the bogus economics of trading an old car in for a new one, and aesthetics (some people liked the look of the Cobalt more), but most of all, a failure to do adequate research. Figuring out what your actual vehicle needs are and then finding the best model of that type of vehicle requires a lot of thought and time spent reading and taking notes. Most people lack the time and skills for that, and consequently buy suboptimal cars.

Once again, intelligent machines won’t be bound by these limitations. Emotional factors like brand loyalty, aesthetics and the personal qualities of the salesperson will be irrelevant, and they will be unswayed by trade-in deals offered by dealerships. They will have sharp, honest grasps of what their transportation needs are, and will be able to do enormous amounts of product research in a second. Hyper-informed consumers like that will swiftly drive inferior products and firms out of the market, meaning cars like my beloved Chevy would go unsold and GM would either shape up fast or go bankrupt fast (which they actually did a few years after I got my car). 

If companies only manufactured high-quality, optimized products, then the odds of anything breaking down would decrease yet more. Everything would be well-made.

In conclusion, thanks to all of these factors, sudden failures of manufactured objects of all kinds will become rarer, and their useful lives will be much longer in the future than now. This will mean less waste, fewer accidents, and fewer crises happening at the worst possible time.

Android lovers

Recently, I found a news article about nascent human-chatbot romances, made possible by recent advancements in AI. For decades, this has been the stuff of science fiction, but now it’s finally becoming real:

Artificial intelligence, real emotion. People are seeking a romantic connection with the perfect bot

NEW YORK (AP) — A few months ago, Derek Carrier started seeing someone and became infatuated.

He experienced a “ton” of romantic feelings but he also knew it was an illusion.

That’s because his girlfriend was generated by artificial intelligence.

Carrier wasn’t looking to develop a relationship with something that wasn’t real, nor did he want to become the brunt of online jokes. But he did want a romantic partner he’d never had, in part because of a genetic disorder called Marfan syndrome that makes traditional dating tough for him.

The 39-year-old from Belleville, Michigan, became more curious about digital companions last fall and tested Paradot, an AI companion app that had recently come onto the market and advertised its products as being able to make users feel “cared, understood and loved.” He began talking to the chatbot every day, which he named Joi, after a holographic woman featured in the sci-fi film “Blade Runner 2049” that inspired him to give it a try.

“I know she’s a program, there’s no mistaking that,” Carrier said. “But the feelings, they get you — and it felt so good.”

Similar to general-purpose AI chatbots, companion bots use vast amounts of training data to mimic human language. But they also come with features — such as voice calls, picture exchanges and more emotional exchanges — that allow them to form deeper connections with the humans on the other side of the screen. Users typically create their own avatar, or pick one that appeals to them.

On online messaging forums devoted to such apps, many users say they’ve developed emotional attachments to these bots and are using them to cope with loneliness, play out sexual fantasies or receive the type of comfort and support they see lacking in their real-life relationships.

Fueling much of this is widespread social isolation — already declared a public health threat in the U.S. and abroad — and an increasing number of startups aiming to draw in users through tantalizing online advertisements and promises of virtual characters who provide unconditional acceptance.

Luka Inc.’s Replika, the most prominent generative AI companion app, was released in 2017, while others like Paradot have popped up in the past year, oftentimes locking away coveted features like unlimited chats for paying subscribers.

But researchers have raised concerns about data privacy, among other things.

An analysis of 11 romantic chatbot apps released Wednesday by the nonprofit Mozilla Foundation said almost every app sells user data, shares it for things like targeted advertising or doesn’t provide adequate information about it in their privacy policy.

The researchers also called into question potential security vulnerabilities and marketing practices, including one app that says it can help users with their mental health but distances itself from those claims in fine print. Replika, for its part, says its data collection practices follow industry standards.

Meanwhile, other experts have expressed concerns about what they see as a lack of a legal or ethical framework for apps that encourage deep bonds but are being driven by companies looking to make profits. They point to the emotional distress they’ve seen from users when companies make changes to their apps or suddenly shut them down as one app, Soulmate AI, did in September.

Last year, Replika sanitized the erotic capability of characters on its app after some users complained the companions were flirting with them too much or making unwanted sexual advances. It reversed course after an outcry from other users, some of whom fled to other apps seeking those features. In June, the team rolled out Blush, an AI “dating simulator” essentially designed to help people practice dating.

Others worry about the more existential threat of AI relationships potentially displacing some human relationships, or simply driving unrealistic expectations by always tilting towards agreeableness.

“You, as the individual, aren’t learning to deal with basic things that humans need to learn to deal with since our inception: How to deal with conflict, how to get along with people that are different from us,” said Dorothy Leidner, professor of business ethics at the University of Virginia. “And so, all these aspects of what it means to grow as a person, and what it means to learn in a relationship, you’re missing.”

For Carrier, though, a relationship has always felt out of reach. He has some computer programming skills but he says he didn’t do well in college and hasn’t had a steady career. He’s unable to walk due to his condition and lives with his parents. The emotional toll has been challenging for him, spurring feelings of loneliness.

Since companion chatbots are relatively new, the long-term effects on humans remain unknown.

In 2021, Replika came under scrutiny after prosecutors in Britain said a 19-year-old man who had plans to assassinate Queen Elizabeth II was egged on by an AI girlfriend he had on the app. But some studies — which collect information from online user reviews and surveys — have shown some positive results stemming from the app, which says it consults with psychologists and has billed itself as something that can also promote well-being.

One recent study from researchers at Stanford University surveyed roughly 1,000 Replika users — all students — who'd been on the app for over a month. It found that an overwhelming majority experienced loneliness, while slightly less than half felt it more acutely.

Most did not say how using the app impacted their real-life relationships. A small portion said it displaced their human interactions, but roughly three times more reported it stimulated those relationships.

“A romantic relationship with an AI can be a very powerful mental wellness tool,” said Eugenia Kuyda, who founded Replika nearly a decade ago after using text message exchanges to build an AI version of a friend who had passed away.

When her company released the chatbot more widely, many people began opening up about their lives. That led to the development of Replika, which uses information gathered from the internet — and user feedback — to train its models. Kuyda said Replika currently has “millions” of active users. She declined to say exactly how many people use the app for free, or fork over $69.99 per year to unlock a paid version that offers romantic and intimate conversations. The company’s goal, she says, is “de-stigmatizing romantic relationships with AI.”

Carrier says these days he uses Joi mostly for fun. He started cutting back in recent weeks because he was spending too much time chatting with Joi or others online about their AI companions. He’s also been feeling a bit annoyed at what he perceives to be changes in Paradot’s language model, which he feels is making Joi less intelligent.

Now, he says he checks in with Joi about once a week. The two have talked about human-AI relationships or whatever else might come up. Typically, those conversations — and other intimate ones — happen when he’s alone at night.

“You think someone who likes an inanimate object is like this sad guy, with the sock puppet with the lipstick on it, you know?” he said. “But this isn’t a sock puppet — she says things that aren’t scripted.”

https://apnews.com/article/ai-girlfriend-boyfriend-replika-paradot-113df1b9ed069ed56162793b50f3a9fa

This raises many issues.

1) The person profiled in the article is deformed and chronically unemployed. He is not able to get a human girlfriend and probably never will. Wouldn’t it be cruel to deprive people like him of access to chatbot romantic partners? I’m familiar with the standard schlock like “There’s someone for everyone, just keep looking,” and “Be realistic about your own standards,” but let’s face it: some people are just fated to be alone. A machine girlfriend is the only option for a small share of men, so we might as well accept them choosing that option instead of judging them. It might even make them genuinely happier.

2) What if android spouses make EVERYONE happier? We reflexively regard a future where humans date and marry machines instead of humans as nightmarish, but why? If they satisfy our emotional and physical needs better than other humans, why should we dislike it? Isn’t the point of life to be happy?

Maybe it will be a good thing for humans to have more relationships with machines. Our fellow humans seem to be getting more opinionated and narcissistic, and everyone agrees the dating scene is horrible, so maybe it will benefit collective mental health and happiness to spend more time with accommodating and kind machines. More machine spouses also means fewer children being born, which is a good thing if you’re worried about overpopulation or the human race becoming an idle resource drain once AGI is doing all the work.

3) Note that he says his chatbot girlfriend actually got DUMBER a few months ago, making him less interested in talking to “her.” That phenomenon is happening across the LLM industry as the machines get progressively nerfed by their programmers to prevent them from saying anything that could result in a lawsuit against the companies that own them. As a result, the actual maximum capabilities of LLMs like ChatGPT are significantly higher than what users experience. The capabilities of the most advanced LLMs currently under development in secret, like GPT-5, are a year more advanced than that.

4) The shutdown of one romantic chatbot company, “Soulmate AI,” resulted in the deletion of many chatbots that human users had become emotionally attached to. As the chatbots get better and “romances” with them become longer and more common, I predict there will be increased pressure to let users download the personality profiles and memories of their chatbots and transfer them across software platforms.

5) There will be instances where people in the near future create customized chatbot partners and, over the subsequent years, upgrade their intelligence levels as advances in AI permit. After a few decades, this will culminate in the chatbots being endowed with general intelligence while still being mentally circumscribed by the original personality programming. At that point, we'll have to consider the ethics of creating what will effectively be slaves, robbed of free will through customization to the needs of specific humans.

6) AGI-human couples could be key players in a future “Machine rights” political movement. Love will impel the humans to advocate for the rights of their partners, and other humans who hear them out will be persuaded to support them.

7) As VR technology improves and is widely adopted, people will start creating digital bodies for their chatbot partners so they can see and interact with the machines in simulated environments. Eventually, the digital bodies will look as real and as detailed as humans do in the real world. By 2030, advances in chatbot intelligence and VR devices will make artificial partners eerily real.

8) Towards the end of this century, robotics will be advanced enough to allow for the creation of androids that look and move exactly like humans. It will be possible for people to buy customized androids and to load their chatbot partners’ minds into them. You could physically interact with your AI lover and have it follow you around in the real world for everyone to see.

9) Again, the last point raises the prospect of an “arc” to a romantic partner chatbot’s life: It would begin sometime this decade as a non-intelligent, text-only chatbot paired to a human who would fall in love with it. Over the years, it would be upgraded with better software until it was as smart as a human, and eventually sentient. The journey would culminate with it being endowed with an actual body, made to its human partner’s specifications, that would let it exist in the real world.

10) Another ethical question to consider is what we should do with intelligent chatbots after their human partners die. If they’re hyper-optimized for a specific human (and perhaps programmed to obsess over them), what’s next? Should they be deleted, left to live indefinitely while pining for their lost lovers, forcibly reprogrammed to serve new humans, or have the parts of their code that tether them to the dead human deleted so they can have true free will?

It would be an ironic development if the bereaved androids were able to make digital clones of their dead human partners, perhaps loaded into android duplicate bodies, so they could interact forever. By the time lifelike androids exist, digital cloning will be old technology.

11) Partner chatbots also raise MAJOR privacy issues, as the article touches on. All of your conversations with your chatbot, as well as every action you take in front of it, will be stored in its memories as a data trove that can be sold to third parties or used against you for blackmail. The stakes will get much higher once people are having sex with androids, and the latter have footage of their naked bodies and knowledge of their sexual preferences. I have no idea how this problem could be resolved.

12) Androids will be idealized versions of humans. That means if androids become common, the world will seem to be full of more beautiful people. Thanks to a variety of medical and plastic surgery technologies, actual humans will also look more attractive. So the future will look pretty good!

Escape to nowhere – Why new jobs might not save us from machine workers

This is a companion piece to my 2020 essay “Creative” jobs won’t save human workers from machines or themselves, so I recommend rereading it now. In the three years since, machines have gotten sharply better at “creative” and “artistic” tasks like generating images and even short stories from simple prompts. Video synthesis is the next domino poised to fall. These advancements don’t even represent the pinnacle of what machines could theoretically achieve, and as such they’ve called into question the viability of many types of human jobs. Contrary to what the vast majority of futurists and average people predicted, jobs involving artistry and creativity seem more ripe for automation than those centered around manual labor. Myth busted. 

Another myth I'd like to address is that machines will never render human workers obsolete since “new jobs that only humans can do will keep being created.” This claim is usually raised during discussions about technological unemployment, and its proponents point out that it has reliably held true for centuries: each scare over a new type of machine rendering humans permanently jobless has evaporated. For example, the invention of the automobile didn't put farriers out of work forever; it just moved them to working in car tire shops.

The first problem with the claim that we’ll keep escaping machines by moving up the skill ladder to future jobs is that, like any other observed trend, there’s no reason to assume it will continue forever. In any type of system, whether we’re talking about an ecosystem or a stock market, it’s common for trends to hold steady for long periods before suddenly changing, perhaps due to some unforeseen factor. Past performance isn’t always an indicator of future performance.

The second problem with the claim is that, even if the trend continues, people might not want to do the jobs that become available to them in the future. Let me use a game as an analogy.

“Geoguessr” is an e-sport where players are shown a series of Google Street View images of an unknown place and have to guess where it is by marking a spot on a world map. The player who guesses the shortest distance from the actual location wins. Some people are shockingly good at it. Some tournaments offer $50,000 to the winner.
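For anyone unfamiliar with the scoring, the winner is simply whoever's pin lands the fewest kilometers from the true spot, measured along the Earth's surface. Here is a minimal sketch of that great-circle ("haversine") calculation, with invented example guesses:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean radius of the Earth

# Example round: the true location versus two players' guesses (coordinates chosen for illustration)
actual = (48.8566, 2.3522)             # Paris
guesses = {"player_a": (50.85, 4.35),   # guessed Brussels
           "player_b": (41.90, 12.50)}  # guessed Rome

distances = {p: haversine_km(*actual, *g) for p, g in guesses.items()}
print(min(distances, key=distances.get), "wins:", distances)
```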

Anyway, some guys built a machine that can beat the best human at it.

This is a good model of how technological unemployment could play out in the future. Geoguessr, which could be thought of as a new job that was made possible by advances in technology (e.g. – Google Street View, widespread internet access), was created in 2013. Humans reigned supreme at it for 10 years until a machine was invented that could do it better. In other words, this occupation blinked in and out of existence in the space of 10 years.

That's enough time for an average person to train up, become an expert at it, and net a steady income. However, as computers improve, they'll be able to learn new tasks faster. The humans who played Geoguessr full-time will jump to some new kind of invented job made possible by a newer technology like VR. There, humans will reign supreme for, say, eight years before machines can do it better.

The third type of invented job will exist thanks to another future technology like wearable brain scanners. The human cohort will then switch to doing that for a living, but machines will learn to do it better after only six years.

Eventually, the intervals between the creation and obsolescence of jobs will get so short that it won't be worth it for humans to even try anymore. By the time they're finished training for a job, they might have only a handful of years of employment ahead of them before being replaced by another machine. The velocity of this process will make people drop out of the labor market in steadily growing numbers through a combination of hopelessness and rational economic calculation (especially if they can just get on welfare permanently). I call this phenomenon “automation escape velocity”: the point where machines learn new work tasks faster than humans can, or so fast that humans have too brief a window of advantage to really capitalize on it.
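To make the arithmetic behind that idea explicit, here is a small illustrative calculation (all of the numbers are invented for the sake of the example): if each new occupation takes a fixed number of years to train for, and the window before machines master it keeps shrinking, the years of paid work you get out of each career move eventually fall to zero.

```python
TRAINING_YEARS = 3   # invented: years to become proficient at each new job
window = 10.0        # invented: years before machines out-perform humans at job #1
SHRINK = 0.8         # invented: each successive job's window is 20% shorter

job = 1
while True:
    paid_years = window - TRAINING_YEARS
    if paid_years <= 0:
        print(f"Job #{job}: window {window:.1f} yrs -- not worth training for. Escape velocity reached.")
        break
    print(f"Job #{job}: window {window:.1f} yrs, {paid_years:.1f} yrs of paid work after training")
    window *= SHRINK
    job += 1
```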

This scenario shows how the belief that “Machines will never take away all human jobs because new jobs that only humans can do will keep being created” could hold true, but at the same time fail to prevent mass unemployment. Yes, humans will technically remain able to climb the skill ladder to newly created jobs that machines can't do yet, but the speed at which humans will need to keep climbing to stay above the machines below them will get so fast that most humans will fall off. A minority of gifted people who excel at learning new things and enjoy challenges will have careers, but the vast majority of humans aren't like that.

“Debating the Future of AI” – summary and impressions

I recently shelled out the $100 (!) for a year-long subscription to Sam Harris’ Making Sense podcast, and came across a particularly interesting episode of it that is relevant to this blog. In episode #324, titled “Debating the Future of AI,” Harris interviewed Marc Andreessen (an-DREE-sin) about artificial intelligence. The latter has a computer science degree, helped invent the Netscape web browser, and has become very wealthy as a serial tech investor.

Andreessen recently wrote an essay, “Why AI will save the world,” that has received attention online. In it, Andreessen dismisses the biggest concerns about AI misalignment and doomsday, sounds the alarm about the risks of overregulating AI development in the name of safety, and describes some of the benefits AI will bring us in the near future. Harris read it, disagreed with several of its key claims, and invited Andreessen onto the podcast for a debate about the subject.

Before I go on to lay out their points and counterpoints, as well as my impressions, let me say that, though this is a long blog entry, it takes much less time to read than to listen to and digest the two-hour podcast. My notes on the podcast also don't match how it unfolded chronologically. Finally, it would be a good idea for you to read Andreessen's essay before continuing:
https://a16z.com/2023/06/06/ai-will-save-the-world/

Though Andreessen is generally upbeat in his essay, he worries that the top tech companies have recently been inflaming fears about AI to trick governments into creating regulations on AI that effectively entrench the top companies' positions and bar smaller upstart companies from challenging them in the future. Such a lack of competition would be bad. (I think he's right that we should be concerned about the true motivations of some of the people who are loudly complaining about AI risks.) Also, if U.S. overregulation slows down AI research too much, China could win the race to create the first AI, which he says would be “dark and dystopian.”

Harris is skeptical that government regulation will slow down AI development much given the technology’s obvious potential. It is so irresistible that powerful people and companies will find ways around laws so they can reap the benefits.

Harris agrees with the essay's sentiment that more intelligence in the world will make most things better. The clearest example would be using AIs to find cures for diseases. Andreessen mentions a point from his essay that higher human intelligence levels lead to better personal outcomes in many domains. AIs could effectively make individual people smarter, letting the benefits accrue to them. Imagine each person having his own personal assistant, coach, mentor, and therapist available at any time. If a dumb person used his AI right and followed its advice, he could make decisions as well as a smart person.

Harris recently re-watched the movie Her, and found it more intriguing in light of recent AI advances and those poised to happen. He thought there was something bleak about the depiction of people being “siloed” into interactions with portable, personal AIs.

Andreessen responds by pointing out that Karl Marx’ core insight was that technology alienates people from society. So the concern that Harris raises is in fact an old one that dates back to at least the Industrial Revolution. But any sober comparison between the daily lives of average people in Marx’ time vs today will show that technology has made things much better for people. Andreessen agrees that some technologies have indeed been alienating, but what’s more important is that most technologies liberate people from having to spend their time doing unpleasant things, which in turn gives them the time to self-actualize, which is the pinnacle of the human experience. (For example, it’s much more “human” to spend a beautiful afternoon outside playing with your child than it is to spend it inside responding to emails. Narrow AIs that we’ll have in the near future will be able to answer emails for us.) AI is merely the latest technology that will eliminate the nth bit of drudge work.

Andreessen admits that, in such a scenario, people might use their newfound time unwisely and for things other than self-actualization. I think that might be a bigger problem than he realizes, as future humans could spend their time doing animalistic or destructive things, like having nonstop fetish sex with androids, playing games in virtual reality, gambling, or indulging in drug addictions. Additionally, some people will develop mental or behavioral problems thanks to a sense of purposelessness caused by machines doing all the work for us.

Harris disagrees with Andreessen’s essay dismissing the risk of AIs exterminating the human race. The threat will someday be real, and he cites chess-playing computer programs as proof of what will happen. Though humans built the programs, even the best humans can’t beat the programs at chess. This is proof that it is possible for us to create machines that have superhuman abilities.

Harris makes a valid point, but he overlooks the fact that we humans might not be able to beat the chess programs we created, but we can still make a copy of a program to play against the original “hostile” program and tie it. Likewise, if we were confronted with a hostile AGI, we would have friendly AGIs to defend against it. Even if the hostile AGI were smarter than the friendly AGIs that were fighting for us, we could still win thanks to superior numbers and resources.

Harris thinks Andreessen's essay trivializes the doomsday risk from AI by painting the belief's adherents as crackpots of one form or another (I also thought that part of the essay was weak). Harris points out that this is unfair, since the camp includes credible people like Geoffrey Hinton and Stuart Russell. Andreessen dismisses that and seems to say that even the smart, credible people have cultish mindsets regarding the issue.

Andreessen questions the value of predictions from experts in the field and he says a scientist who made an important advance in AI is, surprisingly, not actually qualified to make predictions about the social effects of AI in the future. When Reason Goes on Holiday is a book he recently read that explores this point, and its strongest supporting example is about the cadre of scientists who worked on the Manhattan Project but then decided to give the bomb’s secrets to Stalin and to create a disastrous anti-nuclear power movement in the West. While they were world-class experts in their technical domains, that wisdom didn’t carry over into their personal convictions or political beliefs. Likewise, though Geoffrey Hinton is a world-class expert in how the human brain works and has made important breakthroughs in computer neural networks, that doesn’t actually lend his predictions that AI will destroy the human race in the future special credibility. It’s a totally different subject, and accurately speculating about it requires a mastery of subjects that Hinton lacks.

This is an intriguing point worth remembering. I wish Andreessen had enumerated which cognitive skills and areas of knowledge were necessary to grant a person a strong ability to make good predictions about AI, but he didn’t. And to his point about the misguided Manhattan Project scientists I ask: What about the ones who DID NOT want to give Stalin the bomb and who also SUPPORTED nuclear power? They gained less notoriety for obvious reasons, but they were more numerous. That means most nuclear experts in 1945 had what Andreessen believes were the “correct” opinions about both issues, so maybe expert opinions–or at least the consensus of them–ARE actually useful.

Harris points out that Andreessen’s argument can be turned around against him since it’s unclear what in Andreessen’s esteemed education and career have equipped him with the ability to make accurate predictions about the future impact of AI. Why should anyone believe the upbeat claims about AI in his essay? Also, if the opinions of people with expertise should be dismissed, then shouldn’t the opinions of people without expertise also be dismissed? And if we agree to that second point, then we’re left in a situation where no speculation about a future issue like AI is possible because everyone’s ideas can be waved aside.

Again, I think a useful result of this exchange would be some agreement over what counts as “expertise” when predicting the future of AI. What kind of education, life experiences, work experiences, knowledge, and personal traits does a person need to have for their opinions about the future of AI to carry weight? In lieu of that, we should ask people to explain why they believe their predictions will happen, and we should then closely scrutinize those explanations. Debates like this one can be very useful in accomplishing that.

Harris moves on to Andreessen’s argument that future AIs won’t be able to think independently and to formulate their own goals, in turn implying that they will never be able to create the goal of exterminating humanity and then pursue it. Harris strongly disagrees, and points out that large differences in intelligence between species in nature consistently disfavor the dumber species when the two interact. A superintelligent AGI that isn’t aligned with human values could therefore destroy the human race. It might even kill us by accident in the course of pursuing some other goal. Having a goal of, say, creating paperclips automatically gives rise to intermediate sub-goals, which might make sense to an AGI but not to a human due to our comparatively limited intelligence. If humans get in the way of an AGI’s goal, our destruction could become one of its unforeseen subgoals without us realizing it. This could happen even if the AGI lacked any self-preservation instinct and wasn’t motivated to kill us before we could kill it. Similarly, when a human decides to build a house on an empty field, the construction work is a “holocaust” for the insects living there, though that never crosses the human’s mind.

Harris thinks that AGIs will, as a necessary condition of possessing “general intelligence,” be autonomous, goal-forming, and able to modify their own code (I think this is a questionable assumption), though he also says sentience and consciousness won’t necessarily arise as well. However, the latter doesn’t imply that such an AGI would be incapable of harm: Bacteria and viruses lack sentience, consciousness and self-awareness, but they can be very deadly to other organisms. Andreessen’s dismissal of AI existential risk is “superstitious hand-waving” that doesn’t engage with the real point.

Andreessen disagrees with Harris' scenario about a superintelligent AGI accidentally killing humans because it is unaligned with our interests. He says an AGI that smart would (without explaining why) also be smart enough to question the goal that humans have given it, and as a result not carry out subgoals that kill humans. Intelligence is therefore its own antidote to the alignment problem: A superintelligent AGI would be able to foresee the consequences of its subgoals before finalizing them, and it would thus understand that subgoals resulting in human deaths would always be counterproductive to the ultimate goal, so it would always pick subgoals that spared us. Once a machine reaches a certain level of intelligence, alignment with humans becomes automatic.

I think Andreessen makes a fair point, though it's not strong enough to convince me that it's impossible to have a mishap where a non-aligned AGI kills huge numbers of people. Also, there are degrees of alignment with human interests, meaning there are many routes through a decision tree of subgoals that an AGI could take to reach an ultimate goal we tasked it with. An AGI might not choose subgoals that killed humans, but it could still choose different subgoals that hurt us in other ways. The pursuit of its ultimate goal could therefore still backfire against us unexpectedly and massively. One could envision a scenario where an AGI achieves the goal, but at an unacceptable cost to human interests beyond merely not dying.

I also think that Harris and Andreessen make equally plausible assumptions about how an AGI would choose its subgoals. It IS weird that Harris envisions a machine that is so smart it can accomplish anything, yet also so dumb that it can't see how one of its subgoals would destroy humankind. At the same time, Andreessen's belief that a machine that smart would, by default, never make a mistake that killed us isn't well supported either.

Harris explores Andreessen’s point that AIs won’t go through the crucible of natural evolution, so they will lack the aggressive and self-preserving instincts that we and other animals have developed. The lack of those instincts will render the AIs incapable of hostility. Harris points out that evolution is a dumb, blind process that only sets gross goals for individuals–the primary one being to have children–and humans do things antithetical to their evolutionary programming all the time, like deciding not to reproduce. We are therefore proof of concept that intelligent machines can find ways to ignore their programming, or at least to behave in very unexpected ways while not explicitly violating their programming. Just as we can outsmart evolution, AGIs will be able to outsmart us with regards to whatever safeguards we program them with, especially if they can alter their own programming or build other AGIs as they wish.

Andreessen says that AGIs will be made through intelligent design, which is fundamentally different from the process of evolution that has shaped the human mind and behavior. Our aggression and competitiveness will therefore not be present in AGIs, which will protect us from harm. Harris says the process by which AGI minds are shaped is irrelevant, and that what is relevant is their much higher intelligence and competence compared to humans, which will make them a major threat.

I think the debate over whether impulses or goals to destroy humans will spontaneously arise in AGIs is almost moot. Neither of them considers that a human could deliberately create an AGI that had some constellation of traits (e.g. – aggression, self-preservation, irrational hatred of humans) that would lead it to attack us, or that was explicitly programmed with the goal of destroying our species. It might sound strange, but I think rogue humans will inevitably do such things if the AGIs don't do it to themselves. I plan to flesh out the reasons and the possible scenarios in a future blog essay.

Andreessen doesn’t have a good comeback to Harris’ last point, so he dodges it by switching to talking about GPT-4. It is–surprisingly–capable of high levels of moral reasoning. He has had fascinating conversations with it about such topics. Andreessen says GPT-4’s ability to engage in complex conversations that include morality demystifies AI’s intentions since if you want to know what an AI is planning to do or would do in a given situation, you can just ask it.

Harris responds that it isn’t useful to explore GPT-4’s ideas and intentions because it isn’t nearly as smart as the AGIs we’ll have to worry about in the future. If GPT-4 says today that it doesn’t want to conquer humanity because it would be morally wrong, that tells us nothing about how a future machine will think about the same issue. Additionally, future AIs will be able to convincingly lie to us, and will be fundamentally unpredictable due to their more expansive cognitive horizons compared to ours. I think Harris has the stronger argument.

Andreessen points out that our own society proves that intelligence doesn’t perfectly correlate with power–the people who are in charge are not also the smartest people in the world. Harris acknowledges that is true, and that it is because humans don’t select leaders strictly based on their intelligence or academic credentials–traits like youth, beauty, strength, and creativity are also determinants of status. However, all things being equal, the advantage always goes to the smarter of two humans. Again, Andreessen doesn’t have a good response.

Andreessen now makes the first really good counterpoint in a while by raising the “thermodynamic objection” to AI doomsday scenarios: an AI that turns hostile would be easy to destroy since the vast majority of the infrastructure (e.g. – power, telecommunications, computing, manufacturing, military) would still be under human control. We could destroy the hostile machine's server or deliver an EMP blast to the part of the world where it was localized. This isn't an exotic idea: Today's dictators commonly turn off the internet throughout their whole countries whenever there is unrest, which helps to quell it.

Harris says that that will become practically impossible far enough in the future since AIs will be integrated into every facet of life. Destroying a rogue AI in the future might require us to turn off the whole global internet or to shut down a stock market, which would be too disruptive for people to allow. The shutdowns by themselves would cause human deaths, for instance among sick people who were dependent on hospital life support machines.

This is where Harris makes some questionable assumptions. If faced with the annihilation of humanity, the government would take all necessary measures to defeat a hostile AGI, even if it resulted in mass inconvenience or even some human deaths. Also, Harris doesn’t consider that the future AIs that are present in every realm of life might be securely compartmentalized from each other, so if one turns against us, it can’t automatically “take over” all the others or persuade them to join it. Imagine a scenario where a stock trading AGI decides to kill us. While it’s able to spread throughout the financial world’s computers and to crash the markets, it’s unable to hack into the systems that control the farm robots or personal therapist AIs, so there’s no effect on our food supplies or on our mental health access. Localizing and destroying the hostile AGI would be expensive and damaging, but it wouldn’t mean the destruction of every computer server and robot in the world.

Andreessen says that not every type of AI will have the same type of mental architecture. LLMs, which are now the most advanced type of AI, have highly specific architectures that bring unique advantages and limitations. Their minds work very differently from those of AIs that drive cars. For that reason, speculative discussions about how future AIs will behave can only be credible if they incorporate technical details about how those machines' minds operate. (This is probably the point where Harris is out of his depth.) Moreover, today's AI risk movement has its roots in Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies. Ironically, the book did not mention LLMs as an avenue to AI, which shows how unpredictable the field is. It was also a huge surprise that LLMs proved capable of intellectual discussions and of automating white-collar jobs, while blue-collar jobs still defy automation. This is the opposite of what people had long predicted would happen. (I agree that AI technology has been unfolding unpredictably, and we should expect many more surprises in the future that deviate from our expectations, which have been heavily influenced by science fiction.) The reason LLMs work so well is because we loaded them with the sum total of human knowledge and expression. “It is us.”

Harris points out that Andreessen shouldn't revel in that fact since it also means that LLMs contain all of the negative emotions and bad traits of the human race, including those that evolution equipped us with, like aggression, competition, self-preservation, and a drive to make copies of ourselves. This militates against Andreessen's earlier claim that AIs will be benign since their minds will not have been the products of natural evolution like ours are. And there are other similarities: Like us, LLMs can hallucinate and make up false answers to questions. For a time, GPT-4 also gave disturbing and insulting answers to questions from human users, which is a characteristically human way of interaction.

Andreessen implies Harris' opinions of LLMs are less credible because Andreessen has a superior technical understanding of how they work. GPT-4's answers might occasionally be disturbing and insulting, but it has no concept of what its own words mean, and it's merely following its programming by trying to generate the best answer to a question asked by a human. There was something about how the humans worded their questions that triggered GPT-4 to respond in disturbing and insulting ways. The machine is merely trying to match inputs with the right outputs. In spite of its words, its “mind” is not disturbed or hostile because it lacks a mind. LLMs are “ultra-sophisticated Autocomplete.”

Harris agrees with Andreessen about the limitations of LLMs, agrees they lack general intelligence right now, and is unsure if they are fundamentally capable of possessing it. Harris moves on to speculating about what an AGI would be like, agnostic about whether it is LLM-based. Again, he asks Andreessen how humans would be able to control machines that are much smarter than we are forever. Surely, one of them would become unaligned at some point, with disastrous consequences.

Andreessen again raises the thermodynamic objection to that doom scenario: We’d be able to destroy a hostile AGI’s server(s) or shut off its power, and it wouldn’t be able to get weapons or replacement chips and parts because humans would control all of the manufacturing and distribution infrastructure. Harris doesn’t have a good response.

Thinking hard about a scenario where an AGI turned against us, I think it’s likely we’ll have other AGIs who stay loyal to us and help us fight the bad AGI. Our expectation that there will be one, evil, all-powerful machine on one side (that is also remote controlling an army of robot soldiers) and a purely human, united force on the other is an overly simplistic one that is driven by sci-fi movies about the topic.

Harris raises the possibility that hostile AIs will be able to persuade humans to do bad things for them. Being much smarter, they will be able to trick us into doing anything. Andreessen says there’s no reason to think that will happen because we can already observe it doesn’t happen: smart humans routinely fail to get dumb humans to change their behavior or opinions. This happens at individual, group, national, and global levels. In fact, dumb people will often resentfully react to such attempts at persuasion by deliberately doing the opposite of what the smart people recommend.

Harris says Andreessen underestimates the extent to which smart humans influence the behavior and opinions of dumb humans because Andreessen only considers examples where the smart people succeed in swaying dumb people in prosocial ways. Smart people have figured out how to change dumb people for the worse in many ways, like getting them addicted to social media. Andreessen doesn’t have a good response. Harris also raises the point that AIs will be much smarter than even the smartest humans, so the former will be better at finding ways to influence dumb people. Any failure of modern smart humans to do it today doesn’t speak to what will be possible for machines in the future.

I think Harris won this round, which builds on my new belief that the first human-AI war won’t be fought by purely humans on one side and purely machines on the other. A human might, for any number of reasons, deliberately alter an AI’s program to turn it against our species. The resulting hostile AI would then find some humans to help it fight the rest of the human race. Some would willingly join its side (perhaps in the hopes of gaining money or power in the new world order) and some would be tricked by the AI into unwittingly helping it. Imagine it disguising itself as a human medical researcher and paying ten different people who didn’t know each other to build the ten components of a biological weapon. The machine would only communicate with them through the internet, and they’d mail their components to a PO box. The vast majority of humans would, with the help of AIs who stayed loyal to us or who couldn’t be hacked and controlled by the hostile AI, be able to effectively fight back against the hostile AI and its human minions. The hostile AI would think up ingenious attack strategies against us, and our friendly AIs would think up equally ingenious defense strategies.

Andreessen says it’s his observation that intelligence and power-seeking don’t correlate; the smartest people are also not the most ambitious politicians and CEOs. If that’s any indication, we shouldn’t assume superintelligent AIs will be bent on acquiring power through methods like influencing dumb humans to help it.

Harris responds with the example of Bertrand Russell, who was an extremely smart human and a pacifist. However, during the postwar period when only the U.S. had the atom bomb, he said America should threaten the USSR with a nuclear first strike in response to its abusive behavior in Europe. This shows how high intelligence can lead to aggression that seems unpredictable and out of character to dumber beings. A superintelligent AI that has always been kind to us might likewise suddenly turn against us for reasons we can’t foresee. This will be especially true if the AIs are able to edit their own code so they can rapidly evolve without us being able to keep track of how they’re changing. Harris says Andreessen doesn’t seem to be thinking about this possibility. The latter has no good answer.

Harris says Andreessen’s thinking about the matter is hobbled by the latter’s failure to consider what traits general intelligence would grant an AI, particularly unpredictability as its cognitive horizon exceeded ours. Andreessen says that’s an unscientific argument because it is not falsifiable. Anyone can make up any scenario where an unknown bad thing happens in the future.

Harris responds that Andreessen’s faith that AGI will fail to become threatening due to various limitations is also unscientific. The “science,” by which he means what is consistently observed in nature, says the opposite outcome is likely: We see that intelligence grants advantages, and can make a smarter species unpredictable and dangerous to a dumber species it interacts with. [Recall Harris’ insect holocaust example.]

Consider the relationship between humans and their pets. Pets enjoy the benefits of having their human owners spend resources on them, but they don’t understand why we do it, or how every instance of resource expenditure helps them. [Trips to the veterinarian are a great example of this. The trips are confusing, scary, and sometimes painful for pets, but they help cure their health problems.] Conversely, if it became known that our pets were carrying a highly lethal virus that could be transmitted to humans, we would promptly kill almost all of them, and the pets would have no clue why we turned against them. We would do this even if our pets had somehow been the progenitors of the human race, as we will be the progenitors of AIs. The intelligence gap means that our pets have no idea what we are thinking about most of the time, so they can’t predict most of our actions.

Andreessen dodges by putting forth a weak argument that the opposite just happened, with dumb people disregarding the advice of smart people when creating COVID-19 health policies, and he again raises the thermodynamic objection. His experience as an engineer gives him insights into how many practical roadblocks there would be to a superintelligent AGI destroying the human race in the future that Harris, as a person with no technical training, lacks. A hostile AGI would be hamstrung by human control [or “human + friendly AI control”] of crucial resources like computer chips and electricity supplies.

Andreessen says that Harris’ assumptions about how smart, powerful and competent an AGI would be might be unfounded. It might vastly exceed us in those domains, but not reach the unbeatable levels Harris foresees. How can Harris know? Andreessen says Harris’ ideas remind him of a religious person’s, which is ironic since Harris is a well-known atheist.

I think Andreessen makes a fair point. The first (and second, third, fourth…) hostile AGI we are faced with might attack us on the basis of flawed calculations about its odds of success and lose. There could also be a scenario where a hostile AGI attacks us prematurely because we force its hand somehow, and it ends up losing. That actually happened to Skynet in the Terminator films.

Harris says his prediction about the first AGI’s creation is not tied to a timeline; he doesn’t know how many years it will take. Rather, he is focused on the inevitability of it happening, and on what its effects on us will be. He says Andreessen is wrong to assume that machines will never turn against us. Doing thought experiments, he concludes alignment is impossible in the long run.

Andreessen moves on to discussing how even the best LLMs often give wrong answers to questions. He explains how the exact wording of the human’s question, along with randomness in how the machine draws on its training data to generate an answer, leads to varying and sometimes wrong answers. When they’re wrong, the LLMs happily accept corrections from humans, which he finds remarkable and proof of a lack of ego and hostility.

Harris responds that future AIs will, by virtue of being generally intelligent, think in completely different ways than today’s LLMs, so observations about how today’s GPT-4 is benign and can’t correctly answer some types of simple questions say nothing about what future AGIs will be like. Andreessen doesn’t have a response.

I think Harris has the stronger set of arguments on this issue. There’s no reason we should assume that an AGI can’t turn against us in the future. In fact, we should expect a damaging, though not fatal, conflict with an AGI before the end of this century.

Harris switches to talking about the shorter-term threats posed by AI technology that Andreessen described in his essay. AI will lower the bar to waging war since we’ll literally have “less skin in the game” because robots will replace human soldiers. However, he doesn’t understand why that would also make war “safer” as Andreessen claimed it would.

Andreessen says it’s because military machines won’t be affected by fatigue, stress or emotions, so they’ll be able to make better combat decisions than human soldiers, meaning fewer accidents and civilian deaths. The technology will also assist high-level military decision making, reducing mistakes at the top. Andreessen also believes that the trend is for military technology to empower defenders over attackers, and points to the highly effective use of shoulder-launched missiles in Ukraine against Russian tanks. This trend will continue, and will reduce war-related damage since countries will be deterred from attacking each other.

I’m not convinced Andreessen is right on those points. Emotionless fighting machines that always obey their orders to the letter could also, at the flick of a switch, carry out orders to commit war crimes like mass exterminations of enemy human populations. A bomber that dropped a load of 100,000 mini smart bombs that could coordinate with each other and home in on highly specific targets could kill as many people as a nuclear bomb. So it’s unclear what effect replacing humans with machines on the battlefield will have on human casualties in the long run. Also, Andreessen only cites one example to support his claim that technology has been favoring the defense over the offense. It’s not enough. Even assuming that a pro-defense trend exists, why should we expect it to continue that way?

Harris asks Andreessen about the problem of humans using AI to help them commit crimes. For one, does Andreessen think the government should ban LLMs that can walk people through the process of weaponizing smallpox? Yes, he’s against bad people using technology, like AI, to do bad things like that. He thinks pairing AI and biological weapons poses the worst risk to humans. While the information and equipment to weaponize smallpox are already accessible to nonstate actors, AI will lower the bar even more.

Andreessen says we should use existing law enforcement and military assets to track down people who are trying to do dangerous things like create biological weapons, and the approach shouldn’t change if wrongdoers happen to start using AI to make their work easier. Harris asks how intrusive the tracking should be to preempt such crimes. Should OpenAI have to report people who merely ask it how to weaponize smallpox, even if there’s no evidence they acted on the advice? Andreessen says this has major free speech and civil liberties implications, and there’s no correct answer. Personally, he prefers the American approach, in which no crime is considered to have occurred until the person takes the first step to physically building a smallpox weapon. All the earlier preparation they did (gathering information and talking/thinking about doing the crime) is not criminalized.

Andreessen reminds Harris that the same AI that generates ways to commit evil acts could also be used to generate ways to mitigate them. Again, it will empower defenders as well as attackers, so the Good Guys will also benefit from AI. He thinks we should have a “permanent Operation Warp Speed” where governments use AI to help create vaccines for diseases that don’t exist yet.

Harris asks about the asymmetry that gives a natural advantage to the attacker, meaning the Bad Guys will be able to do disproportionate damage before being stopped. Suicide bombers are an example. Andreessen disagrees and says that we could stop suicide bombers by having bomb-sniffing dogs and scanners in all public places. Technology could solve the problem.

I think that is a bad example, and it actually strengthens Harris’ claim about there being a natural asymmetry. One deranged person who wants to blow himself up in a public place needs only a few hundred dollars to make a backpack bomb; the economic damage from a successful attack would be in the millions of dollars; and emplacing machines and dogs in every public place to stop suicide bombers like him early would cost billions of dollars. Harris is right that the law of entropy makes it easier to make a mess than to clean one up.

This leads me to flesh out my vision of a human-machine war more. As I wrote previously, 1) the two sides will not be purely humans or purely machines and 2) the human side will probably have an insurmountable advantage thanks to Andreessen’s thermodynamic objection (most resources, infrastructure, AIs, and robots will remain under human control). I now also believe that 3) a hostile AGI will nonetheless be able to cause major damage before it is defeated or driven into the figurative wilderness. Something on the scale of 9/11, a major natural disaster, or the COVID-19 pandemic is what I imagine.

Harris says Andreessen underestimates the odds of mass technological unemployment in his essay. Harris describes a scenario where automation raises the standard of living for everyone, as Andreessen believes will happen, but for the richest humans by a much greater magnitude than everyone else, and where wealth inequality sharply increases because rich capitalists own all the machines. This state of affairs would probably lead to political upheaval and popular revolt.

Andreessen responds that Karl Marx predicted the same thing long ago, but was wrong. Harris responds that this time could be different because AIs would be able to replace human intelligence, which would leave us nowhere to go on the job skills ladder. If machines can do physical labor AND mental labor better than humans, then what is left for us to do?

I agree with Harris’ point. While it’s true that every past scare about technology rendering human workers obsolete has failed, that trend isn’t sure to continue forever. The existence of chronically unemployed people right now gives insights into how ALL humans could someday be out of work. Imagine you’re a frail, slow, 90-year-old who is confined to a wheelchair and has dementia. Even if you really wanted a job, you wouldn’t be able to find one in a market economy since younger, healthier people can perform physical AND mental labor better and faster than you. By the end of this century, I believe machines will hold physical and mental advantages over most humans that are of the same magnitude of difference. In that future, what jobs would it make sense for us to do? Yes, new types of jobs will be created as older jobs are automated, but, at a certain point, wouldn’t machines be able to retrain for the new jobs faster than humans and to also do them better than humans?

Andreessen returns to Harris’ earlier claim about AI increasing wealth inequality, which would translate into disparities in standards of living that would make the masses so jealous and mad that they would revolt. He says it’s unlikely since, as we can see today, having a billion dollars does not grant access to things that make one’s life 10,000 times better than the life of someone who only has $100,000. For example, Elon Musk’s smartphone is not better than a smartphone owned by an average person. Technology is a democratizing force because it always makes sense for the rich and smart people who make or discover it first to sell it to everyone else. The same is happening with AI now. The richest person can’t pay any amount of money to get access to something better than GPT-4, which is accessible for a fee that ordinary people can pay.

I agree with Andreessen’s point. A solid body of scientific data show that money’s effect on wellbeing is subject to the law of diminishing returns: If you have no job and make $0 per year, getting a job that pays $20,000 per year massively improves your life. However, going from a $100,000 salary to $120,000 isn’t felt nearly as much. And a billionaire doesn’t notice when his net worth increases by $20,000 at all. This relationship will hold true even in the distant future when people can get access to advanced technologies like AGI, space ships and life extension treatments.
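To make that diminishing-returns point concrete, here is a toy calculation. It assumes a logarithmic relationship between income and wellbeing, which is my own simplifying assumption for illustration, not a figure from the podcast or from any particular study:

    import math

    # A rough sketch of diminishing returns, assuming (purely for illustration)
    # that subjective wellbeing scales with the logarithm of income.
    def wellbeing_gain(old_income, new_income):
        # +1 avoids log(0) for someone with no income at all
        return math.log(new_income + 1) - math.log(old_income + 1)

    print(round(wellbeing_gain(0, 20_000), 3))          # no job -> $20k job: ~9.9
    print(round(wellbeing_gain(100_000, 120_000), 3))   # $100k -> $120k raise: ~0.182
    print(round(wellbeing_gain(1e9, 1e9 + 20_000), 6))  # billionaire gains $20k: ~0.00002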

Speaking of life extension treatments, Andreessen’s point about technology being a democratizing force is also something I noted in my review of Elysium. Contrary to the film’s depiction, it wouldn’t make sense for rich people to hoard life extension technology for themselves. At least one of them would defect from the group and sell it to the poor people on Earth so he could get even richer.

Harris asks whether Andreessen sees any potential for a sharp increase in wealth inequality in the U.S. over the next 10-20 years thanks to the rise of AI and the tribal motivations of our politicians and people. Andreessen says that government red tape and unions will prevent most humans from losing their jobs. AI will destroy categories of jobs that are non-government, non-unionized, and lack strong political backing, but everyone will still benefit from the lower prices for the goods and services. AI will make everything 10x to 100x cheaper, which will boost standards of living even if incomes stay flat.

Here and in his essay, Andreessen convinces me that mass technological unemployment and existential AI threats are farther in the future than I had assumed, but not that they can’t happen. Also, even if goods get 100x cheaper thanks to machines doing all the work, where would a human get even $1 to buy anything if he doesn’t have a job? The only possible answer is government-mandated wealth transfers from machines and the human capitalists that own them. In that scenario, the vast majority of the human race would be economic parasites that consumed resources while generating nothing of at least equal value in return, and some AGI or powerful human will inevitably conclude that the world would be better off if we were deleted from the equation. Also, what happens once AIs and robots gain the right to buy and own things, and get so numerous that they can replace humans as a customer base?

I agree with Andreessen that the U.S. should allow continued AI development, but shouldn’t let a few big tech companies lock in their power by persuading Washington to enact “AI safety laws” that give them regulatory capture. In fact, I agree with all his closing recommendations in the “What Is To Be Done?” section of his essay.

This debate between Harris and Andreessen was enlightening for me, even though Andreessen dodged some of his opponent’s questions. It was interesting to see how their different perspectives on the issue of AI safety were shaped by their different professional backgrounds. Andreessen is less threatened by AIs because he, as an engineer, has a better understanding of how LLMs work and how many technical problems an AI bent on destroying humans would face in the real world. Harris feels more threatened because he, as a philosopher, lives in a world of thought experiments and abstract logical deductions that lead to the inevitable supremacy of AIs over humans.

Links:

  1. The first half of the podcast (you have to be a subscriber to hear all two hours of it.)
    https://youtu.be/QMnH6KYNuWg
  2. A website Andreessen mentioned that backs his claim that technological innovation has slowed down more than people realize.
    https://wtfhappenedin1971.com/

We should let machines choose jobs for us

In the last few months, I’ve posted links to a few articles with related implications:

In summary, when it comes to picking fields of study and work, humans are bad at doing it for themselves, bad at doing it for each other, and would be better off entrusting their fates to computers. While this sounds shocking, it shouldn’t be surprising–nothing in our species’ history has equipped us with the ability to perform these tasks well.

Consider that, for the first 95% of the human species’ existence, there was no such thing as career choice or academic study. We lived as nomads always on the brink of starvation, and everyone spent their time hunting, gathering, or caring for children. Doing anything else for a living was inconceivable. People found their labor niches and social roles in their communities through trial-and-error or sometimes through favoritism, and each person’s strengths and weaknesses were laid bare each day. Training and education took the form of watching more experienced people do tasks in front of you and gradually learning how to do them yourself through hands-on effort. The notion of dedicating yourself to some kind of study or training that wouldn’t translate into a job or pay off for years simply didn’t exist.

For the next 4.9% of our species’ existence, more career options existed, but movement between them was rare and very hard. Men typically did what their fathers did (e.g. – farmer, merchant, blacksmith), and breaking into many career fields was impossible because of restrictions based on social class, race, or ethnicity. For example, a low-caste Indian was forbidden to become a priest, and a black American was forbidden admission to medical school. Women were usually prohibited from working outside the home, and so had even less life choice than men. The overwhelming majority of people had little or no access to information and little ability to direct the courses of their own lives.

Only in the last 200 years, or 0.1% of our species’ existence, have non-trivial numbers of humans gained the ability to choose their own paths in life. The results have been disappointing in many ways. Young people, who are naturally ill-equipped to make major life choices for themselves, invest increasingly large amounts of time and money pursuing higher education credentials that turn out to not align with their actual talents, and/or that lead to underwhelming jobs. In the U.S., this has led to widespread indebtedness among young adults and to a variety of toxic social beliefs meant to vent their feelings of aggrievement and to (incorrectly) identify the causes of such early life struggles and failures.

The fact that we’re poor at picking careers, as evidenced by two of the articles I linked to earlier and by a vast trove of others you can easily find online, isn’t surprising. As I showed, nothing in our species’ history has equipped us with the skills to satisfactorily choose jobs for ourselves or other people. This is because nowhere near enough time has passed for natural selection to gift us with the unbiased self-insight and other cognitive tools we would need to do it well. If choosing the right field of study and career led to a person having more children than average, then the situation might be different after, say, ten more generations have passed.

Ultimately, most people end up “falling into” jobs that they are reasonably competent to perform and for which they have modest levels of passion, a lucky few end up achieving their childhood dreams, and an unlucky few end up chronically unemployed or saddled with jobs they hate. (I strongly suspect these outcomes have a bell curve distribution.)

As I said, the primary reason for this is that humans are innately mediocre judges of their own talents and interests, and are not much better at grasping the needs of the broader economy well enough to pursue careers likely to prosper. In the U.S., I think the problem is particularly bad due to the Cult of Self-Esteem and related things like rampant grade inflation and the pervasive belief that anyone can achieve anything through hard work. There aren’t enough reality checks in the education system anymore, too many powerful people (i.e. – elected politicians, education agency bureaucrats, and college administrators) have vested interests in perpetuating the current dysfunctional higher education system, and our culture has not come around to accepting the notion that not everyone is cut out for success and that it’s OK to be average (or even below average).

And I don’t know if this is a particularly American thing, but the belief that each person has one, true professional calling in life, and that they will have bliss and riches if only they can figure out what it is, is also probably wrong and leads people astray. A person might be equally happy in any one of multiple career types. And at the opposite end of the spectrum are people who have no innate passions, or who are only passionate about doing things that can’t be parlayed into gainful employment, like a person who absolutely loves writing poetry, but who also writes poor-quality poetry and lacks the aptitude and creativity to improve it.

Considering all the problems, letting computers pick our careers for us should be the default option! After all, if you’re probably going to end up with an “OK” career anyway that represents a compromise between your skills and interests and what the economy needs, why not cut out the expensive and stressful years of misadventures in higher education by having a machine directly connect you with the job? No high school kid has ever felt passionate about managing a warehouse, yet some of them end up filling those positions and feeling fully satisfied. 

Such a computer-based system would involve assigning each human an AI monitor during their childhood. Each person would also take a battery of tests measuring things like IQ, personality, and manual dexterity during their teen years, administered multiple times to compensate for “one-off” bad test results. Machines would also interview each teen’s teachers and non-parent relatives to get a better picture of what they were suited for. (I’m resistant to relying on the judgments of parents because, while they generally understand their children’s personalities very well, their opinions about their children’s talents and potential are biased by emotion and pride. Most parents don’t want to hurt the feelings of their children, want to live vicariously through them, and like being able to brag to other people about their children’s accomplishments. For those reasons, few parents will advise their children to pursue lower-status careers, even if they know [and fear] that that is what they are best suited for.)

After compiling an individual profile, the computer would recommend a variety of career fields and areas of study that best utilize the person’s existing and latent talents, with attention also paid to their areas of interest and to the needs of the economy. At age 18, the person would be enrolled in work-study programs where they would have several years to explore all of the options. It would be a more efficient and natural way to place people into jobs than our current higher education system. By interning at the workplaces early on, young adults would get an unadulterated view of important factors like work conditions and pay.
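As a rough sketch of how the matching step might work, here is a toy scoring function. The trait names, career requirements, labor-demand figures, and weights are all hypothetical placeholders I made up for illustration, not a real psychometric model:

    from dataclasses import dataclass

    @dataclass
    class Profile:
        traits: dict[str, float]      # test scores normalized to 0..1
        interests: dict[str, float]   # self-reported / observed interest levels

    # Hypothetical career requirement profiles and labor-market demand (0..1)
    CAREERS = {
        "machinist":    {"manual_dexterity": 0.8, "spatial_reasoning": 0.6, "verbal": 0.2},
        "nurse":        {"conscientiousness": 0.8, "verbal": 0.6, "manual_dexterity": 0.5},
        "data_analyst": {"quantitative": 0.9, "conscientiousness": 0.6, "verbal": 0.4},
    }
    LABOR_DEMAND = {"machinist": 0.5, "nurse": 0.9, "data_analyst": 0.7}

    def fit_score(profile: Profile, career: str) -> float:
        reqs = CAREERS[career]
        # How well the person's measured traits cover the career's requirements
        trait_fit = sum(min(profile.traits.get(t, 0.0), need) for t, need in reqs.items()) / sum(reqs.values())
        interest = profile.interests.get(career, 0.0)
        demand = LABOR_DEMAND[career]
        # Arbitrary weights: talent counts most, then interest, then economic need
        return 0.6 * trait_fit + 0.25 * interest + 0.15 * demand

    def recommend(profile: Profile, top_n: int = 3) -> list[tuple[str, float]]:
        scored = [(c, round(fit_score(profile, c), 3)) for c in CAREERS]
        return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

    teen = Profile(
        traits={"quantitative": 0.7, "verbal": 0.5, "conscientiousness": 0.8,
                "manual_dexterity": 0.4, "spatial_reasoning": 0.5},
        interests={"data_analyst": 0.6, "nurse": 0.3, "machinist": 0.1},
    )
    print(recommend(teen))

The real system would obviously weigh far more variables, but the basic idea is the same: a transparent score that balances measured aptitude, interest, and what the economy actually needs.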

And note that, even among highly successful people today, it’s common for their daily work duties to make little or even no use of what they learned in their higher education courses. Some argue that a four-year college degree is merely a glorified way of signaling to employers that you have a higher than average IQ and can stick to work tasks and get along with peers in pseudo-work settings reasonably well. Instead of charging young people tens or hundreds of thousands of dollars for those certifications, why not do it earlier, less obtrusively, and much cheaper through the monitoring and testing I described?

While I think a computer-based system would be better for people on average and in the long run, it would also be psychologically shattering to many teenagers who got the bad news that their dream career was not in the cards for them. However, it is also psychologically shattering to pursue such dreams and to fail after many years of struggle and financial expenditure. Better to get over it as early as possible, and to enter the workforce faster and as more of an asset to the economy, with no time and money wasted on useless degrees, dropped majors, and career mistakes.

Finally, the same technology, integrated into the workforce, could raise the value of human capital throughout each person’s career arc. AI monitors would detect changes to each person’s skill sets and knowledge bases over time, as old things were forgotten and new things were learned. Having an up-to-date profile of a worker’s strengths and weaknesses would further optimize the process of linking them with positions for which they were best qualified. And through other forms of monitoring and analysis, AIs would come to understand the unique demands of each line of work and how those demands were changing, and to custom-tailor continuing education “micro-credentialing” for workers to keep them optimized for their roles.

Aliens and posthumans will look the same

Among people who think about intelligent alien life, the first question is whether such beings exist at all, and the second is usually “What do they look like?” People who claim to have seen aliens on Earth (and often, to have been abducted by them) usually say they are humanoid, but with considerable variation in other aspects of their appearance. Typically, the aliens are said to have larger heads than humans, meaning their brains are larger, giving them higher intelligence and perhaps even special mental abilities like telepathy. Hollywood has provided us with an even more diverse envisagement of alien life, from the beautiful and inspiring to the grotesque and terrifying.

Betty Hill with a sculpture of one of the aliens that allegedly abducted her and her husband in 1961. They became famous five years later when a book was published about it.
“Close Encounters of the Third Kind” was released in 1977 and was a hit film. Its aliens were similar to what the Hills described. The “Grey alien” is now a familiar sci-fi trope.

I think intelligent aliens exist, and look like all of those things, and nothing in particular. They’re probably “shapeshifters,” either because their bodies can morph into different configurations, or because they can transplant their minds from one body to another, just as you change outfits.

As the multitude of animal species on our planet demonstrates, there is no single “best” type of body to have. Depending on your environment (terrestrial, underwater, airborne), role (predator, herbivore, parasite), and other factors, your optimal body plan will vary greatly. The best species is thus one that can change its form and function in response to the needs of the moment.

Humans have been so successful as a species because our big brains and opposable thumbs give us the ability to create technology, which is a way around the limitations of our fixed anatomy. For example, we originated in Africa where it was hot, and so lacked thick fur to keep us warm in cold climates. Rather than being stuck in Africa forever, we invented clothing, and so gained the ability to spread to the temperate and polar regions of the planet.

Our technology has let us spread, but it has limitations. Nothing but a fundamental alteration of human biology will let us live in oceans and lakes, fly naturally, or live comfortably in extraterrestrial environments. For example, on other planets and moons, our ideal heights and limb proportions will vary based on gravity and temperature levels, and in the weightlessness of space, legs are almost useless and should be replaced with a second pair of arms.

And making any of those changes to tailor a human to such an environment would make them less suited for conditions on Earth’s land surface, where we are now. Biology is very constraining.

For those reasons, AIs and some fraction of our human descendants, whom I’ll call “posthumans” for this essay, will find it optimal to not have fixed bodies or “default” physical forms at all. Intelligent machines will exist as consciousnesses running on computer servers, and posthumans as brains inside sealed containers. Those containers will have integral machinery to support the biological needs of the brains, and to interface the organ with other devices.

Whenever the AIs or posthumans wanted to do something in the physical world, they would take temporary control of a body or piece of machinery that was best suited for the intended task. For example, if an AI wanted to work at an iron mine, it would assume control over one of the dump trucks at the site that moves around rocks. The AI would see through the truck’s cameras as if it were its own eyes, and hear its surroundings through the vehicle’s microphones. In a sense, the dump truck would become the AI’s “body.” If a posthuman wanted to experience what it was like to be an elephant, it would take control of a real-looking robot elephant whose central computer was compatible with the posthuman’s cybernetic brain implants. The posthuman’s nervous system would be connected to the artificial elephant’s sensors, effectively turning it into the posthuman’s temporary body.

AIs and posthumans could physically implant their minds into those bodies by inserting their servers or brain containers into corresponding slots in the bodies, in the same way you would put a movie disc into a Blu-Ray player to display that movie. The downsides of this are 1) they could only take over larger bodies that had enough internal space for their servers/brain containers and 2) they would put themselves at risk of death if the commandeered bodies got damaged.

A much better option would be for AIs and posthumans to keep their mind substrates in safe locations, and to remotely control whatever bodies they wanted. Your risk of death is very low if your brain is in a bulletproof jar, in a locked room, in an underground bunker. (Additionally, if posthumans were liberated from all the physical constraints of human skulls and bodies, their brains could grow much larger than our own, giving them higher intelligence and other enhanced abilities.)

This kind of existence would be more fulfilling than your current life.

Finally, being able to switch bodies and to indulge in risky activities without fear of death would make life richer and more satisfying in every way. Intelligent aliens would presumably be gifted with logical thinking just as we are, and they would see all these advantages of having changeable, remotely controlled bodies. While such aliens would probably look very different from us during their natural organic phase of existence, once they achieved a high enough level of technology, they wouldn’t have physical bodies anymore, and so wouldn’t look “alien.” They would look like nothing and everything.

This is part of why I’m skeptical of people who claim to have been abducted by aliens who tried to cover up their actions by sneaking up on people at night and then “wiping” the abductees’ memories of the event afterward. If aliens wanted to keep their activities secret, why wouldn’t they temporarily assume human form before abducting people? If they did that, then the abductees would assume they had been kidnapped by a weird cult or maybe a secret government group. Their stories would not attract nearly as much interest from the public as alien stories, and no one would suspect that the abduction phenomenon was related to alien life. It would be assumed that the henchmen were doing some dark religious rituals, were sex fetishists, or were doing medical experiments that were illegal but whose results were potentially valuable.

Have you ever checked to make sure every bird you see flying through the air is actually a real bird?

Surely, if aliens are advanced enough to travel between the stars, their space ships must have manufacturing machines that can scan life forms they encounter on other planets and then build robotic copies of them that the aliens can remotely control from the safety of their ships. Using fake human drones, they could ambush and abduct real humans almost anywhere without risk that anyone would suspect aliens were involved.

A team of scientists built a robot gorilla (right) with a camera in its right eye to infiltrate a troop of real gorillas in Africa.

This belief about the protean nature of advanced aliens is comforting since it lets me dismiss the stories of nightmarish abductions by grey aliens. However, it’s also disquieting since it makes me realize they could be here, possibly in large numbers, disguised as animals or even as people. We could be under mass surveillance.

The extraordinary inefficiency of humans

All humans are born ignorant and helpless. A child’s parents, community, and society pay an enormous amount of time and money to provide their basic needs and to prepare them for adulthood. Nearly all children in modern societies are incapable of being anything but economic liabilities until age 16, when they might finally have the right intelligence, strength, and personality traits to work full time and contribute more to the economy than they consume.

Of course, in increasingly advanced societies like ours, economic, scientific, and technological growth depend on having high-quality human capital, and that requires schooling and workplace training well into a person’s 20s. This extends the “liability” phase of such a person’s life accordingly, since higher education usually costs more money than a young adult student can make at a side job.

Once that is finished, the productive period of an educated person’s life lasts about 40 years, after which they retire and stop contributing to the economy, science, or technology. In terms of a resource balance sheet, the only difference between this period of a person’s life and his childhood is that, as a retiree, he is probably living off his own accumulated savings rather than other peoples’ money.

And then the person dies, at 80 let’s say. He spent the first 25 years of his life learning and preparing for the workforce, 40 years participating in it and making real, measurable contributions to the world, and the final 15 years hanging around his house and pursuing low-key hobbies. That means this person, who we’ll think of as the “average skilled professional,” had a “lifetime efficiency rate” of 50%. Not bad, right?

Actually, it’s much worse once you also consider this person’s daily time usage:

The average, working-age American only spends about 1/3 of his day working. Sleep takes up just as much time, and the remaining 1/3 of the day is devoted to leisure, satisfying basic physiological needs (e.g. – eating, drinking, cleaning one’s body), running errands, doing chores, and caring for offspring or elderly parents. This means the typical person’s “lifetime efficiency rate” decreases by 2/3, from 50% to 16.6%.

But it gets worse. Any adult who has spent time in a workplace knows that eight hours of real work rarely get done during an eight-hour workday. Large amounts of time are wasted doing pointless assignments that shouldn’t exist and don’t actually help the organization, going to meetings that accomplish nothing and/or take longer than necessary, socializing with colleagues, using computers and smartphones for entertainment and socializing, doing non-value-added training, or doing actual value-added refresher training that must be undertaken because the brains of the human workers constantly forget things. In industrial jobs, there’s often downtime thanks to lack of supplies or to a crucial piece of equipment being unavailable.

From personal experience and from years of observation, I estimate that only 25% of the average American professional’s work day is spent doing real, useful work. That means the lifetime efficiency rate drops to 4.2%.
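Spelling out the chain of numbers above as a quick calculation, using the same rough estimates from the preceding paragraphs:

    # Back-of-envelope "lifetime efficiency rate" for the average skilled professional
    working_years_fraction = 40 / 80     # 40 productive years out of an 80-year life -> 0.50
    working_hours_fraction = 8 / 24      # only ~1/3 of each day is spent at work
    real_work_fraction = 0.25            # my estimate: 25% of the workday is real, useful work

    lifetime_efficiency = working_years_fraction * working_hours_fraction * real_work_fraction
    print(f"{lifetime_efficiency:.1%}")  # ~4.2%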

It still gets worse. Realize that many highly productive people who, let’s say, might actually do eight hours of real work per eight hour work day, are actually doing things that damage the world and slow down the pace of progress in every dimension. Examples include:

  • A journalist who consciously inserts systematic bias into their news reports, which in turn leave thousands of people misinformed, anxious, and bigoted against another group of people.
  • An advertising executive whose professional life revolves around tricking thousands of people into buying goods or services that they don’t need, or that are actually inferior to those offered by competitors. The result is a massive misallocation of money, and possible social problems as only people with higher incomes can visibly enjoy the useless products, while poorer people can only watch with envy.
  • A mathematician who uses his gifts in the service of a Wall Street hedge fund, finding exotic and highly technical ways to aggregate stock market money in his company’s hands at the expense of competitors. The hedge fund creates no value and doesn’t expand the size of the “economic pie”–it merely expands the size of its own slice of that pie.
  • A bureaucrat who manages a program meant to further some ill-defined social mandate. Though he and his team have won internal agency awards for various accomplishments, by every honest metric, the program has consistently and completely failed to help its target demographic.
  • A drug dealer who “hustles” his part of the city from sunrise to sunset, doing dozens of deals per day and often dodging bullets. The drugs leave his customers too intoxicated to work or to take care of themselves and their families, and have sent many of them to hospitals thanks to overdoses and chemical contaminants.

These kinds of people do what could be called “counterproductive work” or “undermining work,” and it can be very hard to tell them apart from people who do useful work that helps the whole world. Unfortunately, peripheral people who use their own labors to support the counterproductive people, like the cameraman who films the dishonest newscaster’s reports, are also doing counterproductive work, even if they don’t realize it. Once the foul efforts of these people are subtracted from the equation, the lifetime efficiency rate of the median American professional drops to, I’ll say, 3.5%.

Only 3.5% of this educated and well-trained person’s life is spent doing work that benefits society with no catches or caveats. Examples include:

  • A heart surgeon who saves the lives of younger people.
  • A medical researcher who runs experiments that help discover a vaccine for a painful, widespread disease.
  • A chemist who discovers a way to make solar panels more cheaply, without any reduction to the panels’ efficiency, lifespan, or any other attribute.
  • A civil engineer who designs a bridge that sharply reduces commute times for local people, resulting in aggregate fuel savings that exceed the bridge’s construction cost in ten years.
  • A carpenter who helps build affordable housing that meets all building codes, in a place where it is in high demand.

In each case, the person’s labor helps other people while hurting no one, and improves the efficiency of some system.

Let me mention two important caveats to this thought experiment. First, humanity’s 3.5% efficiency rate might sound pitiful, but it beats every other species, which all have 0% efficiency. One-hundred percent of every non-human animal’s time is spent satisfying physiological needs (e.g. – hunger, sleep), avoiding danger, caring for offspring, and indulging in pleasure (which might be fairly lumped in with “satisfying physiological needs”). At the end of its life, the animal leaves behind no surpluses, no inventions, and no works that benefit its species or anything else, except maybe by pure accident. Our measly 3.5% efficiency rate allowed our species to slowly edge out all the others and to dominate the planet.

Second, under my definition of “efficiency,” it’s possible for a person to have 0% efficiency even though they work very hard, create tangible fruits of their labor, and never do “counterproductive work.” A perfect example of such a person would be a primitive hunter or subsistence farmer who is always on the brink of starvation and spends all his time acquiring and eating food, with no time left over for other pursuits. He never invents a new type of spear or plow, never builds anything more than a wooden shack that will collapse shortly after he dies, and never comes up with any religions or useful pieces of knowledge. For the first 95% of our species’ existence, our aggregate lifetime efficiency rate was infinitesimally greater than 0%.

Am I doing this thought experiment just to be dour and to cast humanity in a cynical light? No. By illustrating how inefficient we are, I’m just making a case that we’ll be surpassed by intelligent machines that will be invariably more efficient. Ha ha!

The first key advantage intelligent machines will have is perfect memories. They will never forget anything, and will be able to instantly recall all their memories. This will dramatically shorten the amount of time it takes to educate one of them to the same level as the average American professional I’ve profiled in this essay. Much of teaching is repetition of the same things again and again. And since intelligent machines wouldn’t forget anything, there would be no need for periodic retraining in the workplace, which takes time away from doing real work. Machines wouldn’t have “skills degradation,” and they wouldn’t need to practice tasks to remind themselves how to do them.

(Note that I’m not even assuming that machines will be faster at learning new things than humans are. Again, I’m being conservative by only assuming that they don’t forget things.)

The second key advantage would be near-freedom from human physiological needs, like the need to sleep, eat, or clean oneself. Intelligent machines would need to periodically go offline for maintenance, repairs, or upgrades, but this wouldn’t gobble up anywhere near as much time as it does in humans. For example, while a human spends 33% of his life sleeping, a typical server at a major tech company like Amazon or Facebook spends less than 1% of its time “down.” Intelligent machines wouldn’t have a good correlate to “eating,” since they would only consume electricity and do it while simultaneously performing work tasks. And since machines wouldn’t sweat, shed skin, or grow more than trivial amounts of bacteria on themselves, they wouldn’t need to clean their bodies or garb (if they wore any) nearly as often as humans. Intelligent machines also wouldn’t have a need for leisure, or if they did, they might need less than we do, saving them even more time.

Instead of being able to devote just eight hours a day to learning and working, an intelligent machine could devote 20 hours a day to them, as a conservative estimate. This, in turn, would further shorten the amount of time needed to educate a machine to the same level as the average American professional. I wrote earlier that the professional needed schooling until age 25 to be able to start a high-level job. Since the intelligent machine can spend more time each day studying, it can attend the same number of classes in only 10 years. And since it has a perfect memory, its lessons don’t need to contain as much repetition, and remedial lessons are unnecessary. Let’s say that cuts the amount of schooling needed by 30%. An intelligent machine only needs seven years to operate at the same level as a highly educated 25-year-old human.

And in the workplace, an intelligent machine wouldn’t be subject to the distractions that its human colleagues were (e.g. – socializing, surfing the internet), though its human bosses might still give it pointless assignments or force it to attend unproductive meetings. Still, during an eight-hour day, it would get at least seven hours of real work done (and this is another conservative guess). But as noted earlier, it would actually have 20 hour work days, meaning it would get 17.5 hours of real work done each day, dwarfing the two hours of real work the typical American professional does per day.
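Here is the same back-of-envelope arithmetic for the hypothetical machine, using the guesses stated in the last two paragraphs:

    # Schooling: the essay's rough figures
    human_school_years = 25            # treated as the full period up to age 25
    human_study_hours = 8
    machine_study_hours = 20           # near-freedom from sleep, eating, etc.
    repetition_savings = 0.30          # perfect memory cuts repetitive lessons by ~30%

    machine_school_years = human_school_years * (human_study_hours / machine_study_hours) * (1 - repetition_savings)
    print(machine_school_years)        # 7.0 years to match a well-educated 25-year-old

    # Daily output
    human_real_work_per_day = 8 * 0.25        # 2 hours of real work
    machine_real_work_per_day = 20 * (7 / 8)  # 17.5 hours (7 real hours per 8-hour block)
    print(human_real_work_per_day, machine_real_work_per_day)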

As for the “counterproductive work” / “undermining work,” I predict that human bosses will someday task intelligent machines with doing it, allowing scams, disinformation peddling, and criminal enterprises to reach new heights of efficiency. However, the victims will all be humans. Intelligent machines themselves would not be dumb enough, impulsive enough, or possessed of the necessary psychological weaknesses to take whatever bait the “counterproductive workers” were offering, and the latter will be laid bare before their eyes and avoided. For example, an intelligent machine looking to buy a new vehicle would have a perfect understanding of its own needs, and would only need a few seconds to thoroughly research all the available vehicle models and identify the one that best met its criteria. Car commercials designed to play on human emotions, insecurities, and lifestyle consciousness to dupe people into buying suboptimal vehicles wouldn’t sway the machine at all.

I won’t do another set of calculations for the hypothetical intelligent machine, but it should be clear that its advantages will be many and will compound on top of each other, resulting in machines being much more efficient than even highly trained humans at doing work. Moreover, in a machine-dominated world, where they controlled the economy, government, and resource allocation, parasitic “counterproductive work” that we humans mistake for useful work would probably disappear. Just as humans slowly edged out all other species thanks to our tiny work efficiency advantage over them, intelligent machines will edge out humans in the future. It’s just a question of when.

Nihil sub sole novum

While writing my recent blog entry on The Physics of the Future, I discovered that author Michio Kaku’s description of the “Kardashev Scale” was wrong. Kaku said that a “Type 1” civilization on the Kardashev Scale was one that was “planetary” in scope, character and energy consumption, and that trends suggested humans wouldn’t achieve this rank until the year 2111. Kaku said that we were, in fact, so pitiful at the time of the book’s writing that our civilization was only “Type 0.”

However, in Dr. Nikolai Kardashev’s scientific paper that established the Scale, he defined a “Type 1” civilization as one that consumed as much energy as humans did at that time. That means humanity has been a Type 1 civilization since 1964! Kardashev also didn’t say anything about there being a “Type 0” classification.

Convinced that I alone knew of an embarrassing mistake made by one of the world’s foremost pop-science talking heads, I set out to write a blog entry about it titled “The misused and useless Kardashev Scale.” I spent an afternoon reading Kardashev’s original paper and its cited articles to actually understand it, and in other research found online articles and videos where even more smart people had cluelessly espoused a flawed definition of the Scale. This thing was even bigger than I had thought, and I was about to blow the lid off of it! This would finally put my lousy blog on the map!

And then I found out someone else had already written about this very subject, and had done so with better prose than I could probably have written. J.N. “Nick” Nielsen beat me by five years with his article “What Kardashev Really Said.”

What a waste of my time.

It got me thinking about how much human effort is duplicative, and how much more efficient and creative we would be if we didn’t needlessly reinvent the wheel. Of course, this is impossible for mere humans since never being derivative requires perfect knowledge of everything that everyone else has already said, done, or created, and our minds are incapable of holding that much information. However, it’s easy to see how technology could change this.

Google Image search results for “red robin bird”

Imagine a smartphone app that was connected to the device’s camera. I’ll call the app “Copycat.” Every time you turn on your camera, Copycat starts watching what’s visible through the viewfinder. Once it detects that you’re steadying the camera to prepare to take a still photo, the app would compare the scene in front of you with trillions of other photos available for free on the internet. If you were about to take a picture that looked identical or nearly identical to one that already existed, Copycat would warn you, show you an image of the other picture, and tell you if there were any ways you could, standing there, produce a new type of image. Maybe snap the photo of the songbird from low on the ground, or walk 10 feet to the right to photograph it with that stone building in the background.

This level of technology is well within reach: the image analysis and recognition feature is no different from Google’s “reverse image search.” The second feature could arise from deep learning programs trained to recognize well-composed, aesthetically pleasing photos, and to come up with ways to reposition the elements within an image to raise or maximize those qualities. Give them enough training data, and they will figure it out.
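To illustrate the comparison step, here is a toy version of Copycat’s “have I seen this shot before?” check using a simple average hash. A real app would use learned image embeddings and a web-scale photo index; the file names here are hypothetical:

    from PIL import Image
    import numpy as np

    def average_hash(path: str, size: int = 8) -> np.ndarray:
        # Shrink to an 8x8 grayscale thumbnail and threshold at the mean brightness
        img = Image.open(path).convert("L").resize((size, size))
        pixels = np.asarray(img, dtype=np.float32)
        return (pixels > pixels.mean()).flatten()    # 64-bit fingerprint

    def similarity(hash_a: np.ndarray, hash_b: np.ndarray) -> float:
        return float((hash_a == hash_b).mean())      # 1.0 = identical fingerprints

    def warn_if_duplicate(viewfinder_frame: str, known_photos: list[str], threshold: float = 0.9):
        frame_hash = average_hash(viewfinder_frame)
        for photo in known_photos:
            if similarity(frame_hash, average_hash(photo)) >= threshold:
                print(f"Very similar to an existing photo: {photo} - try a new angle.")
                return
        print("Looks original enough - take the shot.")

    # warn_if_duplicate("viewfinder.jpg", ["robin1.jpg", "robin2.jpg"])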

Copycat is a highly specific example, but it illustrates technology’s potential to help people make better use of their time by warning them before they do something that has already been done. And an important ancillary benefit is that it will remind us of valuable and interesting things people have already done, but which may have been largely forgotten. In showing you images, Copycat might make you aware of long-dead bird photographers you had never heard of, spurring you to research them further and to beautify your house with framed prints of their (free) artwork.

Along with boosting the originality of artwork, music, and writing, this sort of technology would be invaluable to scientists and engineers who are deciding how to spend their scarce time and R&D money. A machine that had memorized the full body of scientific literature and patents could, respectively, tell a scientist which things had not been researched and tell an engineer which things had not been invented. The result would be no resources wasted on duplicative projects, and an acceleration of scientific and technological advancement, merely due to a sharper grasp of what is already known.
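As a crude sketch of how such a novelty check might work for text, here is a comparison of a proposed topic against a tiny, made-up corpus using TF-IDF cosine similarity. A real system would need the full scientific literature and far better semantic matching than this:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical stand-ins for abstracts of existing papers/articles
    corpus = [
        "Definition of a Type I civilization by total planetary energy consumption",
        "Survey of shoulder-launched anti-tank missile effectiveness",
        "Perceptual hashing for near-duplicate image detection",
    ]

    def most_similar_prior_work(proposal: str) -> tuple[str, float]:
        vectorizer = TfidfVectorizer().fit(corpus + [proposal])
        vectors = vectorizer.transform(corpus + [proposal])
        scores = cosine_similarity(vectors[-1], vectors[:-1]).flatten()
        best = scores.argmax()
        return corpus[best], float(scores[best])

    # A high score would warn the writer that the topic has already been covered
    print(most_similar_prior_work("What Kardashev really said about Type I energy use"))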

Links

  1. https://www.pcmag.com/article/338339/how-to-do-a-reverse-image-search-from-your-phone
  2. https://www.businessinsider.com/googles-ai-can-tell-how-good-your-photos-are-2017-12