Musings 6

It won’t be long before you’ll be able to feed a computer a script or the text of a book, and it will produce a professional-quality audiobook or film. It would be so fascinating to finally see the great, unmade movies (like Stanley Kubrick’s epic biopic about Napoleon) or to see movies that stayed true to their written source material so they could be compared with what was actually made. Jurassic Park comes to mind as a famous movie that diverged greatly from the book. Imagine the same CGI-generated characters in the same island setting, with the same style of soundtrack and cinematography, but with dialogue and plot points different from those in the film we all know.

Will RV living and houseboat living be the norm in the future? Think about it: if humans won’t have jobs in the future, they won’t have enough money to buy houses, making RVs and boats the only affordable options. Even a bus-sized recreational vehicle is only 1/3 the price of a typical American home, and a houseboat with the same internal volume is 2/3 the price. Also, without jobs, humans would have much less reason to stay tethered to one location and could indulge their wanderlust. Additionally, as VR becomes more advanced, people won’t need large TVs or computer monitors, reducing the need for spacious living rooms.

Humans talking about the need to control AGI to ensure our dominance is not threatened are like Homo erectus grunting to each other about the need to keep Homo sapiens down somehow. It’s understandable for a dominant species to want to preserve its status, but that doesn’t mean such a thing is in the best interests of civilization.

It’s still unclear whether LLMs will ever achieve general intelligence. A lot of hope rests on “scaffolded systems”: LLMs that have more specialized programs at their disposal and are smart enough to invoke them for problems the LLM alone can’t solve. (A rough sketch of the idea is below.)
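Since this is the one mechanical idea in these musings, here’s roughly what that loop looks like. It’s a minimal sketch in Python: the llm() stub and the “CALL” convention are my own inventions for illustration (real systems use structured function-calling APIs), but the shape is the same: the model either answers directly or requests a tool, and the scaffold runs the tool on its behalf.

import ast
import operator

def calculator(expression: str) -> str:
    """Specialized tool: exact arithmetic, something raw LLMs are famously shaky at."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expression, mode="eval").body))

TOOLS = {"calculator": calculator}

def llm(prompt: str) -> str:
    """Stand-in for a real model: pretend it has learned to request a tool
    when it recognizes a task outside its own competence."""
    if any(ch.isdigit() for ch in prompt):
        return "CALL calculator: " + prompt
    return "A fluent but unaided answer."

def scaffolded_answer(prompt: str) -> str:
    reply = llm(prompt)
    if reply.startswith("CALL "):          # the model chose to use a tool
        tool_name, argument = reply[5:].split(": ", 1)
        return TOOLS[tool_name](argument)  # the scaffold executes the tool for it
    return reply

print(scaffolded_answer("123456789 * 987654321"))  # exact answer, via the tool

The point of the sketch is that the judgment to use a tool lives in the model, while the competence lives in the tool.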

Part of me thinks of this tool use as “cheating,” and suspects a scaffolded system would still not be a true general intelligence: as we assigned it new and broader tasks, it would inevitably run into new types of problems that humans could solve but that it couldn’t, simply because it lacked the right tool.

But another part of me thinks the human brain might also be nothing more than a scaffolded system, composed of many small, specialized minds that are only narrowly intelligent individually but give rise to general intelligence as an emergent property when working together (Marvin Minsky’s “Society of Mind” describes this). Moreover, we consider the average human to be generally intelligent even though there are clearly mental tasks they can’t do. For example, no amount of study and hard work would let a person of average IQ earn a Ph.D. in particle physics from MIT, meaning they could never solve cutting-edge problems in that field. (This has disturbing implications for how we’ve defined “general intelligence” and implies that humans merely inhabit one point in a “space of all possible intelligent minds.”) So if an entity’s fundamental inability to handle specific cognitive tasks proves it lacks general intelligence, then humans are in trouble. We shouldn’t hold future scaffolded systems to intelligence standards we don’t hold ourselves to.

Moreover, it’s clear that humans spend many of their waking hours on “mental autopilot,” in which they aren’t exercising “general intelligence” to navigate the world. An artificial mind that spent most of its time operating in simpler modes, guided by narrow AI modules, could therefore be just as productive and as “smart” as humans at routine, well-defined tasks.
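To make that concrete, here’s a toy dispatcher in the same vein as the earlier sketch (all the module names and the routing rule are my own invention): routine tasks are claimed by cheap narrow modules, and only novel or ill-defined tasks wake up the expensive “general” mode.

import re

def narrow_arithmetic(task: str):
    """Narrow module: handles only 'a <op> b' arithmetic, nothing else."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", task)
    if not m:
        return None  # outside this module's competence
    a, op, b = int(m[1]), m[2], int(m[3])
    return str({"+": a + b, "-": a - b, "*": a * b}[op])

def narrow_greeting(task: str):
    """Narrow module: canned responses to routine social openers."""
    return "Hello! How can I help?" if task.lower().strip(" !.") in {"hi", "hello"} else None

def general_reasoner(task: str):
    """Stand-in for the slow, expensive, 'generally intelligent' mode."""
    return "[deliberate reasoning about: " + repr(task) + "]"

AUTOPILOT_MODULES = [narrow_arithmetic, narrow_greeting]

def respond(task: str) -> str:
    # Autopilot first: let each narrow module claim the task if it can.
    for module in AUTOPILOT_MODULES:
        answer = module(task)
        if answer is not None:
            return answer
    # Only tasks no narrow module recognizes escalate to general reasoning.
    return general_reasoner(task)

print(respond("2 + 2"))                        # handled on autopilot
print(respond("hello!"))                       # handled on autopilot
print(respond("plan my move to a houseboat"))  # escalates to the general mode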
