I think something I call a “Fake AGI” will be created in the next ten years. By simply improving existing LLMs with more data, marginally better algorithms, and bigger data centers, and then “wrapping” several specialized LLMs and computer programs together into a multi-module unit, it will become possible to build a sort of “Frankenstein” machine that possesses general intelligence. Well…most of the time.
The Fake AGI will still spit out nonsensical responses and suffer from hallucinations on occasion, sharply reminding us humans that its “mind” is fundamentally different from ours and that its intelligence is brittle. Further upgrades by its owners will beat the problem back but never eliminate it entirely, because the machine will be fundamentally incapable of general intelligence. For example, its Turing Test results will gradually improve: it will pass 99% of the time, except for the 1% of cases where it gives a totally nonsensical response that no human would. In time, its results will improve to 99.9%, then 99.99%, and so on…but they will never be perfect.
But no matter how smart the machine gets, no matter how well it mimics human speech and emotion, there will still be occasional mistakes. The strange answers and other random behaviors will be forever cited by critics as proof that the machine is not really an intelligent being. Even people who reject that stance will still admit that there is something alien about how the machine’s mind works that we can never understand.
A “Real AGI” will require a totally different mental architecture and several breakthrough algorithms, and will have vastly simpler and more elegant code. I believe it is still at least 25 years away. However, from the human end user’s perspective, nothing might seem to change on the day the Frankenstein Fake AGI that answers correctly 99.999% of the time is switched off and the first Real AGI is switched on. The entity they communicate with for work or pleasure will still sound the same, and the mistakes will have already become so rare that most people will have wrongly assumed the machine had been “generally intelligent” for years by that point.
Every human being, and probably every life form with a brain, is inherently valuable. This is because our brain structures and past experiences uniquely shape the way we process data. One person’s subjective experience and perception of something is idiosyncratic to them. When they die, that bit of individuality is forever lost. Even the life of someone as lowly as a serial killer is valuable.
Brain scanning devices like brain-computer interfaces (BCIs) will give us unparalleled insights into how human brains and minds work. In the future, once the devices are cheap and common, they could be paired with personal assistant AIs to map the exact mental strengths and weaknesses of each individual, allowing the machines to help them maximize their potential and learn most effectively. The brain data could also be combined with test data, observational data, and genetic information to make highly accurate digital clones of people. The clones could persist even after their “originals” die.