Dr. George Wald shows that having a Nobel Prize doesn’t mean you know everything

Dr. George Wald

This little gem comes from the 1979 Biblical Doomsday “documentary” The Late, Great Planet Earth:

‘I am one of those scientists who finds it hard to see how the human race is to bring itself and bring the human enterprise much past the year 2000.’

That dire prediction was made by famed scientist Dr. George Wald, who was by all accounts a brilliant man who won a Nobel Prize for his work.

The phrase “much past” makes Wald’s dooms-date ambiguous, though I consider it a failed prediction at this point, since we’re 17 years into the new century without civilization collapsing, and without any evidence it’s about to. On the contrary, since Wald’s quoted statement, we’ve managed to add three billion more humans to the planet while also sharply reducing global rates of malnourishment and absolute poverty. Across a wide variety of metrics, the human race has grown larger, healthier, richer, and less violent, and there are no signs the trends will abate anytime soon.

Making accurate future predictions is always fraught with uncertainty, but it becomes especially conjectural when people start making predictions about things outside of their areas of expertise. Wald’s mastery of biochemistry left him with no better a grasp of the human race’s trajectory than an average person, and his inclusion in this religious doomsday documentary is an example of the “Appeal to Authority” logical fallacy, in which a person’s credentials are erroneously substituted for reasoned and fact-based argumentation.

In my recent blog entry about Richard Branson, I pointed out that predictions should not be trusted if the person making them stands to tangibly benefit if other people believe them, and to that I’ll add that predictions should not be trusted if the person making them doesn’t have relevant expertise. Moreover, name-dropping and credential-dropping should never substitute for independently verifiable facts and transparent methodologies.

UPDATE: (8/28/2017) Coincidentally, I just came across the article, “A Nobel Doesn’t Make You an Expert: Lessons in Science and Spin.” The author (a former New York Times science editor) uses the example of James Watson, who won a Nobel Prize for co-discovering the structure of DNA, to show that the opinions and predictions of “experts” are often of little value when they pertain to subjects outside their areas of expertise. In 1998, Dr. Watson erroneously predicted that cancer would be cured within two years. The author also sets forth a few tips for evaluating predictions from “experts,” which partly overlap with my own and which I’ll summarize here:

  1. Ensure that the person’s education and professional credentials are relevant. A useful measure of a scientist’s level of expertise is the quantity and quality of the peer-reviewed papers they have produced.
  2. Be suspicious when experts have conflicts of interest that may bias their opinions and predictions.
  3. Remember that experts whose theories fall far outside the scientific mainstream are usually (but not always) wrong.
  4. Be very suspicious of scientists and other experts who feel aggrieved or persecuted by the mainstream of their professions. If an expert with an outlier theory also believes there is a conspiracy against him or her, it should raise a red flag in your mind.

Links:

https://undark.org/article/cornelia-dean-making-sense-of-science/

A Tale of Two Buying Guides


I got my hands on several years’ worth of Consumer Reports Buying Guides, and thought it would be useful to compare the 2006 and 2016 editions to broadly examine how consumer technologies have changed and stayed the same. Incidentally, the 2006 Guide is interesting in its own right since it provides a snapshot of a moment in time when a number of transitional (and now largely forgotten) consumer technologies were in use.

Comparing the Guides at a gross level, I first note that the total number of product types listed in the respective Tables of Contents declined from 46 to 34 from 2006 to 2016. Some of these deletions were clearly just the results of editorial decisions (ex – mattresses), but some deletions owed to entire technologies going obsolete (ex – PDAs). Here’s a roundup of those in the second category:

  • DVD players (Totally obsolete format, and the audiovisual quality difference among the Blu-Ray player models that succeeded them is so negligible that maybe Consumer Reports realized it wasn’t worth measuring the differences anymore)
  • MP3 players (Arguably, there’s still a niche role for small, cheap, clip-on MP3 players for people to wear while exercising, but that’s it. Smartphones have replaced MP3 players in all other roles. The classic iPod was discontinued in 2013, and the iPod Nano and Shuffle were discontinued last month.)
  • Cell phones (AKA “dumb phones.” The price difference between a cheap smartphone and a 2006-era clamshell phone is so small and the capabilities difference so great that it makes no sense at all to buy the latter.)

    Consumer Reports recommended the Samsung MM-A700 in 2006

  • Cordless phones
  • PDAs (Made obsolete by smartphones and tablets)
  • Scanners (Standalone image and analog film scanners. These were made obsolete by printer-scanner-copier combo machines and by the death of 35mm film cameras.)

Here’s a list of new product types added to the Table of Contents between 2006 and 2016, thanks to advances in technology and not editorial choices:

  • Smartphones
  • Sound bars
  • Streaming Media Players (ex – Roku box)
  • Tablets

As an aside, here are my predictions for new product types that will appear in the 2026 Consumer Reports Buying Guide:

  • 4K Ultra HD players (Note: 8K players will also be commercially available, but they might not be popular enough to warrant a Consumer Reports review)
  • Virtual/Augmented Reality Glasses
  • All-in-one Personal Assistant AI systems (close to the technology shown in the movie Her)
  • Streaming Game Consoles (resurrection of the OnLive concept)–it’s also possible this capability could be standard on future Streaming Media Players
  • Single device (perhaps resembling a mirrorless camera) that merges camcorders, D-SLRs, and larger standalone digital cameras. This would fill the gap between smartphone cameras and professional-level cameras.

It’s also interesting to look at how technology has (not) changed within Consumer Reports product types conserved from 2006 to 2016:

  • Camcorders. Most of the 2006 models still used analog tapes or mini-DVDs, and transferring the recordings to computers or the internet was complicated and required intermediary steps and separate devices. The 2016 models all use flash memory sticks and can seamlessly transfer their footage to computers or internet platforms. The 2016 models appear to be significantly smaller as well.

    For a brief time, there were camcorders that recorded footage onto internal DVD discs.

  • Digital cameras. Standalone digital cameras have gotten vastly better, but also less common thanks to the rise of smartphones with built-in cameras. The only reason to buy a standalone digital camera today is to take high-quality artistic photos, which few people have a real need to do. Coincidentally, I bought my first digital camera in 2006–a mid-priced Canon slightly smaller than a can of soda. Its photos, which I still have on my PC hard drive, still look completely sharp and are no worse than photos from today’s best smartphone cameras. Digital cameras are a type of technology that hit the point of being “good enough for all practical purposes” long ago, and picture quality has experienced very little meaningful improvement since. Improvements have happened to peripheral qualities of cameras, such as weight, size, and photo capacity. At some point, meaningful improvements in those dimensions of performance will top out as well.
  • TV sets. Reading about the profusion of different picture formats and TV designs in 2006 hits home what a transitional time it was for the technology: 480p format, plasma, digital tuners, CRTs, DLPs. Ahhh…brings back memories. Consumers have spoken in the intervening years, however, and 1080p LCD TVs are the standard. Not mentioned in Consumer Reports is the not-entirely-predictable rejection of 3D TVs over the last decade, and a revealed consumer preference for the largest possible TV screen at the lowest possible cost. It turns out people like to keep things simple. I also recall even the best 2006-era digital TVs having problems with motion judder, narrow frontal viewing angles, and problems displaying pure white and black colors (picking a TV model back then meant doing a ton of research and considering several complicated tradeoffs).

    DLP TVs were not as thick or as heavy as older CRT TVs, but that wasn’t saying much. They briefly competed with flatscreen TVs based on LCD and plasma technology before being vanquished.

  • Dishwashers. The Ratings seem to indicate that dishwashers got slightly more energy efficient from 2006-16 (which isn’t surprising considering the DOE raised the energy standards during that period), but that’s it, and the monetized energy savings might be cancelled out by an increase in mean dishwasher prices. The machines haven’t gotten better at cleaning dirty dishes, their cleaning cycles haven’t gotten shorter, and they’re not quieter on average while operating.
  • Clothes washers. Same deal as dishwashers: Slight improvement in energy efficiency, but that’s about it.
  • Clothes dryers. Something strange has happened here. “Drying performance” and “Noise” don’t appear to have improved at all in ten years, but average prices have increased by 30 – 50%. I suspect this cost inflation is driven by induced demand for non-value-added features like digital controls, complex permutations of drying cycles that no one ever uses, and the bizarre consumer fetish for stainless steel appliances. Considering dishwashers, clothes washers, and dryers together, we’re reminded of how slowly technology typically improves when it isn’t subject to Moore’s Law.
  • Autos. This section comprises almost half of the page counts in both books, so I don’t have enough time to even attempt a comparison. That being said, I note that “Electric Cars/Plug-In Hybrids” are listed as a subcategory of vehicles in the 2016 Buying Guide, but not in the 2006 Buying Guide.

Links

  1. https://www.cnet.com/news/why-would-anyone-want-an-mp3-player-today/
  2. http://dishwashers.reviewed.com/features/why-obamas-dishwasher-efficiency-crusade-makes-sense
  3. https://www.cnet.com/news/rip-rear-projection-tv/

Richard Branson still isn’t in space

Billionaire Richard Branson in front of a scale model of his experimental space ship

From December 2013:

Sometime in 2014, entrepreneur Richard Branson and his two children aim to be on the first commercial flight of SpaceShip Two, Virgin Galactic’s rocket for propelling eight people 100 kilometers above the Earth. (SOURCE)

Sadly, SpaceShip Two broke up during a test flight in October 2014, killing one of its pilots. A replacement was constructed, and as of August 2017, it is undergoing sub-orbital test flights, but Branson and his children haven’t used it or any other craft to go into space (in fact, Virgin Galactic has only had three manned spaceflights in its history, all taking place in 2004). Yet hope springs eternal, and there’s a new deadline:

One area Branson has been less keen on speaking out on recently has been his project to take people into space. Virgin Galactic, as the fledgling business is known, has been beset by technical and other difficulties, not least the fatal crash of its SpaceShipTwo in California’s Mojave Desert in October 2014.

Despite the idea proving popular with future travellers – some 500 potential customers have spent $250,000 on reserving their spot on one of its trips – it is perhaps the one business he has found the hardest to get off the ground.

After the crash, Branson said his dream of space travel may have ended. But Galactic, under boss and former NASA chief of staff George Whitesides, has regrouped, redoubled its focus on safety, and appears to be making progress.

…“The test programme is going really well, and as long as we’ve got our brave test pilots pushing it to the limit we think that after whatever it is, 12 years of hard work, we’re nearly there.”

When exactly will he be nearly there? After all, Branson himself – and some of his family – have committed to being on the first flight.

“Well we stopped giving dates,” he confesses. “But I think I’d be very disappointed if we’re not into space with a test flight by the end of the year [2017] and I’m not into space myself next year [2018] and the programme isn’t well underway by the end of next year.” (SOURCE)

This underscores the need to always be skeptical of future predictions, even if they come from people who have been enjoying a lot of recent success and who appear to know what they’re talking about. Skepticism is doubly warranted when the predictions are self-serving and possibly designed to boost interest and investment in the person’s business ventures (i.e. – inflate the stock price of the predictor’s company, of which he is the majority shareholder). On that note, I’m a fan of Elon Musk, but I fear he might be dangerously over-reliant on self-generated hype to keep his portfolio of businesses going. At some point, his investors will lose faith in him without bona fide profits.

Links

https://www.theverge.com/2014/6/21/5830526/spaceshipone-commercial-space-flight-ten-year-anniversary

Why aren’t pharmacies automated?

I had to swing by the local pharmacy last weekend to get a prescription. There was no line, so the trip was mercifully short and efficient. But as usual I couldn’t help but shake my head at the primitive, labor-intensive nature of the operation: Human beings work behind the counter, tediously putting pills into little orange bottles by hand. The pharmacist gets paid $121,500/yr to “supervise” this pill-pouring and to make sure patients aren’t given combinations of pills that can dangerously interact inside their bodies, even though computer programs that automatically detect such contraindications have existed for many years.

We have self-driving cars and stealth fighters, computing devices improve exponentially in many ways each year, and a billion people have been lifted out of poverty in the last 20 years. Pharmacies, on the other hand, don’t seem to have progressed since the 1980s.

For the life of me, I can’t understand the stagnation. Pharmacies seem ideally suited for automation, and I don’t see why they can’t be replaced with large gumball machines and other off-the-shelf technologies. Just envision the back wall of the pharmacy being covered in a grid of gumball machines, each containing a unique type of pill. Whenever the pharmacy received an order for a prescription, the gumball machine containing the right pills would automatically dispense them down a chute and into an empty prescription bottle. The number and type of pills in the bottle would be confirmed using a camera (the FDA requires all pills to have unique shapes, colors, and imprinted markings), a small scale, and some simple AI visual pattern recognition software to crunch the data. This whole process would be automated. Empty pill bottles would be stored in a detachable rotary clip or something (take inspiration from whatever machines Bayer uses to fill thousands of bottles of aspirin per day). Sticky paper labels would be printed as needed and mechanically attached to the pill bottles.
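
The verification step described above can be sketched in a few lines of code. To be clear, everything here is my own illustrative assumption, not a real pharmacy system: the function name, the unit weights, the imprint codes, and the weight tolerance are all invented for the example.

```python
# Hypothetical sketch of the dispense-and-verify step: cross-check the
# camera's pill count against the prescription and the scale reading.
# All drug data below is made up for illustration.

DRUG_DB = {
    # drug name -> (unit weight in grams, imprint code the camera expects)
    "lisinopril_10mg": (0.105, "LUPIN 10"),
    "metformin_500mg": (0.550, "ZC 20"),
}

def verify_dispense(drug: str, count_expected: int,
                    count_seen_by_camera: int, bottle_weight_g: float,
                    tolerance_g: float = 0.05) -> bool:
    """Pass only if the camera count matches the prescription AND the
    total weight agrees with (unit weight x count) within tolerance."""
    unit_weight, _imprint = DRUG_DB[drug]
    if count_seen_by_camera != count_expected:
        return False  # camera disagrees with the prescription
    expected_weight = unit_weight * count_expected
    return abs(bottle_weight_g - expected_weight) <= tolerance_g

# A bottle of 30 of these tablets should weigh about 3.15 g.
print(verify_dispense("lisinopril_10mg", 30, 30, 3.16))  # True
print(verify_dispense("lisinopril_10mg", 30, 30, 3.50))  # False (weight off)
```

The point of using two independent sensors (camera and scale) is that a miscount would have to fool both checks at once to slip through.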

Every morning, a minimum-wage pharmacy technician would receive boxes of fresh pills from the UPS delivery man and then pour the right pills into the matching gumball machines. Everything would be clearly labeled, but to lower the odds of mistakes even further, the gumball machine globes would have internal cameras and weight scales to scan the pills that were inside of them and to verify the human tech hadn’t mixed things up. (And since the gumball machines would continuously monitor their contents, they’d be able to preemptively order new pills before the old ones ran out.) The pharmacy tech would spend the rest of the day handing pill bottles to customers, verifying customer identities by looking at their photo IDs (especially important for sales of narcotics), and swapping out rotary clips of empty pill bottles. If a customer were unwittingly buying a combination of medications that could harmfully interact inside their body, then the pharmacy computer system would flag the purchase and tell the pharmacy technician to deny them one or the other type of pill.
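
A minimal version of that contraindication flagging could be a simple lookup of known harmful drug pairs. The two pairs below are standard textbook examples of dangerous interactions, but the code itself is purely illustrative; a real system would query a maintained interaction database rather than a hand-built table.

```python
# Hypothetical sketch of a contraindication check: flag any of the
# customer's current medications that clash with a newly purchased one.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),         # elevated bleeding risk
    frozenset({"sildenafil", "nitroglycerin"}), # dangerous blood-pressure drop
}

def flag_interactions(current_meds, new_med):
    """Return the list of current medications known to interact badly
    with the new one (empty list means no flag is raised)."""
    return [med for med in current_meds
            if frozenset({med, new_med}) in INTERACTIONS]

print(flag_interactions(["warfarin", "metformin"], "aspirin"))  # ['warfarin']
print(flag_interactions(["metformin"], "aspirin"))              # []
```

Using `frozenset` pairs makes the lookup order-independent, so the check works the same whether the risky drug is the new purchase or the one already in the patient's record.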

I’m kind of content to stop there, as automating just those tasks would be a huge improvement over the current way of business (one human could do the work currently done by three), but here are some more ideas:

  • Confirming a customer’s identity before giving them their pills could also be automated by installing a machine at the front counter that would have a front-facing camera and a slot for inserting a photo ID card. The machine would work like the U.S. Customs’ Automated Passport Control machines, and it would use facial recognition algorithms to compare the customer’s face with the face shot on their photo ID. I’ve used the APC machines during overseas trips and never had a problem.
  • The act of physically handing prescription bottles to customers could also be automated with glorified vending machine technology, or a conveyor belt, or a robot grabber arm.
  • Eighty percent of pharmacy customers are repeat buyers who are already in the computer system and are just picking up a fresh bottle of pills because the old bottle was exhausted. There’s no need for small talk, questions, or verbal information from the pharmacist about this prescription they’ve been taking for months or years. That being true, the level of automation I’ve described would leave pharmacists with a lot of time to twiddle their thumbs during the intervals between the other 20% of customers who need special help (e.g. – first-time customers not yet in the patient database, or people with questions about medications or side effects). Having a pharmacist inside every pharmacy would no longer be financially justified, and instead each pharmacy could install telepresence kiosks (i.e. – a station with a TV, sound speakers, a front-facing camera, and a microphone) through which customers could talk to pharmacists at remote locations. With this technology, one pharmacist could manage multiple pharmacies and keep themselves busy.
An Automated Passport Control machine in use

As far as I can tell, the only recent advances in the pharmacy/pill selling business model have been 1) the sale of prescriptions through the mail and 2) the ability to order refills via phone or Internet. If you choose to physically go into a pharmacy, the experience is the same as it was when I was a kid.

Is there a good reason it has to be the way it is now? I suspect the current business model persists thanks to:

  1. Political lobbying from pharmacists who want to protect their own jobs and salaries from automation (see “The Logic of Collective Action”).
  2. Unfounded fears among laypeople and politicians that automated pharmacies would make mistakes and kill granny by giving her the wrong pills. The best counterargument is to point out that pharmacies staffed by humans also routinely make those same errors. Pharmacists will also probably chime in here to make some vague claim that it’s safer for them to interact with customers than to just have a pharmacy tech or robot arm hand them the pills at the counter.
  3. Fears that automated pharmacies will provide worse customer service. Again, 80% of the time, there’s no need for human interaction since the customer is just refilling a prescription they’ve been using for a long time, so “customer service” doesn’t enter into the equation. It’s entirely plausible that a pharmacist could satisfy the remaining 20% of customer needs through telepresence just as well as he or she would on-site.
  4. High up-front costs of pharmacy machines. OK, I have no experience building pharmacy robots, but my own observations about the state of technology (including simple tech like gumball machines) convince me that there’s no reason these machines should be more expensive than paying for human labor. Even if we assume that each gumball machine costs an exorbitant $1,000, you could still buy 121 of them for the same amount a typical pharmacist would make in a year, and each gumball machine would last for years before breaking. It’s possible that pharmacy machines are unaffordable right now thanks to patents, which is a problem time will soon solve.
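
The back-of-the-envelope math in point 4 works out as follows. The salary figure and the deliberately high $1,000-per-machine price are from the argument above; the five-year machine lifespan and the 200-machine pharmacy are additional assumptions of mine.

```python
# Back-of-the-envelope cost comparison: dispensing machines vs. one
# pharmacist's salary. All figures are rough assumptions, not quotes.

pharmacist_salary = 121_500   # USD per year (figure used in the post)
machine_cost = 1_000          # USD per gumball-style dispenser (assumed high)

# How many machines one year of pharmacist salary buys outright:
machines_per_salary = pharmacist_salary // machine_cost
print(machines_per_salary)    # 121

# Even amortized over an assumed 5-year lifespan, a wall of 200
# dispensers costs far less per year than one pharmacist:
annual_machine_cost = 200 * machine_cost / 5
print(annual_machine_cost)    # 40000.0
```

Even if the per-machine price were several times higher, the amortized hardware cost would still undercut the salary, which is why the up-front-cost objection seems weak.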

Well, at the end of my pharmacy visit, I decided to just ask the pharmacist about this. She said she knew of pharmacy robots, and thought they were put to good use in hospital pharmacies and military bases, but they weren’t suited to small retail pharmacies like hers because they were too expensive and took up too much physical space. I would have talked to her longer, but there was a long line of impatient people behind me waiting to be handed their pill bottles.

Review: “Killzone” (the PS2 game)

[Below is a review of the video game “Killzone,” which I wrote while in college, over ten years ago. While I admit it’s a little silly to hold a video game to such scrutiny, my conclusions are still valid, and this piece is significant because it was my first attempt to put part of my own future vision in writing, even if it is a critique of someone else’s vision.

This repost will be the first in a recurring series of film and video game “Reviews” that I’ll be doing to assess the feasibility of whatever futuristic elements they depict. 

I’ve edited this Killzone review a little for clarity and brevity. ]

A couple days ago I finally finished the game “Killzone” for PS2, and I have some thoughts about it. First, a bit of background: “Killzone” takes place at some unspecified point in the distant future when mankind has mastered interstellar space travel and colonized two new planets, Vekta and Helghan. Vekta looks identical to Earth, while Helghan is barren and polluted.

Over the generations, the humans of Helghan–known as the Helghast–were genetically mutated by their harsh environment to the point of being barely-human freaks. The Helghast are also warlike and have a tradition of military leadership. At the start of the game, the cool Intro video shows the Helghast army invading Vekta by surprise. While the motivations for this aren’t clearly stated, after reading the “Killzone” booklet I believe it was probably done to obtain resources that Helghan lacks.

This is where you, the player, come in. You play a soldier named “Templar,” serving in Vekta’s ground forces (called the “ISA”). As the game progresses, three other characters join your team: Luger is the woman, Rico is the heavy weapons guy and Hakha is the Helghast/human “hybrid.” Among them, Templar is the natural leader and all-around balanced fighter while the other three have specific combat specialties. By the midpoint of the game, you have the option of playing as any character you wish at each level. I thought this was a pretty cool touch because each character has unique abilities and weapons that make the levels a different experience depending on whom you choose. Anyway, you blow away a bunch of Helghast and save the planet–from the first invasion wave.

Along with the selectable player option, I also liked how “Killzone” was neither too short (“Max Payne 2”) nor too long (“Halo 2”). However, there were some areas needing serious improvement. The gameplay could be awkward: You can’t jump, period, making it impossible for your big, soldier self to clear small obstacles like a Jersey Wall; grenades are almost impossible to aim and take about 10 seconds to throw and detonate; climbing ladders is an ordeal; and aiming the sniper rifle gives new definition to the word “tedious.” While the A.I. is an O.K. challenge, the enemies aren’t varied enough and there are only like three different types of Helghast soldiers. Your fellow A.I. squad mates are of inconsistent help during gameplay. The game’s story was also pretty boring. Overall, “Killzone” is playable but falls short of what it could have been.

I also noticed some crude demographic stereotypes in the game. On your team, for instance, the leader is Templar: the handsome younger white guy. Luger, being a woman, is weaker in terms of health and physical strength and has to rely on her sniper pistol and sneaking skills as she runs around in her skin tight black jumpsuit killing bad guys. Rico, being the only “colored” person on the team (he looks Latino), is big, tough, dumb, vulgar, and slow, and fittingly starts each mission with a big machinegun/rocket launcher while his teammates have smaller, more precise weapons. Hakha’s bald head and pale skin cast him as the stereotypical older white man, and he predictably uses received pronunciation, quotes passages from literature to the rest of the team, and knows the most about computer and electronics systems.

“Killzone” also presents an extremely incongruous vision of the future. Let’s begin: We are told at the beginning of the game that humans have inhabited Helghan and Vekta for several generations, which I’ll very conservatively assume means “50 years.” Thus, 50 years before the start of “Killzone,” mankind had already 1) mastered faster-than-light space travel and 2) built spacecraft cheaply enough to allow mass numbers of people to be transported to Vekta and Helghan. The requisite scientific breakthroughs for these two technological advancements will almost certainly not arrive before the middle of the 21st century, and in fact may prove totally elusive. Considering the facts and estimates in this paragraph, we are left to conclude that “Killzone,” at the very earliest, takes place 100 years in the future–2106 A.D.

Problematically, the world of “Killzone” ignores all of the other scientific breakthroughs and new technologies that will also be made by 2106. For instance, all of the weapons used in the game are simply 20th-century firearms, but with cool-looking exteriors that make them look advanced when in fact they’re not. By 100 years from now, small arms will certainly be much more advanced. I wouldn’t be surprised if directed energy weapons or EMP-powered railguns had totally superseded firearms. I also expect small arms to come with built-in sensors, computers and actuators that allow the guns to sense which target their shooter wants to hit, and to automatically aim themselves at it. All you would have to do is aim at someone’s body, pull the trigger, and the gun would make sure the bullet went directly through the person’s brain or heart. Not just that, but through the part of the organ that causes the most damage and the most immediate incapacitation. The gun’s computer would also automatically shuffle between different types of ammunition to inflict maximum damage on the target and could also automatically adjust the velocity of the projectile. As a result, the small arms of 2106 will require almost no training to be used effectively. And if they incorporated nanotechnology, future guns might be able to make their own bullets and conduct self-repairs and maintenance, meaning the weapons would be self-cleaning and would last almost forever.

But the more fundamental problem with “Killzone” is that humans will be obsolete on the battlefield by 2106. Think about it. Even the most hardcore, well-armed, futuristic supersoldier still needs hours a day to eat, sleep and take care of other personal needs. He or she still feels pain, questions orders, makes mistakes, and is subject to irrational and unpredictable emotions. A machine, on the other hand, would suffer from none of these faults. Machines are also expendable whereas humans are not, meaning that it would be easier politically to wage a war if a nation’s casualties were solely machines. A human still needs at least 16 years of growth and development to be physically and mentally able to handle the demands of combat, followed by months or even years of specialized military training. A combat machine could be built in an afternoon and then programmed with its military training in a few minutes. Clearly the future of warfare belongs to machines. By 2106, fighting machines will make war a cruelly unfair environment for human beings, where only the most desperate or foolhardy members of our species will dare set foot. Without direct human participation, the battlefield will become totally devoid of all the camaraderie, honor and bravery that stand today as the few positive attributes of war, and warfare will complete its evolution towards becoming a totally cold and anonymous endeavor.

A Predator drone aircraft in flight. The Predator is a remotely controlled aircraft that first entered service with the U.S. Air Force in 1995 as a reconnaissance (spy) plane. In 2001, it was armed with Hellfire anti-tank missiles and was successfully used against Taliban forces in Afghanistan. It remains in use. A Predator drone costs only $3.5-4.5 million to manufacture. Compare that to an F-16 C/D, which costs almost $20 million.

It probably looks petty for me to spend so much effort lambasting “Killzone” because it’s just a video game. That is certainly true, but the fact remains that games like “Killzone” embody and reinforce the ill-informed visions of the future held by most people, and I believe that critiquing the game is the most immediate way I can help people examine their own ideas. I think few people realize how unrealistically our future is portrayed in popular culture. Things like “Star Trek,” “Star Wars” and “Halo 1 & 2” have created the preposterous misconception that the universe is filled with humanoid, alien intelligent life forms that are all within ±50 years of our own level of technology. Considering 1) the age of the Universe (13.5 billion years), 2) the fact that, thanks to the Big Bang, the planets at the center of the Universe are oldest and those at its fringes youngest, 3) the fact that 3.5 billion years separated the appearance of the first primitive bacteria from the evolution of intelligent life on Earth, and 4) the chance that cosmic events have seriously altered the pace of Earthly evolution, we can conclude that the Universe is certainly populated with intelligent species of vastly different levels of technology.

To have human space explorers discover an intelligent alien species close to our level of technology is akin to randomly picking a name out of a three-inch-thick phone directory and finding out that that person shares your exact year, month, day, hour, minute, and second of birth. It is overwhelmingly likely that you will instead randomly pick someone who is different from you, and similarly, it is overwhelmingly likely that any alien civilizations we encounter will be vastly older or younger than we are, and thus either vastly stronger or weaker. So this recurring sci-fi trope where humans fight future space wars with aliens is ludicrous: any war with an alien species is certain to be very lopsided in favor of one side, and hence very short. This is actually where “Killzone” gets a bit of credit, since its plot has humans from different planets fighting one another. Sadly, I can see that as realistic even in 2106.
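
To put a rough number on the phone-book analogy, assume everyone in the directory was born at some point in a 100-year span and that birth times are spread uniformly over it (a simplification, since real birth times aren't uniform, but good enough for an order-of-magnitude estimate):

```python
# Rough arithmetic behind the phone-book analogy: the chance of a random
# stranger sharing your exact birth-second, assuming uniform birth times
# over a 100-year span.

seconds_per_year = 365.25 * 24 * 3600        # 31,557,600
span_seconds = 100 * seconds_per_year        # possible birth-seconds
p_exact_match = 1 / span_seconds

print(f"{span_seconds:.3e}")   # ~3.156e+09 distinct birth-seconds
print(f"{p_exact_match:.1e}")  # ~3.2e-10 chance of an exact match
```

A one-in-three-billion coincidence is a fair stand-in for the claim: two independently arising civilizations landing within a few decades of each other on a billions-of-years timeline is about as likely.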

I also take issue with “Killzone” and most other sci-fi portraying the racial makeup of our descendants as being essentially the same as it is in contemporary America: The majority are white people, with smaller, roughly equally sized minorities of blacks, Asians and Hispanics. NO. Eighty percent of the current world population is nonwhite, and in the future, once Third World areas have closed the economic and technology gap with the West, we will see the world’s true racial character more vividly in everyday life. Multiracial people will also be much more common.

Another demographic shift very rarely portrayed in future sci-fi is the graying of the population. Average human lifespans have been increasing steadily for more than 100 years, and there is no reason to expect this trend to abate. By 2106, expect average people to be living to 120, if not indefinitely. Moreover, they will stay active much longer thanks to better medical technologies. The means to slow, halt and reverse the effects of aging will probably be achieved. “Killzone,” like most other sci-fi depictions of the future, fails to recognize the societal implications of these new technologies. Older people will look and feel DECADES younger than they are chronologically.

Will future technology make the sexes equal?

Much is made in the media about the prevalence of sexism and sex-based inequality in the world. In the long run, won’t technology close whatever gaps there are and solve these problems? Consider:

  • Job automation will eliminate the gender pay gap. Today, men make more money than women in almost every type of occupation. However, if machines end up taking over 100% of all gainful jobs (“gainful” = someone else is willing to pay for the product of your labor; so volunteer jobs are excluded), then all humans will be earning $0 and there will be no gender pay gap.
  • Job automation will also eliminate the gender labor force participation rate gap and the gender unemployment rate gap. Today, men are overall more likely to work outside the home, but they’re also more likely to be unemployed. Again, since no humans will have gainful jobs in the future, this disparity will vanish.
  • Robot labor will eliminate the gender household chore gap. Tasks like washing laundry and cooking generally fall harder on the women of a household, but if each house has a robot servant, none of the humans will have to do anything, and this gap also disappears. Robot servants could also act as babysitters, freeing up time for parents (though I’d imagine parents would have little real need for the extra time, since they wouldn’t have jobs anymore); this would benefit women more, since they also shoulder a heavier share of child care than men do.
  • If there is any educational gap* in the future, it will become much less significant because level of education will no longer translate into a job or salary. High school graduates will earn the same as particle physics Ph.D.s: nothing. And without boring but high-paying STEM jobs to look forward to, more men might pursue degrees in the humanities, which today are dominated by women. (*Arguably, the educational gap has already closed in the U.S., and women are, by some metrics, the more educated sex.)
  • The gender wealth gap would also fade away over time, thanks to estates being divided among multiple heirs who couldn’t expand their fortunes because the whole economy would be controlled by machines. For example, let’s say Gramps builds up a net worth of $1 million and then dies in the year 2100, the same year the human unemployment rate finally hits 100% and machines have taken over all gainful jobs. Gramps’ fortune is divided equally among his wife (Granny) and three adult children (Alan, Belinda, and Chuck). None of them have jobs, so the money is a (temporary) godsend. Granny uses her $250k for medical and nursing home expenses, and all her doctors and aides are machines, so the money effectively flows out of human control. She spends a little each year until she dies with close to $0 left. Alan wastes his $250k within three years on frivolous stuff like fancy restaurant meals and gambling, and again, all the workers at the establishments he patronizes are machines. Belinda uses her money to buy a more expensive house, which effectively “locks in” her $250k as home equity and a permanent net worth increase, but she finds it impossible to get any richer than that, since neither she nor her husband can get paying jobs. Machines do everything important, so all the couple can do is collect welfare, spend time with their kids, pursue hobbies, and take free online courses (taught either by machines or by human volunteers). Belinda and her husband eventually spend most of their money on medical bills, and when they die, the amount they pass to their kids is much less than what they received from Gramps. Chuck uses his $250k to finally indulge his lifelong dream of starting a bar/restaurant. He has a perfect business plan, a menu crafted by professional chefs, a prime location, very tasteful decor, and top-of-the-line robot chefs and waiters. Food critics give “Chuck’s Bar” rave reviews, as do average patrons on platforms like Yelp.
However, Chuck has a problem: One block away, there’s an identical bar/restaurant, but instead of being owned by a human, it’s owned by an intelligent robot named “RoboChuck.” Because RoboChuck is a machine, he isn’t materialistic, doesn’t need to sleep or take breaks, is content using a small closet in his restaurant as a residence instead of buying a house, and is fine working 24/7 for a $10,000 yearly salary. As a result, “RoboChuck’s” has lower overhead costs and can sell the exact same food and drinks as “Chuck’s” at lower prices. No matter how hard Chuck works, he can’t make up for his inherent inferiority to RoboChuck, nor can he find a way to offset the extra costs that he personally inflicts on his business. Because of the price difference, customers gradually drift away to RoboChuck’s. In spite of his talent, dedication, and seemingly perfect business plan, Chuck’s bar/restaurant goes bankrupt after a few years, and he loses all $250k of his inheritance with it. His landlord and all of his creditors are machines. After that, Chuck bitterly grasps the reality of the new economy and takes up watercolor painting in his government-provided apartment. In an economy where machines do all the real work, human net worth invariably goes to $0 unless backstopped by some government-mandated wealth redistribution (i.e., machine earnings are taxed and given to humans as welfare payments). Net worth inequalities between human males and females would disappear.
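The arithmetic behind the Gramps story can be sketched in a few lines (the flat spending rate is my invented assumption; the point is only that balances drain monotonically to zero when no wages flow back in):

```python
# Sketch of the Gramps scenario: a $1M estate split among four heirs who
# can spend but, with no gainful jobs, can never replenish their balances.
ESTATE = 1_000_000
heirs = {name: ESTATE / 4 for name in ("Granny", "Alan", "Belinda", "Chuck")}

ANNUAL_SPEND = 25_000  # assumed flat spending rate, paid to machine-run firms
years = 0
while any(balance > 0 for balance in heirs.values()):
    for name in heirs:
        heirs[name] = max(0, heirs[name] - ANNUAL_SPEND)
    years += 1

print(years)  # each $250k share is exhausted after 10 years
```

Varying the spending rates per heir changes only how fast each balance reaches zero, not the destination.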

Random idea: “Smart Venetian Blinds”

A typical Venetian blind

My idea: Solar/battery powered, self-adjusting Venetian blinds

  • The slats would have paper-thin, light-colored, flexible solar panels on the sides facing the outside of the house. They wouldn’t need to be efficient at converting sunlight to electricity. The sides facing the inside of the house would be white.
  • The headrail would contain a tiny electric motor that could slowly open or close the blinds; a replaceable battery; a simple photosensor; a thermometer; and a small computer with WiFi.
  • The solar panels on the outward-facing sides of the slats would harvest direct and ambient sunlight to recharge the battery.
  • The computer would be networked with other sensors in the house, and would know 1) when humans were inside the house, 2) when the heating and cooling systems were active, and 3) what the temperature was outside the house (this could be determined by checking internet weather sites).
  • Based on all of those data, the Venetian blinds would automatically open or close themselves to attenuate the amount of sunlight shining through the windows. Since sunlight heats up objects, controlling the sunlight would also control the internal house temperature.
  • During hot summer days, the blinds would completely close to block sunlight from entering the house, keeping it cooler inside. During cold winter days, the blinds would open.
  • If the blinds were trying to maximize the amount of sunlight entering a house, they could continuously adjust the angling of the slats over the course of a single day to match the Sun’s changing position in the sky.
  • The photosensors and thermometers in each “Smart Venetian Blind” could also help identify window leaks and windows that were accidentally left open.
  • The blinds could also be used for home security if programmed to completely close each night, preventing potential burglars from looking inside the house. The homeowner could use a smartphone app to control all the blinds and set this as a default preference. Sudden changes in temperature at a particular window during periods where no one was in the house could also be registered as possible break-ins.
  • Humans could, at any time, manually adjust the Venetian blinds by pulling on the cord connected to the headrail. The computer would look for patterns in this behavior to determine if any user preferences existed, and if so, the blinds would try to incorporate them into the standard open/close daily routine.
  • The Smart Venetian Blinds could function in a standalone manner, but ideally, they would be installed in houses that had other “Smart” features. All of the devices would share data and work together for maximum efficiency.
  • Every month, the homeowner would get a short, simple email that estimated how much money the blinds had saved them in heating and cooling costs. Data on the blinds’ lifetime ROI would also be provided.
Smart Venetian Blinds with vertical slats could be installed over large windows and glass doors.
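As a sketch, the open/close rules in the bullets above might reduce to a decision function like this (the inputs, thresholds, and names are my assumptions, not a real product’s API):

```python
def blind_position(occupied: bool, hvac_mode: str, outdoor_temp_c: float,
                   indoor_temp_c: float, daylight: bool) -> str:
    """Hypothetical decision rule for the self-adjusting blinds sketched above."""
    if not daylight:
        return "closed"  # nighttime: close for privacy and security
    if hvac_mode == "cooling" or outdoor_temp_c > indoor_temp_c:
        return "closed"  # hot day: block solar gain to ease the AC's load
    if hvac_mode == "heating" or outdoor_temp_c < indoor_temp_c:
        return "open"    # cold day: admit free solar heat
    return "open" if occupied else "closed"  # neutral: favor light for occupants

print(blind_position(True, "cooling", 35.0, 24.0, True))   # closed
print(blind_position(False, "heating", -5.0, 20.0, True))  # open
```

A real controller would also smooth these decisions over time (hysteresis), so the blinds don’t chatter open and closed as the temperature hovers around a threshold.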

UPDATE (6/28/2018): A company called “SolarGaps” beat me to it! Looks like they’ve been in business since early 2017.
https://youtu.be/whrroUUWCYo

Have machines created an “employment crisis”?

Dovetailing off of yesterday’s blog entry (“Teaching more people to code isn’t a good jobs strategy”), I’d like to examine an assumption implicit in the first passage I quoted:

‘[Although] I certainly believe that any member of our highly digital society should be familiar with how these [software] platforms work, universal code literacy won’t solve our employment crisis any more than the universal ability to read and write would result in a full-employment economy of book publishing.’

It’s a little unclear what “employment crisis” the author is talking about since the U.S. unemployment rate is a very healthy 4.4%, but it probably refers to three things scattered throughout the article:

  1. Skills obsolescence among older workers. As people age, the skills they learned in college and early in their careers become less useful because technologies and processes change, and people fail to adapt. Accordingly, their value as employees declines, along with their pay and job security. This phenomenon is nothing new: in prehistoric times, the same “career arc” existed, with people becoming progressively less useful as hunters and parents upon reaching middle age. Older workers faced the same problems in more recent historical eras, when work meant farming and then factory labor. That being the case, does it make sense to describe today’s skills obsolescence as a “crisis”? “Just the way things are” is more fitting.
  2. Stagnation of real median wages in the U.S. Adjusted for inflation, the median American household wage has barely increased since the 1970s. First, this isn’t, in the strictest sense of the word, an “employment crisis,” since it relates to wages and not the availability of employment. “Pay crisis” might be a better term. Second, much of the stagnation in median pay evaporates once you consider that the average American household has steadily shrunk since the 1970s: single-parent households have become more common, and such families have only one breadwinner. Knowing whether someone is talking about median wages per worker or per household is crucial. Third, this only counts as a crisis if you ignore the fact that many things have gotten cheaper and/or better since the 1970s (cars, personal electronics, many forms of entertainment, and housing outside of a few expensive cities), so the same salary supports a higher standard of living now. Most of that owes to technological improvement.

    Note the data stop in 2012, when the U.S. economy was still recovering from the Great Recession

  3. Automation of human jobs. Towards the end of the article, it becomes clear this is what the author is really thinking about. He cites research done by academics Erik Brynjolfsson and Andrew McAfee as proof that machines have been hollowing out the middle class and reducing incomes and the number of jobs. I didn’t look at the source material, but the article says they made those comments in 2013, which means their analysis was probably based on economic data that stopped in 2012, in the miserable hangover of the Great Recession, when people were openly questioning whether the economy would ever get back on its feet. I remember it well; specifically, I remember futurists citing Brynjolfsson and McAfee’s research as proof that the job-automation inflection point had been reached during the Great Recession, explaining why the unemployment rate was staying stubbornly high and would never go down again. Well, they were wrong, as today’s healthy unemployment numbers and rising real wages demonstrate. So if the article’s author thinks that job automation is causing “our employment crisis,” then he has failed to present proof that the latter exists at all.
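Point 2’s household-size effect is easy to demonstrate with invented numbers: per-worker pay can rise substantially while the median household figure falls, purely because households now contain fewer earners.

```python
from statistics import median

# Invented illustration: per-worker pay rises 25% ($30k -> $37.5k), but half
# of the two-earner households are replaced by single-earner households.
households_1970s = [30_000 + 30_000] * 4               # all two-earner
households_now = [37_500 + 37_500] * 2 + [37_500] * 2  # half one-earner

print(median(households_1970s))  # 60000.0
print(median(households_now))    # 56250.0 -- "stagnation" despite 25% raises
```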

For the record, I do believe that machines will someday put the vast majority of humans (perhaps 100% of us) out of gainful work. When they finally do, we will have an “employment crisis.” However, I have yet to see proof that machines have started destroying jobs faster than new ones are created, so any talk of an automation-driven “employment crisis” belongs in the future tense (where the author doesn’t put it). Right now, “our employment crisis,” like so many other “crises” reported in the media, simply doesn’t exist.

Links

  1. https://www.fastcompany.com/3058251/why-learning-to-code-wont-save-your-job
  2. https://fivethirtyeight.com/features/the-american-middle-class-hasnt-gotten-a-raise-in-15-years/

Teaching more people to code isn’t a good jobs strategy

Amongst the sea of spilled ink about America’s purported “STEM shortage,” and the policy proposals to address this disaster by training preschoolers to code, this article stands out as one of the best counterpoints I’ve seen:

‘[Although] I certainly believe that any member of our highly digital society should be familiar with how these [software] platforms work, universal code literacy won’t solve our employment crisis any more than the universal ability to read and write would result in a full-employment economy of book publishing.’ (SOURCE)

OUCH! The article goes on to describe how lower-skilled computer programming jobs are being outsourced to India, leaving a pool of higher-skilled jobs here in the U.S., competition for which will get more cutthroat as time passes.

I’ll add the following:

  1. Computer coding is a dry, difficult job that few people are suited for. It requires tremendous patience, good math skills, and a willingness to work brutal hours to get promoted, and it provides few (if any) opportunities for self-expression or for emotionally connecting with clients. You sit in a cubicle looking at numbers and letters on a screen, tediously typing away and testing your program over and over to work out the kinks. The notion that America can expand its white-collar workforce by incentivizing more people to become computer programmers rests on the flawed assumption that human beings are perfectly interchangeable widgets, lacking the innate strengths, weaknesses and preferences that together limit their job options. The vast majority of people just aren’t cut out to spend eight hours a day poring over computer code.
  2. Wages will decrease if labor supply increases. As with any other profession, computer programmer salaries are determined by supply and demand. If the STEM Shortage Chicken Littles get their way and the number of American computer programmers sharply increases, then median wages will decrease unless there’s an equivalent rise in demand for their services. Lower pay will make an already dull and difficult job not worth it for many coders, and people will start fleeing for other jobs, counterbalancing the inflow of new coders.
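Point 2 can be sketched with a toy linear supply-and-demand model (all numbers are invented): shifting the supply curve outward while demand stays put lowers the market-clearing wage.

```python
def equilibrium_wage(d_intercept: float, d_slope: float,
                     s_intercept: float, s_slope: float) -> float:
    # Demand: jobs_demanded   = d_intercept - d_slope * wage
    # Supply: coders_supplied = s_intercept + s_slope * wage
    # Setting the two equal and solving for wage:
    return (d_intercept - s_intercept) / (d_slope + s_slope)

baseline = equilibrium_wage(2_000_000, 10, 200_000, 8)
# A "teach everyone to code" push shifts supply outward at every wage level:
after_push = equilibrium_wage(2_000_000, 10, 560_000, 8)
print(baseline, after_push)  # 100000.0 80000.0
```

The counterbalancing outflow described above is this model’s supply curve responding: at the lower wage, fewer people are willing to keep coding.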

If we do think that there’s a shortage of computer programmers in America, then there’s a fair case to be made that the best way to fix it is to focus on retaining existing talent rather than trying to attract new entrants to the field. Complaints about age discrimination against older workers, low pay, and overly demanding work schedules seem pervasive if the news articles out of Silicon Valley are to be believed, and are supported by high rates of turnover in computer programming companies.

Links

  1. https://qz.com/987170/coding-is-not-fun-its-technically-and-ethically-complex/
  2. http://blogs.harvard.edu/philg/2017/07/09/teaching-young-americans-to-be-code-monkeys/
  3. https://www.fastcompany.com/3058251/why-learning-to-code-wont-save-your-job
  4. https://techcrunch.com/2013/05/05/there-is-in-fact-a-tech-talent-shortage-and-there-always-will-be/
  5. http://www.npr.org/sections/ed/2015/09/18/441122285/learning-to-code-in-preschool
  6. https://www.bls.gov/ooh/computer-and-information-technology/computer-programmers.htm#tab-8

No carrier upgrade for you!

Yet another Russian military BIG PLAN that was announced with trumpets has died quietly.
The “Admiral Kuznetsov”

Russia’s single, outdated, and ailing aircraft carrier, the Admiral Kuznetsov, will spend the next 2-3 years just getting repaired, presumably to fix wear and tear incurred during its recent deployment off Syria, and probably also to clear a backlog of known problems that existed long before the ship even left port. Russia’s plans to use the downtime to also upgrade the carrier have been canceled for lack of money.

The Admiral Kuznetsov had a less-than-distinguished performance in 2016 operating in the eastern Mediterranean against ISIS: Two of the carrier’s fighter planes crashed while trying to land on it. After the first accident, most of the ship’s aircraft transferred to Syrian government ground bases and operated from there.

For comparison, China now has two aircraft carriers, one of which is about equal to the one the Russians have, and the other of which is better. The Chinese will start building a third in a few years.

The U.S. has 11 supercarriers, which individually are several times better than any of the carriers China or Russia has. The U.S. also has eight smaller carriers called “Amphibious Assault Ships.”

Russia is possibly the world’s worst offender when it comes to making overly ambitious predictions about future improvements to its military, technology, economy, or infrastructure (and, unsurprisingly, about negative things that will happen to its competitors like the United States). I think this owes partly to a unique cultural habit of lying (the “vranyo”), which is accepted and readily seen through by Russians but misunderstood by foreigners.

Links

  1. http://nationalinterest.org/blog/the-buzz/russias-only-aircraft-carrier-has-big-problem-21535
  2. https://www.upi.com/Top_News/World-News/2016/12/05/Second-Russian-fighter-jet-crashes-attempting-aircraft-carrier-landing/4261480940373/
  3. http://www.janes.com/article/65775/russian-carrier-jets-flying-from-syria-not-kuznetsov
  4. http://www.nytimes.com/2011/10/23/magazine/from-russia-with-lies.html