AI is coming: How to deal with this new type of intelligence
Sure, the Humanichs on CBS’ Extant are staving off an attack by alien spores turned human hybrids. But I have my doubts about how that is going to work out. Intelligent robots were plenty helpful before they turned on us and the Fresh Prince of Bel-Air saved us – barely – from domination. Ava of Ex Machina ran amok. (But really. In her place, who wouldn’t?)
HAL was a pain; Skynet nothing but trouble. And the synths on AMC’s Humans? Don’t get me started. (Oh, the humanity.)
Back in the real world, a four-legged robot is opening a door in an engineering lab and a two-legged robot named Atlas is jogging – seriously, jogging – in the woods.
Now, I don’t want to be an alarmist…but soon AI will be poring over our vital signs. Yours. Mine. Pretty much those of anyone who lets it. The sensors are in mass production. Wearables from Fitbit, Garmin, and Jawbone measure heart rate, even blood pressure. Google is developing a contact lens that measures blood sugar.
IBM’s Watson is on track to crunch the big data from these and other life sign monitors. Truth be told, I’m not that worried. I’ve been through this before.
Thirty-six years ago I had my first encounter with artificial intelligence. I was visiting Stanford University, home of SUMEX-AIM (Stanford University Medical EXperimental computer for Artificial Intelligence in Medicine), rubbing elbows with the AI elite – Joshua Lederberg, Edward Feigenbaum, Edward Shortliffe.
Back then we believed the singularity was just around the corner. Of course, we didn’t call it that. It was just AI, the logical extension of computing. But, as it turned out, AI was – and is – a lot more than that in ways we are only beginning to understand. Here’s a new one, well, relatively new.
AI’s success is going to take more than digital tinkering. And keeping it benign certainly is going to take more than all of humanity locking arms against it and singing Kumbaya. The future of AI will be determined to a large extent by our ability to nurture a positive working relationship with this new type of intelligence. And that won’t be easy.
Flesh-and-blood doctors don’t much like computerized diagnosticians. I learned that early on, writing about SUMEX-AIM. That fact also did not escape the early developers of computer-aided medical technologies. When these entered the medical mainstream shortly after the turn of the 21st century, their developers spun them so they’d be palatable to people. They turned computer-aided diagnosis into computer-aided detection. Same acronym. Hugely different meaning.
CAD’s big break came as an adjunct to digital mammography. It was to this branch of women’s health what the spellchecker is to writing. From the outset, CAD software was highly sensitive, but notoriously nonspecific. It would identify just about every possible lesion in an image. This was very annoying to the mammographer, who had to go back and essentially re-interpret the image. Yet, mammographers embraced CAD as an aid.
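To put rough numbers on that tradeoff – purely hypothetical ones, not drawn from any actual CAD study – here is a short Python sketch of what high sensitivity and low specificity mean in practice:

# Hypothetical illustration (invented numbers, not from any real CAD product):
# why a highly sensitive but nonspecific detector buries the reader in flags.

def sensitivity(true_pos, false_neg):
    """Fraction of actual lesions the detector flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of lesion-free regions the detector correctly ignores."""
    return true_neg / (true_neg + false_pos)

# Suppose the software reviews 1,000 suspicious regions: 10 real lesions,
# 990 benign. Tuned for sensitivity, it might catch 9 of the 10 lesions
# while also flagging 200 benign regions.
tp, fn = 9, 1
fp, tn = 200, 790

print(f"Sensitivity: {sensitivity(tp, fn):.2f}")        # 0.90 -- almost nothing missed
print(f"Specificity: {specificity(tn, fp):.2f}")        # 0.80 -- but 200 false marks
print(f"Flags per real lesion: {(tp + fp) / tp:.1f}")   # ~23 flags to re-check per hit

At rates like those, the software misses almost nothing, but the radiologist wades through roughly two dozen flags for every real finding – which is exactly why early CAD felt like doing a second reading.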
To be sure, CAD has gotten better. But it still has a ways to go. One of the limiters may be the lack of something only people can provide – trust. “Suboptimal performance of the human–automation team is often caused by an inappropriate level of trust in the automation,” opines one researcher who is looking into ways to make CAD more effective. “[Physicians] sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD.”
Given what Watson might be able to achieve through IBM’s proposed acquisition of Merge Healthcare, medicine might be in for a big boost. But it’s only going to happen if people understand what machines can – and cannot – do.
Trust and teamwork sound like strange goals when talking about the relationship between people and machines. But meeting those goals could be critically important. It’s good to be wary. Look no further than Commander Bowman (2001: A Space Odyssey) locked outside the pod bay door en route to Jupiter. But, if and when machines become intelligent, we’re going to have to assess their capabilities and treat them accordingly.
It may take an attitude adjustment on our part, whereby we look at machine intelligence not so much as artificial but as assistive.
Seven years ago a mechanical engineer hinted at exactly that in an IEEE abstract, describing “the development of intelligent task-driven socially assistive robots.”
Today there’s a forum entitled “Assistive Intelligence And Technology.”
A few days ago a story in Fast Company appeared under the title “Don’t call it AI: Put away your fears of artificial intelligence. Assistive intelligence is the future.”
And so it has begun: the swapping of terms for AI, as happened with CAD. But, as with CAD, a word swap won’t be enough.
We must be ready to view intelligent machines as “teammates.” Subordinate ones, of course. Limited in their ability. Beholden to us for having created them. But…not so obviously that we hurt their feelings.
Let’s not be stupid about it.