Over the last year, it seemed like Artificial Intelligence was on quite a run and that the days of complacently serving our robot overlords in a kind of denatured paradise were right around the corner. Self-driving cars were in the advanced stages of development, and not by the tweedy academics but by giants of industry, and not the old industry, either. These happy engineers were the new guys who had already changed our lives in ways both big and small through the triplet miracles of design, technology and marketing. The Internet of Things had its own acronym (IoT) in the mass media, and my house was getting smarter by the nanosecond. Oh, swoony singularity.
Well, maybe not so much. Perhaps the first hint was a little blurb that a Google car got a ticket. Seems it was driving too slow for conditions and causing a bit of a traffic jam. Not a big deal. Perhaps just a case of the new ethics exposing the depravity of the old.
The same Google brand car followed up by trying to hip-check a bus out of its lane, an action that revealed several apparent issues for AI in a blended robot/human world. O.K., it's widely accepted that the bus driver probably wasn't hewing exactly to the rules of fine road etiquette. Well, duh. Seems like AI got the artificial right on that one, but not so much the intelligence.
And what the hell was that car thinking? Let's see here. A bus. Big. Multiple tons big. Loud. Probably stinky. How'd the car miss it? Or misinterpret its trajectory? Did the happy little auto-car think its wonderfulness would judo-chop the big old bus into submission, somehow override the laws of physics? The on-board engineer said, somewhat lamely, that he probably would have made the same choice. Weren't better choices the whole point of this exercise?
Google is the source of another AI cautionary tale as well, with the Revolv M1 smart home hub. Eighteen months ago, we early adopters were rushing out to plunk down $300+ for this smart device that was going to link up all the other smart devices in our homes. The promise was to bake the tofurkey to a perfect degree of crisp while we were away, turn on the lights when we pulled into the driveway, and maybe even bring our slippers as we stepped through the door.
Eighteen months later, the tofurkey is a gelatinous lump, the house is dark, and the dog is chewing on the slippers. Turns out Google had a change of heart and, without offering much of an alternative, simply bricked the devices remotely. No "May I" or "Please." Just a "Wasn't that fun? We're done. Have a nice life." Turns out the IoT isn't only about expanding our control of our devices, but others' control of them as well. Not quite the rosy picture in the Amazon marketing text for the Revolv M1, which is still posted, albeit with a notice that they're not sure when they'll be able to ship the item (something like "never," apparently).
And then there's the sad, sad AI story of Tay. The guys over at Microsoft, somehow channeling their '80s "let the market do our software testing" engineering strategy, turned an AI-based chat bot loose on the open internet via Twitter. It was going to be a joyful interaction full of sunshine and lollipops as the AI-driven persona learned from its interactions with real people to achieve a state of, and I'm quoting her Twitter page here, "zero chill."
There were only a few small problems with this setup. First off, it's been a long, long time since anybody anywhere was naïve enough to imagine the interwebs as a place of all sunshine and lollipops. Second, the engineering objective of "zero chill" seemed a bit, how to put this delicately, middle-aged-white-guy out of touch. A quick trip over to Urban Dictionary would have revealed that having no chill is not a good thing. It's a kind of descent into self-destructive irrationality.
Tay lasted less than 24 hours on the open internet, rapidly learning to be a hate-mongering neo-Nazi, spewing the kind of talk that gets you suspended from high school or running for president, depending on how bad your hairdo is. Microsoft had to fire up the digital equivalent of a tranquilizer gun and pull the plug. The lame "I probably would have done the same thing" excuse wasn't going to work here.
It's still not entirely clear what happened. Microsoft's apology and explanation (good for them for starting with an apology) suggested some kind of coordinated attack exploiting unnamed vulnerabilities that hadn't been anticipated. Best case scenario: the internet's twisted sense of humor was just more than Tay could understand. Worst case scenario: Tay's internet journey took her/it into the darkest xenophobic reaches of the human soul.
And that is the challenge we face as we try to build our better angels. They are our own creations and carry our psychological, emotional and intellectual DNA, for better or worse. It shouldn't come as a surprise that as AI tackles situations of greater complexity, the outcomes become less predictable, less manageable, and less comfortable. That's the nature of our relationship with complexity. Hell, that's what complexity is.
Our dear digital boxes, the brains at the heart of this explosion of smart technology, are still digital. Zeros and ones. Yes, we've gotten exponentially better at programming in capabilities for model building and at feeding those models oceans of data. But guess what. Humans over the last four millennia have gotten exponentially better at all that as well. And we're still looking for help from our better angels, whatever they may be.
And we still, at 15 years old, don't have the guts to pick up the phone and call that girl or boy. We still can't seem to work out how to share the 36.48 billion acres of land this earth has. Somehow 5+ acres per person just isn't enough, and we frequently resort to "zero chill" violence in our frustrations. Zeros and ones don't make solving those really complex problems easier. They just make replicating and amplifying our assumptions about those complexities easier, which is not the same thing.
Bottom line? As things stand, AI is a useful way to validate the fidelity of our assumptions, to play them out into the future. Yes, AI has leapt out of the university labs and into the cube farms of major companies everywhere. Yes, we're faced with the wonder and deficiencies of AI every day now. However, that kind of experimentation is still a long way from a comprehensive model for the daily tasks of living life well.
Or perhaps the experiment is the life well lived, with conclusions only passed on to the next generation for further refinement. So experiment on, Google, Microsoft, and Apple. Experiment on, and we'll come along for the ride. Obviously, after 4,000 years of trying, we still need all the help we can get.