Artificial intelligence, the next disruptor

I was on a "next five years" future panel in the mobile/tablet track at last weekend's Society of News Design conference in St. Louis. Here's an expansion of what I had to say:

When you're thinking about the future, there are two sides to consider. On one side, you have the William Gibson model: the future is already here; it's just unevenly distributed. With that model, the future is easy to see: faster, cheaper, flatter, lighter, more connected, easier to use. All the pieces of that evolution are here today. It's just a matter of connecting them and thinking about their impact. That impact may be staggering, but it's understandable and therefore not magic.

On the other side, you have the Black Swan model: something completely unanticipated, totally outside the realm of our current reality, will change everything. We won't see it coming. Afterward, it may seem perfectly normal and maybe even obvious.

While I don't know what our Black Swans will look like, I think I know where they'll come from: artificial intelligence. Specifically, from the branch of AI known as machine learning.

AI has been central to science fiction for generations, and it's often a scary tale. Jean-Luc Godard's Alphaville (1965) and Stanley Kubrick's 2001 (1968) showed us chilling visions of computer dominance long ago. But AI has turned out to be a tougher nut to crack than was imagined in those days.

Machine learning has been quietly laying the groundwork for what I think will be a revolution over the next few years, even if I can't say exactly what that revolution will look like.

This is a simplification, but in general, machine learning works like this: You start with a whole lot of data. You program the computer to look for statistically significant relationships. Then you have the computer use those relationships to predict outcomes -- essentially to solve problems. Finally comes the important step: you let the computer make mistakes and provide feedback to correct those mistakes. This is somewhat like the way a child learns: observation, guesses, errors and corrections. Red things that glow tend to be hot, so I won't stick my finger there any more.
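
To make that loop concrete, here's a deliberately tiny sketch in Python -- my own toy illustration, with made-up numbers, not how Watson or Google Translate actually work. It starts with a handful of data points, guesses a straight-line relationship, measures its mistakes, and uses each mistake as a correction:

```python
# A minimal sketch of the machine-learning loop described above:
# observe data, guess, measure the error, correct, repeat.
# (Illustrative only -- real systems use vastly more data and
# far more sophisticated models than a straight line.)

# Toy data points (x, y). The machine should discover that
# roughly y = 2 * x + 1, without ever being told so.
data = [(1.0, 3.1), (2.0, 5.0), (3.0, 7.2), (4.0, 8.9), (5.0, 11.1)]

w, b = 0.0, 0.0          # the program's initial "knowledge": none
learning_rate = 0.01     # how strongly each correction is applied

for step in range(2000):
    for x, actual in data:
        guess = w * x + b            # predict an outcome
        error = guess - actual       # the mistake
        # feedback: nudge the model so the mistake shrinks next time
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned relationship: y = {w:.2f} * x + {b:.2f}")
print("prediction for x = 6:", round(w * 6 + b, 1))
```

After a couple of thousand corrections, the program has "learned" a relationship it was never explicitly given. Scale the data and the model up by many orders of magnitude and you have the basic shape of the systems described below.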

As the computer gets more data to compare, and more corrections to consider, it gets smarter, far smarter than its initial programming. This is how IBM's Watson supercomputer got smart enough to play Jeopardy.

Without necessarily knowing it, we're all using AI every day, and teaching computers to get smarter. Remember when Google Translate was new? We used to have fun translating a paragraph into another language, then translating it back, and laughing at the crazy result. But somewhere along the line it got really good. How? By processing immense quantities of data and being corrected by its users. Google Search works the same way (one reason they're recording clickthroughs is to get feedback on search results). So does Google's voice recognition.

In this AI arms race, the big winners are not necessarily the smartest computer scientists, but rather the entities with the most data to crunch and the most users whose usage patterns and feedback become part of the mathematically driven outcomes.

Google, Amazon, and Facebook. We know about Google. Amazon's new Silk browser (for the Kindle Fire tablet) will proxy all your Web interactions, learn about people in general and you in particular, and use that knowledge to predict and anticipate your wants and your actions. Facebook's personalized "News Feed" stream is driven by AI processes that consider not only your social graph but how you interact with people and topics. Despite Facebook's rather silly denials, they're clearly gathering massive intelligence about us through those "Like" buttons on other websites, including most news sites. Imagine the mountain of data being fed to their machine-learning algorithms.

I can hope that whatever machine intelligence rises out of this globally networked data pile is beneficent, something that helps us all make sense of things, keep track of our loose ends, understand what's important, and determine our civic future in a more sane and rational way than has been the case.

Or, if you're more of the dystopian persuasion, you could watch this:

http://youtu.be/l-MpTvo9yU0

Comments

Nice piece. I think we are on the threshold of AI's Cambrian explosion. Big spying systems like Google and Facebook are getting out of hand, however. Soon, the public will get fed up and demand closed networks for private, restricted, non-commercial use. The "Occupy Wall Street" movement may morph into "Occupy the Internet" and give birth to the internet within the internet. Having said that, do not discount more conventional AI, the kind that will create smart robots that can do anything, including taking care of the aging baby boomers. This is one scary black swan that threatens to change everything, including our economic and social systems. The word 'upheaval' does not do it justice.

Looks like this article was written the same day that Apple announced the introduction of Siri for the iPhone, so it wasn't able to take that into account. Having AI like Siri on the leading smartphone platform certainly adds credence to the predictions made here.

I didn't mention Apple because while they do get UI, I don't think they really get AI.

Siri is a voice-recognition (UI) acquisition whose AI backend is powered by Wolfram -- which does understand AI. The smart part lives in the cloud, not the device. So while Apple doesn't get it, Apple can afford to outsource for it.

There's a really good explanation of Siri's history here:

http://www.quora.com/Siri-product/Why-is-Siri-important