A while back I was walking through the parking lot at the Derby Street shops in Hingham when I saw my first real-life Tesla. My timing was impeccable: as I gawked at the car, a very pleasant older woman approached - it was her car - and we started a conversation. She gave me a complete tour of the car: what was great about it, why she bought it, the works. Toward the end of our conversation I mentioned that I thought Elon Musk was the successor to the recently departed Steve Jobs - a brilliant mind that married technology, design, and appeal, and might just change the world. I'll never forget what she said: "Oh, I think Musk is going to make Jobs look like a piker" (a piker being a gambler who only places small bets, or someone who does things in a small way). While I didn't necessarily agree, I loved the way she phrased it - and I've been watching Elon Musk more carefully ever since.
So - what does this have to do with big data? I'm getting there. Clearly, Musk is not a Luddite. He's involved in solar energy, electric cars, and reusable rockets, and he plans to bring humans to Mars, to name a few. In a Vanity Fair interview, Musk warns, “I don’t think anyone realizes how quickly artificial intelligence is advancing, particularly if [the machine is] involved in recursive self-improvement … and its utility function is something that’s detrimental to humanity, then it will have a very bad effect.”
He continued with an even more dire warning: “If its [function] is just something like getting rid of email spam and it determines the best way of getting rid of spam is getting rid of humans …”
I know it's easy to dismiss this as the ravings of a lunatic - but he's not, not really. He's a visionary - a man who embraces the future and is leading us there. But he's also a realist - and he realizes that poorly written sentient code could be our undoing. In a follow-up article in Computerworld, Andrew Moore - Dean of Computer Science at Carnegie Mellon - offered, "At first I was surprised and then I thought, 'this is not completely crazy.' I actually do think this is a valid concern and it's really an interesting one. It's a remote, far-future danger, but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."
So now you're thinking either a) I'm a loon, or b) what does this have to do with Google? Let us, for a moment, assume I'm not a loon. When you consider what it would take for sentient machines to cause problems, it doesn't seem that intimidating. You could just pull the plug, right? Withhold energy, and no matter how malicious they were, they would simply cease. Some little factory in Arizona becomes self-aware - we just cut the power, cut the internet, and wait for the problem to resolve itself. Here's where it gets interesting: what if it was one of the world's largest global resources that began to learn?
[duh duh daaaahhh]
Here's where Google comes in (finally). Last year Google acquired DeepMind Technologies, a London-based artificial intelligence firm. A recent BetaBeat article suggests that Google is interested in using this investment to have computers begin to program themselves. Which brings us back to the title. Say it's not a factory in Arizona that becomes self-aware. Instead, consider that it's the Google network, or all of IBM's networks and servers, taken over by a malicious Watson of the future. If these massive big-data projects begin to think and learn and adapt - are we letting the genie out of the bottle?