What the artificial intelligence debate needs is a healthy dose of realism
Headlined by household names like Professor Stephen Hawking and sparked by authorities ranging from Homeland Security to Science Today, most of the attention in the AI debate is trained on the potential threat of artificially replicating human behavior in machines. Can robots make ethical decisions? Can machines sense the world around them? Can driverless cars ever be safe? How can we stop mankind from being overrun by killer androids?
Much of the discussion going on in the AI/machine learning world today suggests that, like the Sun blinking out of existence, Terminator-type scenarios are a worry we can put on hold for a few years. As Gary Marcus of New York University noted at the recent O’Reilly AI conference, AI is a challenge that we have yet to master. “We wanted Rosie the robot, instead we got the Roomba.”
Yann LeCun, professor at New York University and Director of AI Research at Facebook, told the conference that for machines to act intelligently, they need a vast amount of background knowledge in order to predict and act on possible outcomes. That understanding is what has driven Argyle Data’s data lake strategy, an essential part of its machine learning analytics application. In attempting to get machines to predict under uncertainty, LeCun said, the crux of the problem is to move them from “supervised learning” (where labeled data is fed to a machine so it learns from known examples as it processes and analyzes) to “unsupervised learning” (learning from unlabeled data). For Argyle Data, this is an important endorsement of our machine learning approach, which uses both supervised and unsupervised learning to detect anomalies in mobile communications traffic. (For further details, see the executive preview of our joint research paper with Carnegie Mellon University.)
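To make the supervised/unsupervised distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not Argyle Data’s actual system: the toy call-record features, the z-score anomaly rule, and the nearest-centroid classifier are all assumptions chosen for brevity. The unsupervised function needs no labels at all, while the supervised one learns only from labeled examples.

```python
# Illustrative sketch only -- NOT Argyle Data's production approach.
# Contrasts unsupervised anomaly detection (no labels) with a tiny
# supervised classifier (labels required) on toy call-record features.
from statistics import mean, stdev

# Toy per-subscriber features: (calls per hour, average call cost).
traffic = [(4, 0.12), (5, 0.10), (3, 0.11), (6, 0.14), (95, 3.50)]

def unsupervised_anomalies(records, threshold=1.5):
    """Flag records whose features deviate strongly from the population
    mean. No labels are used -- the structure of the data alone decides."""
    flagged = set()
    for dim in range(len(records[0])):
        values = [r[dim] for r in records]
        mu, sigma = mean(values), stdev(values)
        for i, v in enumerate(values):
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                flagged.add(i)
    return sorted(flagged)

def supervised_classify(labeled, query):
    """With labeled examples, compute a centroid per class and assign the
    query to the nearest one -- the 'supervised' setting LeCun contrasts."""
    by_label = {}
    for features, label in labeled:
        by_label.setdefault(label, []).append(features)
    centroids = {lbl: tuple(mean(c) for c in zip(*pts))
                 for lbl, pts in by_label.items()}
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], query))

# Unsupervised: the fifth record stands out from the rest -> index 4.
print(unsupervised_anomalies(traffic))                 # [4]

# Supervised: known-fraud and known-normal examples train the classifier.
labeled = [((4, 0.12), "normal"), ((5, 0.10), "normal"), ((90, 3.2), "fraud")]
print(supervised_classify(labeled, (95, 3.50)))        # fraud
```

The practical point of combining both, as the approach described above does, is that supervised models catch fraud patterns already seen and labeled, while unsupervised detection can surface novel anomalies nobody has labeled yet.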
Forbes Magazine reported that, also at the O’Reilly conference, Oren Etzioni of the Allen Institute for Artificial Intelligence was driven by a faulty slide projector to mutter, “How can we figure out AI if we can’t figure out AV?”
Padraig Stapleton, VP of Engineering for Argyle Data, made a similar point at last Friday’s Telecom Council AI Workshop in Silicon Valley – if we haven’t yet figured out the human brain, how can we possibly replicate it in machine form? Meanwhile, he said, there are practical applications for the machine learning/AI advances that we have made.
VentureBeat recently published a very interesting article about the way we already use AI, citing the seemingly intuitive systems used by Amazon, Netflix and others to suggest new items that are must-have buys. The article gives the award for the most unappreciated form of AI to banking fraud alerts, so on the one hand AI helps us empty our bank accounts and on the other it makes sure there are funds in the accounts to spend on AI-suggested purchases.
I don’t agree with the author, though, when he states that AI knows us better than we know ourselves. AI does what we require it to do (unlike AV systems). As it stands today, AI/machine learning depends on human ingenuity and human-created algorithms to train and shape the way it responds to data. It’s just that, rather like a car, plane or rocket, it can perform its task at a scale and speed far beyond any human. And certainly, when it comes to mobile fraud, that’s a very good thing indeed.