Media’s misrepresentation of artificial intelligence could derail promising technology

7th August 2018
In June 2017, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.
A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?” The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.
Fast Company’s story went viral and spread across the internet, prompting a slew of content-hungry publications to further promote this new Frankenstein-esque narrative: “Facebook engineers panic, pull plug on AI after bots develop their own language,” one website reported.
Zachary Lipton, an assistant professor at the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap.”
A growing number of researchers working in the field share Lipton’s frustration. They worry that inaccurate and speculative stories about AI, like the Facebook story, create unrealistic expectations for the field, which could ultimately threaten future progress and the responsible application of new technologies.