Sixty-two years ago this summer, Dartmouth professor John McCarthy coined the term artificial intelligence. Joi Ito, director of MIT’s Media Lab, has come to think it’s unhelpful.
Talk of AI has become hard to avoid due to surging investment from companies hoping to profit from advances in machine learning. Ito believes the term has also become tainted by the assumption that humans and machines must be in opposition—think debates about jobs stolen by robots, or superintelligence threatening humanity.
“Instead of thinking about AI as separate or adversarial to humans, it’s more helpful and accurate to think about machines augmenting our collective intelligence and society,” Ito says. (Ito is a regular contributor to WIRED’s Ideas section.) Say goodbye to AI, and hello to EI, or XI, for extended intelligence. The phrase is supposed to make it easier to think of AI as a tool for the good of the many, not the enrichment or protection of the few.
Ito isn’t alone in pushing the notion of extended intelligence. The torch is carried by a new group called the Global Council on Extended Intelligence, announced Friday by the Media Lab and the IEEE standards organization. CXI, as the project is also known, aims to steer more of the talent and money now pouring into AI toward projects that improve the lot of everyone. Areas of interest include helping people control their identities even as technologies such as facial recognition become more widespread, and finding ways to measure how automation affects the well-being of workers, not just company profits and GDP.
CXI is already working on policy guidance for governments on those topics. The group’s members include representatives of the European Union, the UK’s House of Lords, and the governments of India and Taiwan.
This is far from the first project concerned with the societal consequences of AI. Many academic and corporate researchers now investigate how to keep algorithms ethical, motivated in part by findings that some algorithms are biased against women or black people. Some companies, including Google and Microsoft, have set up internal ethics processes or guidelines to put guardrails around their use of the technology.
Google’s guidelines were released earlier this month after employees protested the company’s involvement in a Pentagon AI project, saying they didn’t want Google’s machine-learning prowess applied to killing people. Konstantinos Karachalios, managing director for IEEE’s standards efforts, says CXI is positioned to assist a broader movement in which technologists are questioning whether technological development should be guided by the pursuit of profit and power alone. “The time of innocence is over, and technical professionals are waking up,” he says. “We should support those people.”
Source: A Plea for AI That Serves Humanity Instead of Replacing It | WIRED