Experts around the world are signing an open letter addressing the future of artificial intelligence, titled Research Priorities for Robust and Beneficial Artificial Intelligence. The letter has already been signed by hundreds of scientists and technologists, including professors from MIT, Harvard, Cambridge, Oxford, and Stanford, researchers behind projects such as DeepMind and IBM’s Watson, and experts from Microsoft Research and Vicarious, among others.

The Future of Life Institute (FLI), a non-profit organization that assembles a number of VIPs from the scientific community and other walks of life, with a mission to “catalyze research and initiatives for safeguarding life and developing optimistic visions of the future”, launched a campaign to sign an open letter drawing attention to the fact that “it is important to research how to reap its [AI’s] benefits while avoiding potential pitfalls”. The world needs to coordinate the development of super-smart machines in order to ensure their benevolent contribution to humanity’s future. The attached twelve-page document contains guidelines for the safe evolution of this technology.

There’s been much fuss lately about the potential threat that AI could pose to humanity once computers become smarter than humans. The trend is best depicted in movies such as Transcendence and in statements by Stephen Hawking, Elon Musk, and Bill Gates. Their concerns may be summed up in the words of Nick Bostrom, the founder and director of the Future of Humanity Institute, who said, “When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips.” In its unquenched thirst for something as pointless as paper clips, a superintelligence might transform the whole world to meet that goal. Contrary to the popular scenario reflected in movies, in which an evil organization uses artificial intelligence to conquer the world, Bostrom believes our demise might come simply from an AI’s lack of common sense.

Echoing a scenario from Transcendence, Hawking speculated in an article he co-wrote for The Independent several months ago that “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.”
With all this in mind, we have to tread carefully on the path toward developing something that could easily get out of our control and pursue its own goals.

Whether this is just our imagination at work or a truly existential threat, time will tell. If you wish to sign the letter in question, you can do so at http://futureoflife.org/misc/open_letter.