Most of the world’s news agencies shared Stephen Hawking’s recent warning, from an interview with the BBC, about the threat that Artificial Intelligence (AI) poses to humanity. However exaggerated it may sound at this stage of humankind’s technological evolution, we cannot help but stop and ask ourselves whether such a warning is legitimate, coming from one of the most brilliant minds on this planet.

It doesn’t take an expert to conclude that artificial intelligence in its current form does not justify such fears and concerns. But this is not the first time that people at the avant-garde of scientific and technological progress have presented us with probable doomsday scenarios arising from our (ab)use of technology. Hawking stressed that once artificial intelligence supersedes our own, we will no longer be able to control it and it will “take off on its own.” At that point, AI will be capable of augmenting itself until we are no longer able to make the complex decisions necessary to govern our world, and we will be at the mercy of machines to run it for us, presuming that new technologies will be incorporated into everything we depend on: financial systems, industry, energy supplies, and so on. Additionally, robots with AI will be able to self-replicate, so they won’t need us anymore.

One of the most comprehensive overviews of this topic is surely Bill Joy’s (co-founder and Chief Scientist at Sun Microsystems) article “Why the Future Doesn’t Need Us,” which appeared as a cover story in Wired in 2000. Since then we have done little to widen this debate, nor have we taken any of these early warnings seriously. As Joy writes, most of the scientific community and technological elite are well aware of the potency of our undertakings in robotics, genetics, engineering, and nanotechnology, and of possible dystopian scenarios such as those described in Ted Kaczynski’s Unabomber Manifesto; yet whenever he would approach them and initiate a conversation on the subject, they would react as if he were beating a dead horse. Why have we become so indifferent to the most significant issue facing humankind: our future? Do we all agree that we are already doomed and that all resistance is futile?

However complex some ideas and concepts of GRAIN (Genetics, Robotics, Artificial Intelligence, and Nanotechnology) may be, if presented in a vivid enough manner, most of the population would understand them and, after some time and discussion, would be able to judge their pros and cons rationally. For now, only a small circle of those directly involved in research and development, along with a handful of enthusiasts, is able to comprehend the potential outcomes of the unchecked commercialism that dominates the GRAIN world.

Public debate, preceded by a serious information campaign, could raise awareness of the state of affairs and breed constructive solutions before it is too late, before the benefits of new technologies become our gray goo problem. This has already been done with environmental issues: we did not do as much as we should have, but at least some action was taken that will yield results. GRAIN technologies can offer us a bright future and unimaginable human capabilities. We can become real masters of the universe, but only if we don’t become the Borg first.
