Overcoming the Fear of Artificial Intelligence

Ray Kurzweil, a director of engineering at Google, co-founder of Singularity University, and a renowned author famous for his optimistic outlook on the future, recently wrote a piece for Time Magazine addressing concerns expressed by Stephen Hawking and Elon Musk about the development of Artificial Intelligence (AI) and its potential to become an existential threat to humanity.

It seems that AI is finally getting the wider attention it has long deserved. We have always had a love-hate relationship with technology. The world of tomorrow can be either black or white, and humanity can end up as either the master or the servant of its own creation. According to Kurzweil, AI might surpass human-level intelligence by 2029. We can no longer bury our heads in the sand and comfort ourselves with a distant, unforeseeable future. Even without the help of superintelligence, we can figure out that putting a problem aside won't contribute to its resolution. That is what happened with global warming, the extinction of species, and the disappearance of rain forests and other natural habitats. Now it cannot be stopped; we can only try to fix some of it, just as we imagine our children will have to do with a rogue technology. A band of brothers will launch a suicide mission to save a dystopian society from its demise, only to start all over again from scratch. Does it have to come to that before we see where we have been heading all along?

Rarely do we see sci-fi movies featuring a utopian society of the future where technology endowed with reason has exclusively benevolent tendencies towards humankind. Interstellar is one example of how a superior intelligence can carry an embedded moral code that safeguards us even from ourselves. We are not as benign a species as we like to imagine. Our history is full of unnecessary violence, dishonesty, and human and animal suffering. Maybe we need some fixing after all? We don't have to be punished by hell for our sins; we might just need to be reprogrammed, however scary that might sound, and not by someone or something else, but by ourselves, by our own design. AI is already helping us in that way, and it can most certainly help us more if we can build a relationship based on friendship and mutual dependence. Yes, friendship, because it is the ethical code of humanity that we need to instill in those AI machines if we are to hope for a brighter common future.

As many have pointed out, we are already a human-machine civilization, and we would not be able to sustain it without technology. We have seen that the Luddite approach doesn't work and can only have adverse effects. Now is the time to discuss the future direction of our technological advancement and the ethical standards on which it will be based. The recipe for that is given by the very man who warns us against AI, the great Stephen Hawking, who summed up the whole issue in a few lines: "Mankind's greatest achievements have come about by talking, and its greatest failures by not talking. It doesn't have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking." (quotation via Wikipedia).