The Three Laws of Transhumanism and Artificial Intelligence
I recently gave a speech at the Artificial Intelligence and The Singularity Conference in Oakland, California. There was a great lineup of speakers, including AI experts Peter Voss and Monica Anderson, New York University professor Gary Marcus, sci-fi writer Nicole Sallak Anderson, and futurist Scott Jackisch. All of us are interested in how the creation of artificial intelligence will impact the world.

My speech topic was: "The Morality of an Artificial Intelligence Will Be Different from Our Human Morality."

Recently, entrepreneur Elon Musk made major news when he warned on Twitter that AI could be "potentially more dangerous than nukes." The coming of artificial intelligence will likely be the most significant event in the history of the human species. Naturally, as a transhumanist, I strive to be an optimist. But is it even possible to program such concepts into a machine? I don't think so, at least not over the long run. Let's face it.