
The Three Laws of Transhumanism and Artificial Intelligence

I recently gave a speech at the Artificial Intelligence and The Singularity Conference in Oakland, California. There was a great lineup of speakers, including AI experts Peter Voss and Monica Anderson, New York University professor Gary Marcus, sci-fi writer Nicole Sallak Anderson, and futurist Scott Jackisch. All of us are interested in how the creation of artificial intelligence will impact the world. My speech topic was: The Morality of an Artificial Intelligence Will Be Different from Our Human Morality.

Recently, entrepreneur Elon Musk made major news when he warned on Twitter that AI could be "potentially more dangerous than nukes." The coming of artificial intelligence will likely be the most significant event in the history of the human species. Naturally, as a transhumanist, I strive to be an optimist. But is it even possible to program such concepts into a machine? I don't think so, at least not over the long run. Let's face it.


Warning: Cybotopia Ahead by 2030

A cybotopia would be a world in which cyborgs and AI machines and systems dominate and rule over "unenhanced" humans, turning human beings as we know them today into a sub-species, or lower-order being. A dystopia, translated from its Greek roots, is a "not-good place": a society that is profoundly dysfunctional by the standards of human civilization. When the mathematical genius John von Neumann wrote his prescient final work, The Computer and the Brain, in 1956, he laid the theoretical framework for one day connecting the neural system of the brain directly to the digital networks of computers. He described how the brain and the computer are both powerful computing machines, or automata, which share many characteristics and processes. The analogies he saw between the brain and the computer opened up the possibility of one day forging a deeper working relationship between them.

What will happen when the internet of things becomes artificially intelligent?

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it's worth paying attention. All three have warned of the potential dangers that artificial intelligence, or AI, can bring. Hawking, the world's foremost physicist, said that the full development of artificial intelligence could "spell the end of the human race". Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our "biggest existential threat" and said that playing around with AI was like "summoning the demon". Gates, who knows a thing or two about tech, puts himself in the "concerned" camp when it comes to machines becoming too intelligent for us humans to control.

Collaborative learning for robots

Researchers from MIT's Laboratory for Information and Decision Systems have developed an algorithm in which distributed agents, such as robots exploring a building, collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.
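The MIT algorithm itself is not reproduced in the excerpt; the following is only a minimal sketch of the general pattern it describes, under the assumption that each agent summarizes its data as a running mean and that meeting agents average their summaries (a standard "gossip averaging" scheme with equal sample counts). The class and variable names are hypothetical.

```python
import random

class Agent:
    """Explorer that keeps a running mean of its own observations."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def observe(self, x):
        # Incremental mean update for one new sample.
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def exchange(self, other):
        # Pairwise gossip step: both agents adopt the average of their
        # two summaries, so raw data never changes hands. (Assumes all
        # agents hold roughly equal numbers of samples.)
        avg = 0.5 * (self.mean + other.mean)
        self.mean = other.mean = avg

random.seed(0)
agents = [Agent() for _ in range(4)]
for a in agents:
    for _ in range(200):
        a.observe(random.gauss(10.0, 2.0))  # independent local sampling

# Random pairwise meetings, like robots passing in a hallway.
for _ in range(30):
    a, b = random.sample(agents, 2)
    a.exchange(b)

print([round(a.mean, 1) for a in agents])
```

Each gossip step only mixes summaries, so every agent's estimate drifts toward the group consensus without any robot ever transmitting its full data log.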

Albert Einstein

Albert Einstein (German: [ˈalbɛɐ̯t ˈaɪnʃtaɪn]; 14 March 1879 – 18 April 1955) was a German-born theoretical physicist. He developed the general theory of relativity, one of the two pillars of modern physics (alongside quantum mechanics). Einstein's work is also known for its influence on the philosophy of science. He is best known in popular culture for his mass–energy equivalence formula E = mc² (which has been dubbed "the world's most famous equation"). He received the 1921 Nobel Prize in Physics for his "services to theoretical physics", in particular his discovery of the law of the photoelectric effect, a pivotal step in the evolution of quantum theory. Near the beginning of his career, Einstein thought that Newtonian mechanics was no longer enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field.
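To make the scale of E = mc² concrete, here is a quick worked example (the choice of one gram is arbitrary, purely for illustration):

```python
# Mass-energy equivalence E = m * c**2: even a tiny mass corresponds
# to an enormous amount of energy.
c = 299_792_458      # speed of light in m/s (exact, by definition)
m = 0.001            # one gram, expressed in kilograms
E = m * c**2         # energy in joules
print(f"{E:.3e} J")  # roughly 9.0e13 J, on the order of 21 kilotons of TNT
```

The conversion factor c² is about 9 × 10¹⁶ J/kg, which is why mass converted to energy in nuclear processes dwarfs any chemical reaction.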

#24: Cosmic Beings: Transhumanist Deism in Ted Chu's Cosmic View

In Human Purpose and Transhuman Potential: A Cosmic Vision for Our Future Evolution, IEET affiliate scholar Ted Chu, a professor of Economics at New York University in Abu Dhabi and former chief economist for General Motors and the Abu Dhabi Investment Authority, argues that post-humanity is a logical and necessary evolutionary next step for humanity, and that we need a new, heroic cosmic faith for the post-human era. "The ultimate meaning of our lives rests not in our personal happiness but in our contribution to cosmic evolution," says Chu…

According to IEET readers, what were the most stimulating stories of 2014? This month we're answering that question by posting a countdown of the top 31 articles published this year on our blog (out of more than 1,000), based on how many total hits each one received. The following piece was first published here on Feb 12, 2014, and is the #24 most viewed of the year.

Darpa sets out to make computers that teach themselves

The Pentagon's blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves, while making it easier for ordinary schlubs like us to build them, too. When Darpa talks about artificial intelligence, it's not talking about modelling computers after the human brain. That path fell out of favour among computer scientists years ago as a means of creating artificial intelligence; we'd have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms ("probabilistic programming") to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
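The core idea behind probabilistic programming is Bayesian: score every candidate hypothesis by how well it explains observed data. The toy sketch below is not Darpa's system or any particular probabilistic-programming language; it is a minimal hand-rolled illustration of that inference pattern, with made-up hypotheses and data.

```python
# Toy "probabilistic program": a coin with unknown bias is flipped,
# and we infer the bias by weighting each candidate hypothesis by
# how well it explains the evidence (Bayes' rule with a uniform prior).
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]  # candidate heads-probabilities
observed = [1, 1, 0, 1, 1, 1, 0, 1]     # 1 = heads, 0 = tails

def likelihood(bias, data):
    """Probability of the observed sequence under a given bias."""
    p = 1.0
    for flip in data:
        p *= bias if flip else (1.0 - bias)
    return p

weights = [likelihood(h, observed) for h in hypotheses]
total = sum(weights)
posterior = {h: w / total for h, w in zip(hypotheses, weights)}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # the 0.7 hypothesis fits best
```

Real probabilistic-programming systems automate exactly this step, replacing brute-force enumeration with scalable inference over far richer models.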

Artificial Superintelligence: A Futuristic Approach

Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy. Indiegogo fundraiser for Roman V. Yampolskiy.

Einstein's Philosophy of Science

1. Introduction: Was Einstein an Epistemological "Opportunist"? Late in 1944, Albert Einstein received a letter from Robert Thornton, a young African-American philosopher of science who had just finished his Ph.D. under Herbert Feigl at Minnesota and was beginning a new job teaching physics at the University of Puerto Rico, Mayagüez.