
Artificial intelligence


Google chairman Eric Schmidt says robots can tackle overpopulation and climate change. Artificial intelligence will let scientists solve some of the world's 'hard problems', according to Schmidt, who claims that super-intelligent robots will someday help us solve problems such as population growth and climate change. During a talk in New York, he said improvements in AI will help scientists better understand the cause and effect of these challenges and come up with sensible solutions. 'AI will play this role to navigate through this and help us,' he said, adding that he would like to see personalised AI systems help people in everyday life.

He added that the field of AI was becoming so important that companies need to collaborate to develop standardised approaches, according to Bloomberg. 'Every single advance has occurred because smart people got in a room and eventually they standardised approaches,' said Schmidt. 'We are building tools that humans control.'

Google, Facebook, and Microsoft Team Up to Keep AI From Getting Out of Hand. Let's face it: artificial intelligence is scary. After decades of dystopian science-fiction novels and films in which sentient machines end up turning on humanity, we can't help but worry as real-world AI continues to improve at such a rapid rate.

Sure, that danger is probably decades away, if it's a real danger at all. But there are more immediate concerns. Will automated robots cost us jobs? The good news is that many of the tech giants behind the new wave of AI are well aware that it scares people, and that these fears must be addressed. "Every new technology brings transformation, and transformation sometimes also causes fear in people who don't understand the transformation," Facebook's director of AI Yann LeCun said this morning during a press briefing dedicated to the new project.

According to LeCun, the group will operate in three fundamental ways. Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial.

There is a blind spot in AI research. Chicago police use algorithmic systems to predict which people are most likely to be involved in a shooting, but these systems have proved largely ineffective. This week, the White House published its report on the future of artificial intelligence (AI), a product of four workshops held between May and July 2016 in Seattle, Pittsburgh, Washington DC and New York City (see go.nature.com/2dx8rv6). During these events (which we helped to organize), many of the world's leading thinkers from diverse fields discussed how AI will change the way we live. Dozens of presentations revealed the promise of using progress in machine learning and other AI techniques to perform a range of complex tasks in everyday life.

These ranged from the identification of skin alterations that are indicative of early-stage cancer to the reduction of energy costs for data centres. The workshops also highlighted a major blind spot in thinking about AI.

The internet giants are reflecting on the ethics of AI. Google Chairman Thinks AI Can Help Solve World's 'Hard Problems'. Google's chairman thinks artificial intelligence will let scientists solve some of the world's "hard problems," like population growth, climate change, human development, and education. Rapid development in the field of AI means the technology can help scientists understand the links between cause and effect by sifting through vast quantities of information, said Eric Schmidt, executive chairman of Alphabet Inc., the holding company that owns Google.

“AI will play this role to navigate through this and help us.” It can also aid companies in designing new, personalized systems. In the future, Schmidt would like to see “Eric and Not-Eric,” he said at a conference in New York, where “Eric” is the flesh-and-blood Schmidt and “Not-Eric is this digital thing that helps me.” Google has been one of the most significant corporate backers of AI. “Every single advance has occurred because smart people got in a room and eventually they standardized approaches,” he said.

Artificial intelligence and nanotechnology 'threaten civilisation'. Artificial intelligence and nanotechnology have been named alongside nuclear war, ecological catastrophe and super-volcano eruptions as “risks that threaten human civilisation” in a report by the Global Challenges Foundation. In the case of AI, the report suggests that future machines and software with “human-level intelligence” could create new, dangerous challenges for humanity, although they could also help to combat many of the other risks cited in the report. “Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations,” suggest authors Dennis Pamlin and Stuart Armstrong.

“And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans.”

Robot ethics: Morals and the machine. Can AI really be ethical and unbiased? The Obama administration's report on the future of artificial intelligence mentions the word "ethics" 11 times and "bias" 23 times. What is unclear about the future of artificial intelligence (AI) is whether you can put ethics into an algorithm and test it.

It's also unclear whether you can eliminate bias, whether it's embedded into AI systems on purpose or by accident. There's also a question about ethics and bias on the global stage. Previously: Obama's report on the future of artificial intelligence: The main takeaways | We aren't getting ready for the AI revolution: That needs to change, and fast | What race is your AI? Let's ponder two excerpts from the Obama report on AI, one on ethics and one on bias.

Robots are starting to break the law, and nobody knows what to do. An exhibition titled The Darknet: From Memes to Onionland, running in Zurich until 11 January, has just sparked controversy, as the Guardian reports. Two Swiss artists decided to exhibit the purchases made by an automated bot called the 'Random Darknet Shopper', let loose on the deep web, the hidden part of the internet where a great deal of illicit activity takes place, with $100 in Bitcoin (a digital currency) to spend each week.

And so the exhibition displays the bot's purchases: a pair of Diesel jeans, a baseball cap, a hidden camera, cigarettes, a fake Louis Vuitton bag… and 10 ecstasy pills. Received by post and hidden inside a DVD case, these pills lie at the heart of a "philosophical dilemma raised by the emergence of darknet markets, online anonymity, and Bitcoin", the Guardian notes.

But another, far more pragmatic question arises: who is responsible for the purchase of these drugs?