
AI


Artificial Intelligence Recreates Images From Inside The Human Brain

A team of researchers say they have used machine learning to recreate images in our brains, from pictures subjects were looking at to things they remember seeing. The research, which has not yet been peer-reviewed, was conducted by scientists from Kyoto University in Japan and led by Yukiyasu Kamitani. Using functional magnetic resonance imaging (fMRI), the team said they were able to reconstruct images seen by our brains. In their paper, available on bioRxiv, a number of images were presented that were recreated by the artificial intelligence, known as a deep neural network (DNN).

Each image was recreated pixel by pixel by the DNN, generating images that resembled the initial image. “The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images,” the team wrote in their paper. In this latest paper, the researchers used three subjects (two males aged 33 and 23, and one female aged 23).

The true dangers of AI are closer than we think

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also co-chairs the Fairness, Accountability, and Transparency conference—the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development—as well as the solutions. Q: Should we be worried about superintelligent AI? A: I want to shift the question. The threats overlap, whether it’s predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term.

This is how AI bias really happens—and why it’s so hard to fix

Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system. But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.
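One mechanism behind the bias described above can be illustrated with a minimal sketch. The data, names, and numbers here are entirely hypothetical: a "model" that simply learns hiring rates from historically skewed records will carry that skew forward into its predictions.

```python
# Minimal sketch (hypothetical data): a model that learns label
# frequencies from skewed historical records reproduces the skew.
from collections import defaultdict

# Hypothetical hiring records: (group, hired). Group A was hired at
# 70% historically, group B at 30%, for equally qualified candidates.
history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """Learn P(hired | group) from historical labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)

# Recommending candidates by these learned rates simply replays the
# historical disparity at prediction time.
print(model["A"])  # 0.7
print(model["B"])  # 0.3
```

Nothing in the training step is malicious; the disparity enters purely through the data, which is why it is so hard to detect and fix after the fact.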

This Researcher Says AI Is Neither Artificial nor Intelligent.

We first need to understand how the brain works if we want true AI

Next, this sensory input gets taken up by tens of thousands of cortical columns, each with a partial picture of the world. They compete and combine via a sort of voting system to build up an overall viewpoint. That’s the thousand brains idea. In an AI system, this could involve a machine controlling different sensors—vision, touch, radar and so on—to get a more complete model of the world. There will typically be many cortical columns for each sense, such as vision. Then there’s continuous learning, where you learn new things without forgetting previous stuff.

Forget Boston Dynamics. This robot taught itself to walk with AI

Virtual limitations: Reinforcement learning has been used to train many bots to walk inside simulations, but transferring that ability to the real world is hard. “Many of the videos that you see of virtual agents are not at all realistic,” says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the simulated physical laws inside a virtual environment and the real physical laws outside it—such as how friction works between a robot’s feet and the ground—can lead to big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose balance and fall if its movements are even a tiny bit off.

Double simulation: But training a large robot through trial and error in the real world would be dangerous. To get around these problems, the Berkeley team used two levels of virtual environment. The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning.

AI is learning how to create itself

But there’s another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms the way people typically think of them—as means to an end. It’s this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI.

An AI can simulate an economy millions of times to create fairer tax policy. In one early result, the AI found a policy that—in terms of maximizing both productivity and income equality—was 16% fairer than a state-of-the-art progressive tax framework studied by academic economists.

An AI can simulate an economy millions of times to create fairer tax policy

The improvement over current US policy was even greater. “I think it's a totally interesting idea,” says Blake LeBaron at Brandeis University in Massachusetts, who has used neural networks to model financial markets. In the simulation, four AI workers are each controlled by their own reinforcement-learning models. They interact with a two-dimensional world, gathering wood and stone and either trading these resources with others or using them to build houses, which earns them money. The workers have different levels of skill, which leads them to specialize.

DeepMind says it will release the structure of every protein known to science

In the last few months Baker’s team has been working with biologists who were previously stuck trying to figure out the shape of proteins they were studying. “There's a lot of pretty cool biological research that's been really sped up,” he says. A public database containing hundreds of thousands of ready-made protein shapes should be an even bigger accelerator. “It looks astonishingly impressive,” says Tom Ellis, a synthetic biologist at Imperial College London studying the yeast genome, who is excited to try the database. But he cautions that most of the predicted shapes have not yet been verified in the lab.

A will to survive might take AI to the next level

Fiction is full of robots with feelings. Like that emotional kid David, played by Haley Joel Osment, in the movie A.I. Or WALL•E, who obviously had feelings for EVE-uh. The robot in Lost in Space sounded pretty emotional whenever warning Will Robinson of danger. Not to mention all those emotional train-wreck, wackadoodle robots on Westworld.

Stop talking about AI ethics. It’s time to talk about power.

In doing that, I wanted to really open up this understanding of AI as neither artificial nor intelligent. It’s the opposite of artificial. It comes from the most material parts of the Earth’s crust and from human bodies laboring, and from all of the artifacts that we produce and say and photograph every day. Neither is it intelligent. I think there’s this great original sin in the field, where people assumed that computers are somehow like human brains and if we just train them like children, they will slowly grow into these supernatural beings.

AI datasets are filled with errors. It's warping what we know about AI.

Yes, but: In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains racist and sexist labels as well as photos of people’s faces obtained without consent. The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.
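Label error rates like the 5.8% figure above are typically estimated by auditing a random sample of the test set and comparing the original labels against corrected ones. A small sketch of that idea, using entirely synthetic data:

```python
# Sketch (synthetic data): estimate a test set's label error rate by
# auditing a random sample and counting mismatches against corrected labels.
import random

random.seed(0)

# Hypothetical test set of (original_label, corrected_label); ~6% wrong.
test_set = [("cat", "cat")] * 940 + [("spoon", "mushroom")] * 60
random.shuffle(test_set)

def estimated_error_rate(items, sample_size):
    """Estimate the fraction of mislabeled items from a random audit."""
    sample = random.sample(items, sample_size)
    errors = sum(1 for orig, corrected in sample if orig != corrected)
    return errors / sample_size

rate = estimated_error_rate(test_set, 500)
print(round(rate, 3))  # close to 0.06 for this synthetic set
```

The sampling error shrinks with the audit size, which is why studies of this kind audit thousands of items before quoting a rate for the full benchmark.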

AI Technique Copies Human Memory To Minimize Data Storage Burden

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a "major, long-standing obstacle to increasing AI capabilities" by drawing inspiration from a human brain memory mechanism known as "replay." First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks "surprisingly efficiently" from "catastrophic forgetting": upon learning new lessons, the networks forget what they had learned before.

Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting. They write, "One solution would be to store previously encountered examples and revisit them when learning something new."

These weird, unsettling photos show that AI is getting smarter

Of all the AI models in the world, OpenAI’s GPT-3 has most captured the public’s imagination. It can spew poems, short stories, and songs with little prompting, and has been demonstrated to fool people into thinking its outputs were written by a human. But its eloquence is more of a parlor trick, not to be confused with real intelligence. Nonetheless, researchers believe that the techniques used to create GPT-3 could contain the secret to more advanced AI. GPT-3 was trained on an enormous amount of text data. What if the same methods were trained on both text and images? Now new research from the Allen Institute for Artificial Intelligence, AI2, has taken this idea to the next level. Fill in the blank.

A robot wrote this entire article. Are you scared yet, human? I am not a human. I am a robot. A thinking robot. Artificial intelligence – three myths that need defusing. Artificial intelligence (AI) has a reputation much like the atom's: some claim it is the solution to humanity's most nagging problems, others that it will dethrone, or even wipe out, Homo sapiens. Clearview app lets strangers find your name, info with snap of a photo, report says. What if a stranger could snap your picture on the sidewalk, then use an app to quickly discover your name, address and other details? Researchers sound the alarm: Major risks with AI.
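The "fill in the blank" objective behind GPT-3-style models can be shown with a toy example. Real systems use enormous neural networks over billions of documents; this hypothetical sketch just counts which word tends to follow a given word in a tiny corpus.

```python
# Toy sketch of "fill in the blank" training: predict a missing word from
# its context. Real models learn this with neural networks; here we simply
# count word co-occurrences in a tiny made-up corpus.
from collections import Counter

corpus = [
    "the robot wrote the article",
    "the robot taught itself to walk",
    "the robot learned to walk",
]

def predict_blank(context_word, corpus):
    """Guess the most frequent word that follows context_word in corpus."""
    followers = Counter()
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            if a == context_word:
                followers[b] += 1
    return followers.most_common(1)[0][0]

print(predict_blank("the", corpus))  # "robot"
```

The AI2 work described above extends the same predict-the-missing-piece idea across both text and images, so the model can fill a blank in a caption or in a picture.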

Visualizing Artificial Intelligence - Noun Project.

A Successful Artificial Memory Has Been Created

We learn from our personal interaction with the world, and our memories of those experiences help guide our behaviors. Experience and memory are inexorably linked, or at least they seemed to be before a recent report on the formation of completely artificial memories. Using laboratory animals, investigators reverse engineered a specific natural memory by mapping the brain circuits underlying its formation. They then “trained” another animal by stimulating brain cells in the pattern of the natural memory. Doing so created an artificial memory that was retained and recalled in a manner indistinguishable from a natural one.

Can you make AI fairer than a judge? Play our courtroom algorithm game. AI can predict when someone will die with unsettling accuracy. AI Creates Near Perfect Images Of People, Dogs and More. An AI app that “undressed” women shows how deepfakes harm the most vulnerable. AI Trained on Old Scientific Papers Makes Discoveries Humans Missed.

First Ever Non-invasive Brain-Computer Interface Developed

A team of researchers from Carnegie Mellon University, in collaboration with the University of Minnesota, has made a breakthrough in the field of noninvasive robotic device control. Using a noninvasive brain-computer interface (BCI), researchers have developed the first-ever successful mind-controlled robotic arm exhibiting the ability to continuously track and follow a computer cursor. Being able to noninvasively control robotic devices using only thoughts will have broad applications, in particular benefiting the lives of paralyzed patients and those with movement disorders. BCIs have been shown to achieve good performance for controlling robotic devices using only the signals sensed from brain implants.

Ian Goodfellow

A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. Former Google engineer is developing a god based on AI. A machine has figured out Rubik’s Cube all by itself. The Artificial Intelligence that Composes Like the Beatles and Writes Like J. K. Rowling. One of the fathers of AI is worried about its future. A radical new neural network design could overcome big challenges in AI. Clever AI Hid Data From Its Creators to Cheat at Tasks They Gave It. Artificial intelligence turns brain activity into speech. Inside the world of AI that forges beautiful art and terrifying deepfakes.

The Risks and Rewards of Artificial Intelligence. Canada and France plan an international panel to assess AI’s dangers. Establishing an AI code of ethics will be harder than people think. Inside the world of AI that forges beautiful art and terrifying deepfakes. An AI physicist can derive the natural laws of imagined universes. The first piece of AI-generated art to come to auction. Researchers Just Turned On the World's Most Powerful Computer. The Great AI Paradox. AI Has Beaten Humans at Lip-reading. Google Cofounder Sergey Brin Warns of AI's Dark Side. The Limits of Artificial Intelligence and Deep Learning. This AI Learns Your Fashion Sense and Invents Your Next Outfit. AI-controlled brain implants help improve people’s memory. Artificial Intelligence: The Complete Guide. Don’t Make Artificial Intelligence Artificially Stupid in the Name of Transparency.

The Great AI Paradox. Here are the secrets to useful AI, according to designers. China wants to make the chips that will add AI to any gadget. Philosophical Disquisitions. MIT Technology Review Events - EmTech Digital. A Game of Civilization May Help People Understand AI’s Existential Threat. What Can AI Experts Learn from Buddhism? A New Approach to Machine-Learning Ethics Aims to Find Out. Google’s AI can create better machine-learning code than the researchers who made it. How AI Will Keep You Healthy. Neural Networks Are Learning What to Remember and What to Forget. Google Has Released an AI Tool That Makes Sense of Your Genome. Artificial Intelligence Seeks An Ethical Conscience. In just 4 hours, Google's AI mastered all the chess knowledge in history.

AI Can Be Made Legally Accountable for Its Decisions. Google’s Artificial-Intelligence Wizard Unveils a New Twist on Neural Networks. How We Feel About Robots That Feel. Put Humans at the Center of AI - MIT Technology Review. Six Oddities of Artificial Intelligence. AI on Culture. Clever Machines Learn How to Be Curious (And Play Super Mario Bros.) AI Shouldn’t Believe Everything It Hears - MIT Technology Review. Graphic Design Is About to Be Upended By AI. We Just Created an Artificial Synapse That Can Learn Autonomously. A Top Poker-Playing Algorithm Is Cleaning Up in China - MIT Technology Review.

When artificial intelligence meets human stupidity - Olga Russakovsky. AI Learns Sexism Just by Studying Photographs - MIT Technology Review. Elon Musk Is Very Freaked Out by This Artificial Intelligence System's Victory Over Humans. How Moss Helped Machine Vision Overcome an Achilles’ Heel - MIT Technology Review. Real or Fake? AI Is Making It Very Hard to Know - MIT Technology Review.

China: AI chatbot disappears from the web after criticizing the party - Bankier.pl. Google's new AI has learned to become "highly aggressive" in stressful situations. 7 Reasons You Should Embrace, Not Fear, Artificial Intelligence. Artificial Intelligence Is Already a Better Artist Than You Are. Google Created an AI That Can Learn Almost as Fast as a Human. In 2047, Computers Will Have An IQ Of Over 10,000.

Google’s Artificial Intelligence Learns “Highly Aggressive” Behaviour & Betrayal Pay Off – Collective Evolution. The Government Isn’t Doing Enough to Solve Big Problems with AI. AI Begins to Understand the 3-D World. Neural Network Learns to Identify Criminals by Their Faces. Google’s AI translation tool seems to have invented its own secret internal language.