Stanford Artificial Intelligence Laboratory

Human cues used to improve computer user-friendliness Lijun Yin wants computers to understand inputs from humans that go beyond the traditional keyboard and mouse. "Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist. "Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does." Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. To some extent, that's already possible. Yin says the next step would be enabling the computer to recognize a user's emotional state. "Computers only understand zeroes and ones," Yin says. He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Imagine if a computer could understand when people are in pain.

Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations | SpringerLink A holy grail of computer vision is the complete understanding of visual scenes: a model that is able to name and detect objects, describe their attributes, and recognize their relationships. Understanding scenes would enable important applications such as image search, question answering, and robotic interactions. Much progress has been made in recent years toward this goal, including image classification (Perronnin et al. 2010; Simonyan and Zisserman 2014; Krizhevsky et al. 2012; Szegedy et al. 2015) and object detection (Girshick et al. 2014; Sermanet et al. 2013; Girshick 2015; Ren et al. 2015b). An important contributing factor is the availability of large amounts of data, which drive the statistical models underpinning today's advances in computational visual understanding. While this progress is exciting, we are still far from the goal of comprehensive scene understanding. An image often depicts a rich scene that cannot be fully described in one summarizing sentence.
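
Visual Genome operationalizes this by densely annotating each image with objects, their attributes, and pairwise relationships, in other words a scene graph. A minimal sketch of that data structure in Python (the class and field names here are illustrative, not the actual Visual Genome schema):

```python
from dataclasses import dataclass, field

# Illustrative scene-graph structures; names are hypothetical,
# not Visual Genome's actual annotation schema.
@dataclass
class SceneObject:
    name: str                                        # e.g. "dog"
    attributes: list = field(default_factory=list)   # e.g. ["brown"]

@dataclass
class Relationship:
    subject: SceneObject
    predicate: str                                   # e.g. "chasing"
    obj: SceneObject

# A dense description of one image region: a brown dog chasing a red ball.
dog = SceneObject("dog", ["brown"])
ball = SceneObject("ball", ["red"])
graph = [Relationship(dog, "chasing", ball)]

for r in graph:
    print(f"{r.subject.name} --{r.predicate}--> {r.obj.name}")
```

A graph like this captures far more of the scene than one summarizing sentence: each object, attribute, and relationship is a separate queryable fact.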

Artificial intelligence creeps nearer via bee algorithms and crowdsourcing Yet crowdsourcing can be extremely effective, as MIT's Riley Crane showed in answering DARPA's challenge to find 10 weather balloons moored around the US. The MIT team used social networks and a pyramid of financial incentives to recruit volunteers, their friends, and friends of friends to report sightings, and won by finding all 10 within nine hours. "Not all hard problems can be solved by aggregation," Crane cautioned. "This is a toy problem," he said, "but it's still starting to show some of the possibilities of what we're going to be able to do in future." Among the other interesting approaches was that of the MIT Media Lab's Alexander Wissner-Gross, who argues that if a planet-scale superhuman intelligence emerges, it will most likely come from either the quantitative finance or advertising industries. Exactly how much AI should resemble humans is a long-running debate. Yet for centuries, explained Sharon Bertsch McGrayne, author of The Theory That Wouldn't Die, mentioning Bayes was career suicide.
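
The "pyramid of financial incentives" paid not just the balloon finder but the whole recruitment chain: in the MIT team's published scheme, $2,000 went to the finder and the reward halved at each step up the chain, so the total per balloon stayed bounded. A toy sketch of that payout rule (the function name and example people are mine):

```python
def referral_payouts(chain, finder_reward=2000.0):
    """Pay the balloon finder, then halve the reward at each step
    up the recruitment chain (finder comes first in `chain`)."""
    payouts = {}
    reward = finder_reward
    for person in chain:
        payouts[person] = reward
        reward /= 2
    return payouts

# Alice found a balloon; Bob recruited Alice; Carol recruited Bob.
print(referral_payouts(["Alice", "Bob", "Carol"]))
# {'Alice': 2000.0, 'Bob': 1000.0, 'Carol': 500.0}
```

Because the payouts form a geometric series, the cost per balloon never exceeds twice the finder's reward no matter how long the chain, while everyone in the chain still has an incentive to recruit.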

Memristor Processor Solves Mazes Memristors are the fourth fundamental building block of electronic circuits, after resistors, capacitors and inductors. They were famously predicted by Leon Chua in 1971 but only demonstrated almost 40 years later, in 2008, at HP Labs in Palo Alto, California. Memristors are resistors that "remember" the state they were in, which changes according to the current that has passed through them. They are expected to revolutionise the design and capabilities of electronic circuits and may even make possible brain-like architectures in silicon, since neurons behave like memristors. Today, we see one of the first revolutionary circuits, thanks to Yuriy Pershin at the University of South Carolina and Massimiliano Di Ventra at the University of California, San Diego, two pioneers in this field. Mazes are a class of graphical puzzles in which, given an entrance point, one must find the exit through an intricate succession of paths, most of which lead to dead ends, with only one, or a few, correctly "solving" the puzzle.
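
In the Pershin and Di Ventra scheme, roughly, each maze corridor becomes a memristive link in a network; applying a voltage across the entrance and exit changes the state of the memristors lying on solution paths, so the maze is solved in parallel. For contrast, here is a minimal sequential analogue of the same maze-as-graph idea, a breadth-first shortest-path search (the grid encoding below is illustrative):

```python
from collections import deque

# 0 = corridor, 1 = wall; entrance top-left, exit bottom-right.
MAZE = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def solve(maze, start=(0, 0), goal=(3, 3)):
    """Breadth-first search: return the corridor cells on the
    shortest entrance-to-exit path, or None if no path exists."""
    rows, cols = len(maze), len(maze[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:                       # reconstruct the path
            path, cell = [], goal
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

print(solve(MAZE))
```

Where BFS visits cells one at a time, the memristor network effectively explores every path simultaneously, which is the source of its claimed speedup.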

ILSVRC2017 This challenge evaluates algorithms for object localization/detection from images/videos at scale. News: Jun 25, 2017: the submission server for VID is open, additional train/val/test images for VID are available now, and the VID deadline is extended to July 7, 2017, 5pm PDT. Tentative timetable: Mar 31, 2017: development kit, data, and registration made available. Jun 30, 2017, 5pm PDT: submission deadline. July 5, 2017: challenge results released. July 26, 2017: the most successful and innovative teams present at the CVPR 2017 workshop. Main challenge I: Object localization. The data for the classification and localization tasks remain unchanged from ILSVRC 2012. In this task, given an image, an algorithm produces 5 class labels $c_i, i=1,\dots,5$ in decreasing order of confidence and 5 bounding boxes $b_i, i=1,\dots,5$, one for each class label. The ground truth for the image is a set of $n$ class labels $C_k, k=1,\dots,n$, each with a bounding box $B_k$. Let $d(c_i,C_k) = 0$ if $c_i = C_k$ and 1 otherwise.
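
Under this definition, the per-image localization error averages, over the ground-truth labels, the best prediction's combined label-and-box mismatch: $e = \frac{1}{n}\sum_k \min_i \max\big(d(c_i,C_k),\, f(b_i,B_k)\big)$, where $f(b_i,B_k)=0$ when the two boxes overlap by more than 50% IoU and 1 otherwise. A minimal Python sketch of that reading (function names are mine, and real ILSVRC ground truth can have several boxes per label, which this simplification ignores):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def loc_error(preds, truths, thresh=0.5):
    """preds: top-5 list of (label, box); truths: list of (label, box).
    Computes e = (1/n) * sum_k min_i max(d(c_i, C_k), f(b_i, B_k))."""
    errs = []
    for C, B in truths:
        best = min(
            max(0 if c == C else 1,               # d(c_i, C_k)
                0 if iou(b, B) > thresh else 1)   # f(b_i, B_k)
            for c, b in preds
        )
        errs.append(best)
    return sum(errs) / len(truths)
```

A prediction therefore only counts when both the class label matches and its box overlaps the ground truth sufficiently; getting either one wrong contributes a full error of 1 for that ground-truth object.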

Programmed DNA Robot Goes Where Scientists Tell It | Nanotechnology | LiveScience A tiny robot made from strands of DNA could pave the way for mini-machines that can dive into the human body to perform surgeries, among other futuristic applications. While DNA-based robots have been made before, this latest real-life micromachine is the first one that researchers have successfully programmed to follow instructions on where to move. Once assembled, the robot can take multiple steps without any outside help, according to lead researcher Andrew Turberfield, a professor at the University of Oxford. "Turberfield's group has figured out a beautiful way to automate the movement of a strand of DNA along a track," said William Sherman, an associate scientist at Brookhaven National Laboratory, who was not involved in the study. When thinking about robots, many of us picture humanlike machines with metal moving parts, like Rosie from "The Jetsons." Enter the DNA molecule.

Fei-Fei Li, Ph.D. | Professor, Stanford University Dr. Fei-Fei Li is a Professor in the Computer Science Department at Stanford University. She received her Ph.D. from the California Institute of Technology and a B.S. in Physics from Princeton University. Fei-Fei is currently Co-Director of the Stanford Institute for Human-Centered AI (HAI), a Stanford University institute created to advance AI research, education, policy, and practice to benefit humanity by bringing together interdisciplinary scholarship from across the university. Prior to this, she served as Director of the Stanford AI Lab from 2013 to 2018. She is also Co-Director and Co-PI of the Stanford Vision and Learning Lab, where she works with students and colleagues worldwide to build algorithms that enable computers and robots to see and think, and to conduct cognitive and neuroimaging experiments on how brains see and think.

Researchers Give Robots the Capability for Deceptive Behavior Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner look on as the black robot deceives the red robot into thinking it is hiding down the left corridor. (Photo credit: Gary Meek) A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. While this sounds like a scene from one of the Terminator movies, it's actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed examination of robot deception. "We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine, and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.
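
Wagner and Arkin framed the decision to deceive with interdependence theory: deception is worth attempting only when the encounter involves both conflict (the other agent's goal opposes yours) and dependence (its actions affect your outcome). A toy sketch of that gating logic plus the false-trail choice from the hiding experiment (this is a heavily simplified illustration, not the published algorithm):

```python
import random

def should_deceive(conflict: bool, dependence: bool) -> bool:
    """Interdependence-theory gate: deception is warranted only
    when the situation involves both conflict and dependence."""
    return conflict and dependence

def pick_false_trail(corridors, hide_in):
    """Leave evidence (e.g. knocked-over markers) in some corridor
    other than the one actually used for hiding."""
    decoys = [c for c in corridors if c != hide_in]
    return random.choice(decoys)

corridors = ["left", "center", "right"]
hide_in = "right"
if should_deceive(conflict=True, dependence=True):
    print("knock over markers in:", pick_false_trail(corridors, hide_in))
    print("actually hide in:", hide_in)
```

The gate matters as much as the trick: a robot that deceives indiscriminately would quickly lose credibility, so the published work spends most of its effort deciding *when* deception is appropriate.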

Next Big Test for AI: Making Sense of the World - MIT Technology Review A few years ago, a breakthrough in machine learning suddenly enabled computers to recognize objects shown in photographs with unprecedented, almost spooky, accuracy. The question now is whether machines can make another leap, by learning to make sense of what's actually going on in such images. A new image database, called Visual Genome, could push computers toward this goal, and help gauge the progress of computers attempting to better understand the real world. Teaching computers to parse visual scenes is fundamentally important for artificial intelligence. Visual Genome was developed by Fei-Fei Li, a professor who specializes in computer vision and who directs the Stanford Artificial Intelligence Lab, together with several colleagues. Li and colleagues previously created ImageNet, a database containing more than a million images tagged according to their contents. Visual Genome isn't the only complex image database out there for researchers to experiment with.
