Christopher Manning
Professor of Linguistics and Computer Science
Natural Language Processing Group, Stanford University

Chris Manning works on systems and formalisms that can intelligently process and produce human languages. His research concentrates on probabilistic models of language and statistical natural language processing, including text understanding, text mining, machine translation, information extraction, named entity recognition, part-of-speech tagging, probabilistic parsing, semantic role labeling, syntactic typology, computational lexicography, and other topics in computational linguistics and machine learning.

Brief Bio: I'm Australian ("I come from a land of wide open spaces ...").
Papers: Most of my papers are available online in my publication list.
Books: My new book, Introduction to Information Retrieval, with Hinrich Schütze and Prabhakar Raghavan, is now available in print.
Conferences and Talks: A few of my talks are available online.
Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations

A holy grail of computer vision is the complete understanding of visual scenes: a model that is able to name and detect objects, describe their attributes, and recognize their relationships. Understanding scenes would enable important applications such as image search, question answering, and robotic interactions. Much progress has been made in recent years towards this goal, including image classification (Perronnin et al. 2010; Simonyan and Zisserman 2014; Krizhevsky et al. 2012; Szegedy et al. 2015) and object detection (Girshick et al. 2014; Sermanet et al. 2013; Girshick 2015; Ren et al. 2015b). An important contributing factor is the availability of a large amount of data that drives the statistical models that underpin today's advances in computational visual understanding. While the progress is exciting, we are still far from reaching the goal of comprehensive scene understanding. An image is often a rich scene that cannot be fully described in one summarizing sentence.
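The dense annotations described above — objects, their attributes, and their relationships — can be modeled as a scene graph: objects as nodes, attributes attached to nodes, and relationships as labeled edges. A minimal sketch of that data structure (the class and field names here are illustrative, not Visual Genome's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str                                        # e.g. "man"
    attributes: list = field(default_factory=list)   # e.g. ["standing"]

@dataclass
class Relationship:
    subject: SceneObject
    predicate: str                                   # e.g. "holding"
    obj: SceneObject

# Build a tiny scene graph for an image of a standing man holding a red frisbee.
man = SceneObject("man", ["standing"])
frisbee = SceneObject("frisbee", ["red"])
graph = [Relationship(man, "holding", frisbee)]

# Render each relationship as a (subject, predicate, object) triple.
triples = [(r.subject.name, r.predicate, r.obj.name) for r in graph]
print(triples)  # [('man', 'holding', 'frisbee')]
```

A single image can carry dozens of such triples, which is what lets a dense annotation capture far more than one summarizing sentence.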
Computational humor
Computational humor is a relatively new area, with the first dedicated conference organized in 1996.

Joke generators
Pun generation
One approach to the analysis of humor is the classification of jokes. A further step is the attempt to generate jokes based on the rules that underlie that classification:
Q: What is the difference between leaves and a car?
A: One you brush and rake, the other you rush and brake.
Q: What do you call a strange market?
A: A bizarre bazaar.
Since then the approach has been improved; the latest report, dated 2007, describes the STANDUP joke generator, implemented in the Java programming language. The STANDUP generator was tested on children as part of a study of its usability for developing the language skills of children with communication disabilities, e.g. due to cerebral palsy.

Other
Stock and Strapparava described a program to generate funny acronyms.
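The "bizarre bazaar" riddle illustrates the rule these generators exploit: substitute a near-homophone into a fixed question/answer template, disguising the pun word in the question. A toy sketch of that idea (the homophone table, synonym table, and template are invented for illustration and are not STANDUP's actual lexicon or grammar):

```python
# Toy template-based pun generator in the spirit of riddle generators:
# pair a noun with a sound-alike adjective, then slot both into a fixed
# "What do you call ...?" template.
HOMOPHONES = {"bazaar": "bizarre"}      # noun -> sound-alike adjective
DEFINITIONS = {"bazaar": "market"}      # noun -> plain-language gloss
SYNONYMS = {"bizarre": "strange"}       # used to hide the pun in the question

def make_pun(word: str) -> str:
    """Generate a question/answer riddle from a homophone pair."""
    adj = HOMOPHONES[word]
    q_adj = SYNONYMS[adj]               # disguise the pun word in the question
    noun = DEFINITIONS[word]
    return f"Q: What do you call a {q_adj} {noun}? A: A {adj} {word}."

print(make_pun("bazaar"))
# Q: What do you call a strange market? A: A bizarre bazaar.
```

Real systems like STANDUP replace these hand-made tables with a large pronunciation dictionary and lexical database, but the template-filling step is the same shape.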
Artificial intelligence creeps nearer via bee algorithms and crowdsourcing

Yet crowdsourcing can be extremely effective, as MIT's Riley Crane showed in answering DARPA's challenge to find 10 weather balloons moored around the US. The MIT team used social networks and a pyramid of financial incentives to recruit volunteers, their friends, and friends of friends to report sightings - and won by finding all 10 within nine hours. "Not all hard problems can be solved by aggregation," he said. "This is a toy problem," he added, "but it's still starting to show some of the possibilities of what we're going to be able to do in future." Other interesting approaches included that of the MIT Media Lab's Alexander Wissner-Gross, who argues that if a planet-scale superhuman intelligence emerges it will most likely be from either the quantitative finance or advertising industries. Exactly how much AI should resemble humans is a long-running debate. Yet for centuries, explained Sharon Bertsch McGrayne, author of The Theory That Wouldn't Die, mentioning Bayes was career suicide.
Oussama Khatib
Professor of Computer Science
Artificial Intelligence Laboratory
Stanford University
Stanford, CA 94305-9010
Gates Building, Room 144
Phone: +1 650 723 9753
Fax: +1 650 725 1449

Contact
Unfortunately, I get a large amount of email.
Scheduling Appointments: To schedule an appointment, please send me an e-mail message with the specific times you are available.
Graduate Admissions: All the information you need on applying for admission to CS graduate programs is available here.
ILSVRC2017

Introduction
This challenge evaluates algorithms for object localization/detection from images/videos at scale.

News
Jun 25, 2017: The submission server for VID is open, new additional train/val/test images for VID are available now, and the deadline for VID is extended to July 7, 2017, 5pm PDT.

Tentative Timetable
Mar 31, 2017: Development kit, data, and registration made available.
Jun 30, 2017, 5pm PDT: Submission deadline.
July 5, 2017: Challenge results will be released.
July 26, 2017: Most successful and innovative teams present at the CVPR 2017 workshop.

Main Challenges
I: Object localization
The data for the classification and localization tasks will remain unchanged from ILSVRC 2012. In this task, given an image, an algorithm will produce 5 class labels $c_i, i=1,\dots,5$ in decreasing order of confidence and 5 bounding boxes $b_i, i=1,\dots,5$, one for each class label. Let $d(c_i,C_k) = 0$ if $c_i = C_k$ and 1 otherwise.
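Concretely, top-5 localization scoring counts an image as correct if any of the five $(c_i, b_i)$ predictions matches a ground-truth object: the label must agree ($d = 0$) and the box must overlap the ground-truth box, conventionally by intersection-over-union of at least 0.5. A sketch of that per-image scoring rule (function names and the 0.5 threshold are my illustration of the convention; the development kit contains the official evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def image_error(predictions, ground_truth, thresh=0.5):
    """Top-5 localization error for one image: 0 if some predicted
    (label, box) pair matches some ground-truth instance, else 1."""
    for label, box in predictions:           # up to 5 (c_i, b_i) pairs
        for gt_label, gt_box in ground_truth:
            if label == gt_label and iou(box, gt_box) >= thresh:
                return 0                     # d = 0 and overlap suffices
    return 1

preds = [("dog", (10, 10, 50, 50)), ("cat", (0, 0, 20, 20))]
truth = [("dog", (12, 12, 48, 52))]
print(image_error(preds, truth))  # 0: the "dog" box overlaps well
```

The challenge metric then averages this error over all test images, so an algorithm is only penalized when none of its five guesses both names and localizes a ground-truth object.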
Fei-Fei Li, Ph.D. | Stanford University
Dr. Fei-Fei Li is a Professor at the Computer Science Department at Stanford University. She received her Ph.D. degree from California Institute of Technology, and a B.S. in Physics from Princeton University. Fei-Fei is currently the Co-Director of the Stanford Human-Centered AI (HAI) Institute, a Stanford University institute to advance AI research, education, policy, and practice to benefit humanity by bringing together interdisciplinary scholarship across the university. Prior to this, Fei-Fei served as the Director of the Stanford AI Lab from 2013 to 2018. She is also a Co-Director and Co-PI of the Stanford Vision and Learning Lab, where she works with the most brilliant students and colleagues worldwide to build smart algorithms that enable computers and robots to see and think, as well as to conduct cognitive and neuroimaging experiments to discover how brains see and think.
Next Big Test for AI: Making Sense of the World - MIT Technology Review
A few years ago, a breakthrough in machine learning suddenly enabled computers to recognize objects shown in photographs with unprecedented—almost spooky—accuracy. The question now is whether machines can make another leap, by learning to make sense of what’s actually going on in such images. A new image database, called Visual Genome, could push computers toward this goal, and help gauge the progress of computers attempting to better understand the real world. Teaching computers to parse visual scenes is fundamentally important for artificial intelligence. Visual Genome was developed by Fei-Fei Li, a professor who specializes in computer vision and who directs the Stanford Artificial Intelligence Lab, together with several colleagues. Li and colleagues previously created ImageNet, a database containing more than a million images tagged according to their contents. Visual Genome isn’t the only complex image database out there for researchers to experiment with.
Vision Recognition and Artificial Intelligence
Computer vision is the science and technology of obtaining models, meaning, and control information from visual data. Its two main branches are computational vision and machine vision. Computational vision is concerned with recording and analyzing visual perception in order to understand it; machine vision applies those findings for the benefit of people, animals, the environment, and so on. Computer vision has strongly influenced the field of artificial intelligence. The humanoid robot ASIMO is one example of how computer vision contributes to artificial intelligence. AI systems also use computer vision to communicate with humans and to recognize handwritten text and drawings. Another important application is passive observation and analysis.
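Handwriting recognition gives a concrete feel for how a vision system turns pixels into meaning. One classical approach is nearest-neighbor matching of an input image against stored templates. A self-contained toy sketch under that assumption (the 5x5 bitmaps and labels are invented for illustration; a real system would learn from thousands of labeled samples):

```python
# Toy handwritten-character recognizer: nearest neighbor over 5x5 bitmaps,
# where '#' marks an inked pixel and '.' a blank one.
TEMPLATES = {
    "1": ["..#..",
          ".##..",
          "..#..",
          "..#..",
          ".###."],
    "0": [".###.",
          "#...#",
          "#...#",
          "#...#",
          ".###."],
}

def distance(a, b):
    """Hamming distance between two bitmaps (count of differing pixels)."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def recognize(bitmap):
    """Return the label of the template closest to the input bitmap."""
    return min(TEMPLATES, key=lambda label: distance(TEMPLATES[label], bitmap))

scribble = ["..#..",
            ".##..",
            "..#..",
            "..#..",
            "..#.."]        # a slightly sloppy "1" with no base stroke
print(recognize(scribble))  # 1
```

The same compare-against-known-patterns idea, scaled up with learned features instead of raw pixels, underlies modern handwriting and object recognizers.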