MIT Computer Science and Artificial Intelligence Laboratory

http://www.csail.mit.edu/

Futurist: We'll someday accept computers as human. Ray Kurzweil spoke Monday to a crowd of more than 3,000 at the South by Southwest Interactive conference in Austin, Texas, where he said of sci-fi fears about computers, "I don't see it as them vs. us." Austin, Texas (CNN) -- Any author or filmmaker seeking ideas for a sci-fi yarn about the implications of artificial intelligence -- good or bad -- would be smart to talk to Ray Kurzweil. Kurzweil, the acclaimed inventor and futurist, believes that humans and technology are blurring -- note the smartphone appendages in almost everyone's hand -- and will eventually merge.

The AI Revolution: Road to Superintelligence - Wait But Why. PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.) Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future.

NARS (narswang). What makes NARS different from conventional reasoning systems is its ability to learn from its experience and to work with insufficient knowledge and resources. NARS attempts to uniformly explain and reproduce many cognitive facilities, including reasoning, learning, planning, reacting, perceiving, categorizing, prioritizing, remembering, decision making, and so on. The research results include a theory of intelligence, a formal model of the theory, and a computer implementation of the model.

MIT management professor Tom Malone on collective intelligence and the “genetic” structure of groups. Do groups have genetic structures? If so, can they be modified? Those are two central questions for Thomas Malone, a professor of management and an expert in organizational structure and group intelligence at MIT’s Sloan School of Management. In a talk this week at IBM’s Center for Social Software, Malone explained the insights he’s gained through his research and as the director of the MIT Center for Collective Intelligence, which he launched in 2006 in part to determine how collective intelligence might be harnessed to tackle problems — climate change, poverty, crime — that are generally too complex to be solved by any one expert or group.

Nutch: Latest step-by-step installation guide for dummies (Nutch 0.9), by Peter P. Wang, Zillionics LLC.

Neuro-Evolving Robotic Operatives. Neuro-Evolving Robotic Operatives, or NERO for short, is a unique computer game that lets you play with adapting intelligent agents hands-on. Evolve your own robot army by tuning their artificial brains for challenging tasks, then pit them against your friends' teams in online competitions! New features in NERO 2.0 include an interactive game mode called territory capture, as well as a new user interface and more extensive training tools. NERO is a result of an academic research project in artificial intelligence, based on the rtNEAT algorithm. It is also a platform for future research on intelligent agent technology. The NERO project is run by the Neural Networks Group of the Department of Computer Sciences at the University of Texas at Austin.
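
The passage above names rtNEAT as NERO's underlying algorithm. As a rough illustration of the neuroevolution idea only (not rtNEAT itself, which also evolves network structure in real time), the Python sketch below evolves the weights of small fixed-topology neural-network "brains" against an invented steering task. The task, network shape, population size, and mutation settings are all assumptions made up for this example.

    import numpy as np

    rng = np.random.default_rng(1)

    def act(weights, sensors):
        # One hidden layer: 3 sensor inputs -> 4 tanh hidden units -> 1 output.
        w1, w2 = weights[:12].reshape(3, 4), weights[12:]
        return np.tanh(np.tanh(sensors @ w1) @ w2)

    # Invented evaluation task: imitate an "ideal" steering response to 3 sensors.
    sensors = rng.standard_normal((50, 3))
    target = np.tanh(sensors @ np.array([1.0, -2.0, 0.5]))

    def fitness(weights):
        return -np.mean((act(weights, sensors) - target) ** 2)

    # Evolve 30 weight vectors: keep the 10 fittest each generation and
    # refill the population with mutated copies of them.
    population = [rng.standard_normal(16) for _ in range(30)]
    for generation in range(40):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = parents + [p + 0.1 * rng.standard_normal(16)
                                for p in parents for _ in range(2)]

    print("best fitness after evolution:", fitness(population[0]))

The same select-and-mutate loop is the core of evolving game agents; rtNEAT adds structural mutations and replaces agents continuously while the game runs.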

Top notch AI system about as smart as a four-year-old, lacks commonsense. Researchers have found that an AI system has an average IQ of a four-year-old child. Those who saw IBM’s Watson defeat former winners on Jeopardy! in 2011 might be forgiven for thinking that artificially intelligent computer systems are a lot brighter than they are. While Watson was able to cope with the highly stylized questions posed during the quiz, AI systems are still left wanting when it comes to commonsense.

Computer learns language by playing games. Computers are great at treating words as data: Word-processing programs let you rearrange and format text however you like, and search engines can quickly find a word anywhere on the Web. But what would it mean for a computer to actually understand the meaning of a sentence written in ordinary English — or French, or Urdu, or Mandarin? One test might be whether the computer could analyze and follow a set of instructions for an unfamiliar task. And indeed, in the last few years, researchers at MIT’s Computer Science and Artificial Intelligence Lab have begun designing machine-learning systems that do exactly that, with surprisingly good results.

Common Sense Computing Initiative. ConceptNet aims to give computers access to common-sense knowledge, the kind of information that ordinary people know but usually leave unstated. The data in ConceptNet is being collected from ordinary people who contributed it on sites like Open Mind Common Sense. ConceptNet represents this data in the form of a semantic network, and makes it available to be used in natural language processing and intelligent user interfaces. ConceptNet is an open source project, with a Python implementation and a REST API that anyone can use to add computational common sense to their own project. A great tool to help you use ConceptNet in your software is Divisi. [Figure: some of the nodes and links in ConceptNet.]
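
As a quick illustration of what "adding computational common sense to your own project" can look like, here is a minimal Python sketch that asks ConceptNet about a concept over its REST API. The endpoint URL and JSON field names are assumptions based on the public ConceptNet 5 web API and may differ between ConceptNet versions.

    import requests

    def common_sense_edges(concept, limit=5):
        # Assumed public ConceptNet 5 endpoint; adjust for other versions or hosts.
        url = "http://api.conceptnet.io/c/en/" + concept
        data = requests.get(url, params={"limit": limit}).json()
        for edge in data.get("edges", []):
            start = edge["start"]["label"]
            rel = edge["rel"]["label"]
            end = edge["end"]["label"]
            weight = edge.get("weight", 1.0)
            print(f"{start} --{rel}--> {end}  (weight {weight:.2f})")

    # Example: print a few common-sense assertions involving "coffee".
    common_sense_edges("coffee")

Each returned edge is one assertion from the semantic network (a start concept, a relation, an end concept, and a confidence weight), which is the form in which applications consume the knowledge.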

A Non-Mathematical Introduction to Using Neural Networks. The goal of this article is to help you understand what a neural network is, and how it is used. Most people, even non-programmers, have heard of neural networks. There are many science fiction overtones associated with them. And like many things, sci-fi writers have created a vast, but somewhat inaccurate, public idea of what a neural network is. Most laypeople think of neural networks as a sort of artificial brain.

Meet the man who has been at the forefront of AI innovation for three decades. Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram. To create one of those 3D holographic images, you record how countless beams of light bounce off an object and then you store these little bits of information across a vast database. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way. Rather than keeping them in a single location, it spreads them across its enormous network of neurons.

Open Mind Common Sense. Open Mind Common Sense (OMCS) is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. Since its founding in 1999, it has accumulated more than a million English facts from over 15,000 contributors, in addition to knowledge bases in other languages. Much of OMCS's software is built on three interconnected representations: the natural language corpus that people interact with directly, a semantic network built from this corpus called ConceptNet, and a matrix-based representation of ConceptNet called AnalogySpace that can infer new knowledge using dimensionality reduction.[1] The knowledge collected by Open Mind Common Sense has enabled research projects at MIT and elsewhere.[2][3][4][5][6] History: The project was the brainchild of Marvin Minsky, Push Singh, Catherine Havasi, and others.
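
To make the AnalogySpace idea concrete, the sketch below builds a tiny concept-by-feature matrix, reduces its rank with an SVD, and reads off a predicted score for a pair that was never asserted directly. It is a toy illustration of dimensionality-reduction-based inference, not the project's actual Divisi code; the concepts, features, and assertion values are invented for the example.

    import numpy as np

    concepts = ["dog", "cat", "car"]
    features = ["IsA/animal", "HasA/tail", "CapableOf/drive"]

    # 1.0 = asserted by contributors, 0.0 = never stated.
    A = np.array([[1.0, 1.0, 0.0],   # dog
                  [1.0, 0.0, 0.0],   # cat ("cat HasA tail" left unstated)
                  [0.0, 0.0, 1.0]])  # car

    # Keep only the k strongest dimensions and rebuild the matrix.
    k = 2
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # The rebuilt score for (cat, HasA/tail) is now positive (about 0.45)
    # because "cat" patterns like "dog": rank reduction lets the system
    # guess plausible knowledge that nobody typed in.
    print(A_hat[concepts.index("cat"), features.index("HasA/tail")])

The real system works the same way at scale, with tens of thousands of concepts and features and a sparse matrix, but the inference step is still a truncated decomposition followed by reading off reconstructed scores.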

Pranav Mistry. Pranav Mistry (born 1981) is an Indian computer scientist and inventor. At present, he is the head of the Think Tank Team and Vice President of Research at Samsung. He is best known for his work on SixthSense and the Samsung Galaxy Gear.[1] His research interests include wearable computing, augmented reality, ubiquitous computing, gestural interaction, AI, machine vision, collective intelligence, and robotics.

OVERVIEW OF NEURAL NETWORKS. This installment addresses the subject of computer models of neural networks and the relevance of those models to the functioning brain. The computer field of Artificial Intelligence is a vast, bottomless pit which would lead this series too far from biological reality -- and too far into speculation -- to be included. Neural network theory will be the singular exception, because the model is so persuasive and so important that it cannot be ignored. Neurobiology provides a great deal of information about the physiology of individual neurons as well as about the function of nuclei and other gross neuroanatomical structures. But understanding the behavior of networks of neurons is exceedingly challenging for neurophysiology, given current methods. Nonetheless, network behavior is important, especially in light of evidence for so-called "emergent properties", i.e., properties of networks that are not obvious from an understanding of neuron physiology.
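
As a concrete, if highly simplified, picture of an "emergent property," the Python sketch below implements a small Hopfield-style associative memory: each model neuron follows nothing more than a threshold rule, yet the network as a whole completes a corrupted pattern. The pattern count, corruption level, and update schedule are arbitrary choices for illustration, not parameters from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    patterns = np.sign(rng.standard_normal((3, 64)))  # three stored +/-1 patterns

    # Hebbian weights: sum of outer products, with no self-connections.
    W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
    np.fill_diagonal(W, 0.0)

    # Corrupt one stored pattern by flipping 15 of its 64 units.
    state = patterns[0].copy()
    state[rng.choice(64, size=15, replace=False)] *= -1.0

    # Each neuron's rule is trivial: take the sign of its weighted input.
    for _ in range(5):
        for i in rng.permutation(64):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0

    # The network as a whole typically settles back onto the stored pattern --
    # pattern completion is not visible in any single neuron's update rule.
    print("recovered stored pattern:", np.array_equal(state, patterns[0]))

The recovery of the stored pattern is exactly the kind of network-level behavior the passage calls an emergent property: it follows from the connectivity, not from anything special about an individual unit.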
