AI

A list of artificial intelligence tools you can use today — for personal use (1/3)

The Unreasonable Effectiveness of Recurrent Neural Networks. There's something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for image captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily chosen hyperparameters) started to generate very nice-looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times.

What made this result so shocking at the time was that the common wisdom held that RNNs were supposed to be difficult to train (with more experience I have in fact reached the opposite conclusion). Fast forward about a year: I'm training RNNs all the time and I have witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
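For the curious, here is a minimal sketch in Python/NumPy (not the post's actual code, and with bias terms omitted for brevity) of the vanilla RNN recurrence such a post builds on: the hidden state is updated from the previous state and the current one-hot character, then mapped to a distribution over the next character.

import numpy as np

hidden_size, vocab_size = 100, 65                        # sizes picked arbitrarily
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01    # input-to-hidden weights
Whh = np.random.randn(hidden_size, hidden_size) * 0.01   # hidden-to-hidden weights
Why = np.random.randn(vocab_size, hidden_size) * 0.01    # hidden-to-output weights

def rnn_step(x, h_prev):
    # one time step: combine the current input with the previous hidden state
    h = np.tanh(Wxh @ x + Whh @ h_prev)
    y = Why @ h                              # unnormalized scores over characters
    p = np.exp(y) / np.sum(np.exp(y))        # softmax: distribution over next char
    return h, p

h = np.zeros(hidden_size)
x = np.zeros(vocab_size); x[0] = 1.0         # one-hot encoding of some character
h, p = rnn_step(x, h)                        # p sums to 1 over the 65 characters

Sampling a character from p, feeding it back in as the next x, and repeating is all it takes to generate text one character at a time.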

Artificial General Intelligence in Second Life. Virtual worlds are the golden path to achieving Artificial General Intelligence and a positive Singularity, explained Dr Ben Goertzel, CEO of Novamente LLC and author of "The Hidden Pattern: A Patternist Philosophy of Mind," in his presentation "Artificial General Intelligence in Virtual Worlds," given at the Singularity Summit 2007 earlier this month. According to Goertzel, the Singularity is no longer a far-future idea. About a year ago Goertzel gave a talk titled "Ten Years to a Positive Singularity — If We Really, Really Try." The slide that opens this post was in Goertzel's presentation. It depicts an Archailect, or Archai, from the Orion's Arm science-fiction world — a mega-scale brain, a "sophont or sophont cluster that has grown so vast as to become a god-like entity."

What is the Singularity? Harnessing the wisdom of crowds in the quintessential rapid-prototyping environment for embodied virtual agents — Second Life — may well turn Artificial General Intelligence into an idea with traction.

The AI Revolution: Our Immortality or Extinction. Note: This is Part 2 of a two-part series on AI. Part 1 is here. "We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends." — Nick Bostrom. Welcome to Part 2 of the "Wait how is this possibly what I'm reading I don't get why everyone isn't talking about this" series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have as we thought about that.

Simplicity is key to co-operative robots -- ScienceDaily. A way of making hundreds -- or even thousands -- of tiny robots cluster to carry out tasks without using any memory or processing power has been developed by engineers at the University of Sheffield, UK.

The team, working in the Sheffield Centre for Robotics (SCentRo) in the University's Faculty of Engineering, has programmed extremely simple robots that are able to form a dense cluster without the need for complex computation, in a similar way to how a swarm of bees or a flock of birds is able to carry out tasks collectively. The work, published April 17, 2014 in the International Journal of Robotics Research, paves the way for robot 'swarms' to be used in, for example, the agricultural industry, where precision farming methods could benefit from the use of large numbers of very simple and cheap robots. Each robot uses just one sensor that tells it whether or not it can 'see' another robot in front of it. Video of the swarming robots can be seen at.
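As a rough illustration of how little computation this takes, here is a toy simulation sketch in Python (the parameters are our own guesses, not the paper's evolved controller): each robot maps its single binary sensor reading directly to a motion command, with no memory or communication.

import numpy as np

N, steps, arena = 50, 2000, 10.0
rng = np.random.default_rng(0)
pos = rng.uniform(0, arena, (N, 2))          # robot positions
heading = rng.uniform(0, 2 * np.pi, N)       # robot headings

def sees_robot(i):
    # True if any other robot lies within a narrow cone ahead of robot i
    d = pos - pos[i]
    dist = np.linalg.norm(d, axis=1)
    ang = (np.arctan2(d[:, 1], d[:, 0]) - heading[i] + np.pi) % (2 * np.pi) - np.pi
    return bool(((dist > 0) & (np.abs(ang) < 0.05)).any())   # ~3 degree cone

for _ in range(steps):
    for i in range(N):
        if sees_robot(i):
            heading[i] += 0.3                # robot ahead: turn in place
        else:
            heading[i] += 0.05               # nothing ahead: drift in a wide arc
            pos[i] += 0.02 * np.array([np.cos(heading[i]), np.sin(heading[i])])
    pos %= arena                             # wrap at the arena boundary
# with a well-chosen sensor-to-wheel mapping (the paper evolves it),
# the robots collapse into a single dense cluster over time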

PLOS. "Artificially evolved robots that efficiently self-organize tasks: Eliseo Ferrante and colleagues evolved complex robot behaviors using artificial evolution and detailed robotics simulations.. " ScienceDaily. ScienceDaily, 6 August 2015. <www.sciencedaily.com/releases/2015/08/150806144425.htm>. PLOS. (2015, August 6). Artificially evolved robots that efficiently self-organize tasks: Eliseo Ferrante and colleagues evolved complex robot behaviors using artificial evolution and detailed robotics simulations.. ScienceDaily. PLOS. Tiny robots inspired by pine cones -- ScienceDaily. Most efforts to develop bio-inspired robots center on mimicking the motions of animals: but plants move too -- even if most of their motions are so slow they can't be detected by the naked eye. The mechanism involved in plant movement is much simpler than that of animals using muscles.

To generate motion, plants and some seeds -- such as mimosa leaves, Venus flytraps and pine cones -- simply harness the supply or deprivation of water in plant tissues. The future of bio-inspired engineering and robotics will greatly benefit from lessons learned from plants, according to a group of Seoul National University researchers. During the American Physical Society's 68th Annual Meeting of the Division of Fluid Dynamics, Nov. 22-24, 2015, in Boston, they will share details about how studying plants enabled them to create tiny robots powered exclusively by changes in humidity. If environmental humidity increases, the bilayer bends because its two layers swell lengthwise by different amounts. Sounds too easy, right?
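The summary gives no numbers, but the classic Timoshenko bimorph result (assuming, as our own simplification, two layers of equal thickness and stiffness) suggests why this works: the curvature is kappa = 3 * delta_eps / (2 * h) for a differential swelling strain delta_eps across a film of total thickness h, so even a small mismatch bends a thin film visibly. A quick Python back-of-the-envelope check with hypothetical numbers:

# assumptions: 1% differential swelling, 50-micrometer total film thickness
delta_eps = 0.01                   # hypothetical extra swelling in the wet layer
h = 50e-6                          # hypothetical total film thickness, in meters
kappa = 3 * delta_eps / (2 * h)    # curvature, 1/m (equal-layer Timoshenko case)
radius = 1 / kappa                 # bending radius, m
print(f"curvature = {kappa:.0f} 1/m, radius = {radius * 1e3:.1f} mm")
# prints: curvature = 300 1/m, radius = 3.3 mm -- a visibly sharp bend
# from a mere 1% swelling mismatch, which is what such tiny robots exploit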

"There's an emerging body of research that suggests that avatars in virtual environments are an effective way to encourage people to be more healthy," said T. Franklin Waddell, a doctoral candidate in mass communications, Penn State. "What our study was trying to do was finding out why avatars have these effects and also to determine if avatars can encourage people to be healthy, particularly encourage those who might have rather low interest in exercising and healthy eating. " "Our other research has shown that customizing avatars can make users feel more agentic and take charge of their welfare," said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory, who worked with Waddell and Joshua Auriemma, a software engineer at DramaFever.

Inside Facebook’s Quest for Software That Understands You. The first time Yann LeCun revolutionized artificial intelligence, it was a false dawn. It was 1995, and for almost a decade, the young Frenchman had been dedicated to what many computer scientists considered a bad idea: that crudely mimicking certain features of the brain was the best way to bring about intelligent machines.

But LeCun had shown that this approach could produce something strikingly smart—and useful. Working at Bell Labs, he made software that roughly simulated neurons and learned to read handwritten text by looking at many different examples. Bell Labs’ corporate parent, AT&T, used it to sell the first machines capable of reading the handwriting on checks and written forms.

To LeCun and a few fellow believers in artificial neural networks, it seemed to mark the beginning of an era in which machines could learn many other skills previously limited to humans. It wasn't. "This whole project kind of disappeared on the day of its biggest success," says LeCun.

Single Artificial Neuron Taught to Recognize Hundreds of Patterns. Artificial intelligence is a field in the midst of rapid, exciting change. That's largely because of an improved understanding of how neural networks work and the creation of vast databases to help train them. The result is machines that have suddenly become better at things like face and object recognition, tasks in which humans have always held the upper hand (see "Teaching Machines to Understand Us"). But there's a puzzle at the heart of these breakthroughs. Although neural networks are ostensibly modeled on the way the human brain works, the artificial neurons they contain are nothing like the ones at work in our own wetware.

Artificial neurons, for example, generally have just a handful of synapses and entirely lack the short, branched nerve extensions known as dendrites and the thousands of synapses that form along them. In real neurons, proximal and distal dendrites each make thousands of connections, called synapses, to the axons of other nerve cells, and it is through changes at these synapses that conventional learning takes place.
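The proposal behind the headline, as we read it, is to give a single artificial neuron many independent dendrite-like segments, each acting as a threshold detector over a sparse binary input, so that one cell can recognize one pattern per segment. A simplified Python sketch (our own toy rendering of the idea, not the paper's model):

import numpy as np

input_size, n_segments, syn_per_seg, threshold = 2048, 200, 40, 15
rng = np.random.default_rng(1)
# each segment forms synapses onto a random subset of the input bits
segments = [rng.choice(input_size, syn_per_seg, replace=False)
            for _ in range(n_segments)]

def neuron_fires(active_bits):
    # active_bits: set of indices of the currently active inputs
    for seg in segments:
        overlap = sum(1 for s in seg if s in active_bits)
        if overlap >= threshold:     # this segment recognizes "its" pattern
            return True              # any matching segment fires the cell
    return False

# a sparse pattern sharing 20 bits with segment 0 triggers the neuron
pattern = set(segments[0][:20]) | set(rng.choice(input_size, 20))
print(neuron_fires(pattern))         # True

With 200 segments, this single cell recognizes hundreds of distinct sparse patterns, something a one-weight-vector artificial neuron cannot do.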

A bit about Neuro-computing (science).

Artificial Neural Networks for Beginners » Loren on the Art of MATLAB. Deep Learning is a very hot topic these days, especially in computer vision applications, and you probably see it in the news and get curious. Now the question is, how do you get started with it? Today's guest blogger, Toshi Takeuchi, gives us a quick tutorial on artificial neural networks as a starting point for your study of deep learning. Many of us tend to learn better with a concrete example, so let me give you a quick step-by-step tutorial to build intuition using the popular MNIST handwritten digit dataset, which comes in two files:

train.csv - training data
test.csv - test data for submission

Load the training and test data into MATLAB, assuming they were downloaded into the current folder:

tr = csvread('train.csv', 1, 0);     % read training data, skipping the header row
sub = csvread('test.csv', 1, 0);     % read test data for the submission file

The first column is the label that shows the correct digit for each sample in the dataset, and each row is a sample.

Data Preparation: you will be using the nprtool pattern recognition app from Neural Network Toolbox.

Deep Learning for NLP - NAACL 2013 Tutorial. A tutorial given at NAACL HLT 2013, based on an earlier tutorial given at ACL 2012 by Richard Socher, Yoshua Bengio, and Christopher Manning. By Richard Socher and Christopher Manning. Slides: NAACL2013-Socher-Manning-DeepLearning.pdf (24MB, 205 slides). Videos: Part 1 and Part 2 (sorry, Flash videos only). Abstract: Machine learning is everywhere in today's NLP, but by and large machine learning amounts to numerical optimization of weights for human-designed representations and features. References: all the references we referred to, in one PDF file. Further information: a very useful assignment for getting started with deep learning in NLP is to implement a simple window-based NER tagger, in the exercise we designed for the Stanford NLP class cs224N.
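For a flavor of what "window-based" means there, here is a minimal Python sketch (the names and sizes are our own, not the exercise's): to tag the middle word, concatenate the embeddings of a fixed window around it and feed the resulting vector to any classifier.

import numpy as np

vocab = {"<pad>": 0, "the": 1, "paris": 2, "museum": 3}   # toy vocabulary
emb_dim, window = 50, 1                    # one word of context on each side
E = np.random.randn(len(vocab), emb_dim) * 0.01           # embedding matrix

def window_features(tokens, i):
    # concatenated embeddings of tokens[i-window .. i+window], with padding
    padded = ["<pad>"] * window + tokens + ["<pad>"] * window
    idx = [vocab.get(w, 0) for w in padded[i:i + 2 * window + 1]]
    return np.concatenate([E[j] for j in idx])            # shape (3*emb_dim,)

x = window_features(["the", "paris", "museum"], 1)   # features for "paris"
print(x.shape)                                       # (150,) -> feed a classifier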

A network of artificial neurons learns to use human language: a computer simulation of a cognitive model made up entirely of artificial neurons learns to communicate through dialogue starting from a state of tabula rasa -- ScienceDaily. A group of researchers from the University of Sassari (Italy) and the University of Plymouth (UK) has developed a cognitive model, made up of two million interconnected artificial neurons, able to learn to communicate using human language starting from a state of "tabula rasa," only through communication with a human interlocutor. The model is called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning) and is described in an article published in the international scientific journal PLOS ONE. This research sheds light on the neural processes that underlie the development of language. How does our brain develop the ability to perform complex cognitive functions, such as those needed for language and reasoning? This is a question we all surely ask ourselves, and one to which researchers cannot yet give a complete answer.

An AI anthology: Tracking the rise of self-learning computers. Artificial intelligence methods have been around for decades, but the pace of innovation has picked up significantly over the past few years. This is especially true in areas such as computer vision, language processing and speech recognition, where new approaches have greatly improved computers' ability to learn — to really understand what they see, hear and read. Over the years, Gigaom has covered many attempts to improve the way that computers respond to our voices, movements and other visual cues, and identify the words we type and the pictures we take. These technologies have changed, and will certainly continue to change, the way we interact with computers and consume the incredible amount of digital data we're producing. The work being done in universities and corporate research labs right now to build self-learning vision, voice and language models will only make our experiences better.

We will update it regularly as new product launches, research advances and industry news occur.

Almost human robots: how to tell them apart from a real person? -- ScienceDaily. Approximately 50 percent of the people involved in the study said they could not confirm which one was the robot. Can you imagine standing in front of an android and a human and not being able to identify which one is real? Mexican researcher David Silvera-Tawil discovered, after conducting a study in Australia, that this caused high anxiety and even fear in people exposed to geminoid robots, so named for their similarity to humans.

The research, aided by Michael Garbutt of the University of New South Wales in Australia, served to determine how humans behave when interacting with geminoids, which are identical replicas of people and can be operated remotely. Another objective was to determine whether the robotics industry would profit from manufacturing androids despite the uncertainty and anxiety they generate in people, or whether it is preferable to continue producing mechanical or humanoid robots that do not cause any disturbance to human behavior.

Humans can empathize with robots: Neurophysiological evidence for human empathy toward robots in perceived pain -- ScienceDaily. Researchers have presented the first neurophysiological evidence of humans' ability to empathize with a robot in perceived pain. Event-related brain potentials in human observers, reflecting empathy with humanoid robots in perceived pain, were similar to those for other humans in pain, except at the beginning of the top-down process of empathy. This difference may be caused by humans' difficulty in taking a robot's perspective. Empathy is a basic human ability. We often feel empathy toward and console others in distress. Is it possible for us to empathize with humanoid robots? Since robots are becoming increasingly popular and common in our daily lives, it is necessary to understand our interaction with robots in social situations.

However, it is not clear how the human brain responds to robots in empathic situations. These results suggest that we empathize with humanoid robots in a similar fashion as we do with other humans.

New Approaches to Robot Navigation - DZone IoT.
State_Analysis_Ontology_in_SysML.
What's new in SysML 1.4 – Constraining decompositions.
Case-Based Reasoning Software.
Poseidon Database, a neural-network-based schemaless semantic database.
Forget Humans vs. Machines: It's a Humans + Machines Future.
IBM's Jeff Jonas on Baking Data Privacy into Predictive Analytics.
Making Sense of What You Know.
G2 | Sensemaking – One Year Birthday Today.
Cognitive Basics Emerging.
Text to Speech | IBM Watson Developer Cloud.
A. L. I. C. E. The Artificial Linguistic Internet Computer Entity.

Artificial intelligence. | Joke of the Day.
Siri Has a Dark Side: Hilarious Answers to Strange Questions.
Self-Healing Robot Can Adapt To Injury Within Minutes.
Robotics.
Robots.