
A.L.I.C.E.: The Artificial Linguistic Internet Computer Entity

Related: science & progress, computational linguistics, AI

No robots in our homes, but many predictions about 2013 come true Jerry Lockenour couldn't have predicted what lay ahead for him 25 years ago when he stashed a copy of the Los Angeles Times Magazine on a cabinet shelf. The cover illustration of the April 3, 1988, issue showed bubble-shaped cars traveling in "electro lanes" on a double-decked, high-rise-lined 1st Street in downtown's Civic Center area. The cover's headline was "L.A. 2013: Techno-Comforts and Urban Stresses — Fast Forward to One Day in the Life of a Future Family." Inside was a lengthy essay that described a day in the life of a fictional Granada Hills family in April 2013. Shorter secondary stories explored experts' opinions about future transportation issues, pollution, crime, overpopulation, computerized education and the use of personal robots.

Eliza, Computer Therapist ELIZA emulates a Rogerian psychotherapist. ELIZA has almost no intelligence whatsoever, only tricks like string substitution and canned responses triggered by keywords. Yet when the original ELIZA first appeared in the 1960s, some people actually mistook her for human. The illusion of intelligence works best, however, if you limit your conversation to talking about yourself and your life.
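The keyword-matching and string-substitution tricks described above can be sketched in a few lines. The rules and stock replies below are illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import random
import re

# Minimal ELIZA-style responder: keyword rules plus pronoun "reflection".
# These patterns are illustrative; the real ELIZA used a much larger script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment):
    # Swap first- and second-person words so the echoed phrase reads naturally.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(DEFAULTS)  # canned fallback when no keyword matches

print(respond("I am feeling sad"))  # How long have you been feeling sad?
```

The echoed-fragment trick is what makes the canned responses feel personal: the program never understands the sentence, it only reshuffles it.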

IBM Watson Developer Cloud Watson Text to Speech provides a REST API to synthesize speech audio from an input of plain text. Multiple voices, both male and female, are available across Brazilian Portuguese, English, French, German, Italian, Japanese, and Spanish. Once synthesized in real-time, the audio is streamed back to the client with minimal delay. The Text to Speech service now enables developers to control the pronunciation of specific words. Intended Use Anywhere there's a need to communicate using the spoken word, particularly assistance tools for the vision-impaired, reading-based education tools, or mobile applications. A novel written by AI passes the first round in a Japanese literary competition It may be time to add 'novelist' to the list of professions under threat from super-smart computer software, because a short story authored by artificial intelligence has made it through to the latter stages of a literary competition in Japan. It didn't scoop the top prize, but it's not a bad effort for a beginner. The AI software isn't self-aware enough to think up and submit its own work though (not yet, anyway) – the short-form novel was written with the help of a team of researchers from the Future University Hakodate in Japan. Human beings selected certain words and phrases to be used, and set up an overall framework for the story, before letting the software come up with the text itself. Of 1,450 or so novels accepted this year, 11 were written with the involvement of AI programs, the Japan News reports.
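Returning to the Watson Text to Speech API above: a synthesis call is a POST to the service's `/v1/synthesize` endpoint with the text in a JSON body and the desired audio format in the `Accept` header. The service URL, voice name, and credential handling below are placeholders; real values come from your IBM Cloud service instance:

```python
import json
import urllib.request

# Sketch of a Watson Text to Speech synthesis request. The region in the
# URL and the voice name are assumptions; check your service credentials.
SERVICE_URL = "https://api.us-south.text-to-speech.watson.cloud.ibm.com"

def build_synthesize_request(text, voice="en-US_AllisonVoice", fmt="audio/wav"):
    """Build (but do not send) a POST /v1/synthesize request."""
    url = f"{SERVICE_URL}/v1/synthesize?voice={voice}"
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", fmt)  # requested audio format of the response
    return req

req = build_synthesize_request("Hello, world")
# Actually sending it requires credentials (e.g. HTTP basic auth with
# ("apikey", <your key>)); the response body is the streamed audio:
# with urllib.request.urlopen(req) as resp:
#     audio = resp.read()  # WAV bytes
```

Building the request separately from sending it keeps the sketch runnable offline; in production you would add authentication and stream the response to a player or file.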

NETtalk (artificial neural network) NETtalk is an artificial neural network, the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behind NETtalk was to construct a simplified model that might shed light on the complexity of learning human-level cognitive tasks, and to implement it as a connectionist model that could learn to perform a comparable task. NETtalk is a program that learns to pronounce written English text by being shown text as input along with matching phonetic transcriptions for comparison. NETtalk was created to explore the mechanisms of learning to correctly pronounce English text. The authors note that learning to read involves a complex mechanism involving many parts of the human brain.

Sensemaking – One Year Birthday Today. Cognitive Basics Emerging. Following a quiet two-plus-year gestation period, G2 came to life on January 28, 2011 – one year ago today – when I formally announced its existence here: Sensemaking on Streams – My G2 Skunk Works Project: Privacy by Design (PbD). “This new technology, something that might be characterized as a “big data analytic sensemaking” engine, is designed to make sense of new observations as they happen, fast enough to do something about it, while the transaction is still happening.” Oh, the difference a year makes. Over the last twelve months G2 has evolved in many ways. Three of the more exciting characteristics to emerge are:

10 Pro-Gun Myths, Shot Down By cutting off federal funding for research and stymieing data collection and sharing, the National Rifle Association has tried to do to the study of gun violence what climate deniers have done to the science of global warming. No wonder: When it comes to hard numbers, some of the gun lobby's favorite arguments are full of holes. (This article has been updated.) Myth #1: They're coming for your guns. Fact-check: With as many as 310 million privately owned guns in America, it's clear there's no practical way to round them all up (never mind that no one in Washington is proposing this). Yet if you fantasize about rifle-toting citizens facing down the government, you'll rest easy knowing that America's roughly 70 to 80 million gun owners already have the feds and cops outgunned by a factor of around 79 to 1.

'I'll take human ingenuity for $2,000' A mere three years ago, the IBM computer now known as Watson was a "Jeopardy!"-playing fool. And that's putting it mildly. Watson had the verbal skills of a toddler. It botched the solutions to the game-show clues with howlers that filled IBM's research lab with laughter — and raised deep concern.

Making Sense of What You Know Over the last few years, Jeff Jonas, chief scientist of the IBM Entity Analytics group and an IBM Fellow, led a development project at IBM code-named “G2”: technology to advance Sensemaking analytics. The system can make assertions based on data from real-time, real-world events. According to Jonas, “Sensemaking is designed to find the obvious.” InfoSphere Sensemaking locates related observations that, when viewed together, point to something of interest — the evidence at hand making it obvious.

AI just 3D printed a brand-new Rembrandt, and it's shockingly good There's already plenty of angst out there about the prospect of jobs lost to artificial intelligence, but this week, artists got a fresh reason to be concerned. A new "Rembrandt" painting unveiled in Amsterdam is not the work of the Dutch master Rembrandt van Rijn at all, but rather the creation of a combination of technologies including facial recognition, AI, and 3D printing. Essentially, a deep-learning algorithm was trained on Rembrandt's 346 known paintings and then asked to produce a brand-new one replicating the artist's subject matter and style. Dubbed "The Next Rembrandt," the result is a portrait of a Caucasian male, and it looks uncannily like the real thing. One particularly interesting detail about The Next Rembrandt project, which was a collaboration among several organizations including Dutch bank ING and Microsoft, is how the algorithm chose the subject for its painting, since it had to be entirely new.

The Science of Superheroes: Beyond "The Incredibles". Sacrificing Science: While Hollywood filmmakers today are striving to make their movies as scientifically realistic as possible, Weinberg believes the comic books of the so-called Silver Age (the late 1950s and the 1960s) were more grounded in science than most of what is being published today. "Most of the people who wrote comics back then were originally science fiction writers who knew their science and technology," he said. "Many of today's comic book writers seem to have learned their science from reading comic books and not from studying modern technology." Some comic book writers have suggested that good science means sacrificing an entertaining story. Weinberg disagrees.

Listeners needed for TTS standards intelligibility test Email from Ann Syrdal on behalf of the S3-WG91 Standards Working Group: The "Text-to-Speech Synthesis Technology" ASA Standards working group (S3-WG91) is conducting a web-based test that applies the method it will be proposing as an ANSI standard for evaluating TTS intelligibility. It is an open-response test ("type what you hear").
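An open-response test is typically scored by comparing what the listener typed against the reference transcript. The S3-WG91 post does not specify its scoring method; the positional word-accuracy metric below is just one common baseline, and its name and normalization rules are this sketch's own:

```python
import re

# Baseline scoring sketch for an open-response ("type what you hear")
# intelligibility test. Positional exact-match accuracy is a crude
# stand-in: it does not align around inserted or deleted words.
def normalize(text):
    """Lowercase and keep only word characters, so punctuation and
    capitalization differences don't count as errors."""
    return re.findall(r"[a-z']+", text.lower())

def word_accuracy(reference, response):
    """Fraction of reference words the listener matched, position by position."""
    ref, resp = normalize(reference), normalize(response)
    hits = sum(1 for r, h in zip(ref, resp) if r == h)
    return hits / len(ref) if ref else 0.0

print(word_accuracy("the cat sat on the mat", "the cat sat on a mat"))  # 5/6
```

A production scoring pipeline would more likely use an edit-distance alignment (as in word error rate) so that a single dropped word does not penalize everything after it.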