
Research Blog: Meet Parsey’s Cousins: Syntax for 40 languages, plus new SyntaxNet capabilities

As you might have noticed, the parse trees for all of the sentences above look very similar.

This is because we follow the content-head principle, under which dependencies are drawn between content words, with function words becoming leaves in the parse tree. This idea was developed by the Universal Dependencies project in order to increase parallelism between languages. Parsey’s Cousins are trained on treebanks provided by this project and are designed to be cross-linguistically consistent, and thus easier to use in multilingual language-understanding applications. Using the same set of labels across languages helps us see how sentences in different languages, or variations in the same language, convey the same meaning. In all of the above examples, the root is the main verb of the sentence, and there is a passive nominal subject (the arc labeled ‘nsubjpass’) and a passive auxiliary (‘auxpass’).
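To make these labels concrete, here is a minimal sketch using spaCy rather than SyntaxNet itself; spaCy’s English model happens to use the same ‘nsubjpass’/‘auxpass’ label style for passive constructions, and the example sentence and model name are illustrative assumptions, not anything from the original post.

```python
# Minimal sketch (spaCy, not SyntaxNet): print the dependency arcs
# of a passive sentence to see nsubjpass and auxpass in action.
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The law was passed by the parliament.")  # illustrative sentence

for token in doc:
    # e.g. "law" -> nsubjpass(passed), "was" -> auxpass(passed)
    print(f"{token.text:12} {token.dep_:10} -> {token.head.text}")
```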

Research Blog: Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open Source

Casa Paganini - InfoMus

EyesWeb Week 2014 - The 4th Tutorial on the EyesWeb Open Platform (9-11 March 2014). EyesWeb is an open platform to support the design and development of real-time multimodal systems and interfaces. It supports a wide range of input devices, including motion capture systems, various types of professional and low-cost video cameras, game interfaces (e.g., Kinect, Wii), multichannel audio input (e.g., microphones), and analog inputs (e.g., for physiological signals). Supported outputs include multichannel audio, video, analog devices, and robotic platforms. Various standards are supported, including OSC, MIDI, FreeFrame and VST plugins, ASIO, motion capture standards and systems (Qualisys), and Matlab. By downloading any of the software below, you agree to the license agreement. The forum is available at the following link; Bugzilla is available at the following link.
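Since EyesWeb accepts OSC input, a patch can be driven from outside code. Below is a minimal sketch using the python-osc library; the host, port, and address pattern are assumptions and must match whatever OSC input the receiving EyesWeb patch is actually configured with.

```python
# Minimal sketch: stream sensor-style values to an EyesWeb patch over OSC.
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Host and port are assumptions; use the values your EyesWeb patch listens on.
client = SimpleUDPClient("127.0.0.1", 8000)

# The address pattern is hypothetical; EyesWeb routes messages by address.
client.send_message("/sensor/accel", [0.12, -0.98, 0.05])
```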


Marketing Automation to Derive Actionable Intelligence

Market Analysis

Sketch Toy: Draw sketches and share replays with friends!

SnapEngage Live Chat in Spanish

Potential Sparkers

The Client Relations Factory

In-corporating body language into NLP (or, More notes on the design of automated body language)

This article discusses how body language is part of natural language, personality, and NLP design. It covers various methods for approaching this problem and makes recommendations for the real-time generation of animation to accompany natural language for avatars and robots. It’s hard to communicate with words alone. Some researchers claim that almost half of our communication relies on things that aren’t words: body language, tone of voice, and other signals that text simply doesn’t convey. This includes prosody (the tone, pitch, and speed of words), facial expression, hand gesture, stance, and posture.
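As a toy illustration of such real-time generation, the sketch below schedules a “beat” gesture against each emphasized word in an utterance. Everything here is an assumption for illustration: the function name, the emphasis set, and the constant speaking rate; a real system would take word timings from a TTS engine rather than estimating them.

```python
# Toy sketch: schedule beat gestures to accompany spoken text.
# Assumes a constant speaking rate; a real system would use
# phoneme-level timings from a TTS engine instead.
import re

WORDS_PER_SECOND = 2.5  # assumed average speaking rate

def schedule_beat_gestures(text, emphasized):
    """Return (seconds_from_start, word) pairs for each emphasized word."""
    words = re.findall(r"[\w']+", text.lower())
    return [(i / WORDS_PER_SECOND, w)
            for i, w in enumerate(words) if w in emphasized]

print(schedule_beat_gestures(
    "It's hard to communicate with words alone.",
    emphasized={"hard", "words"},
))
# -> [(0.4, 'hard'), (2.0, 'words')]
```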

Body language makes up about 40% of natural language, and it can be automatically generated. Most NLP systems today, be they Siri or Watson, amount to conducting chat through the thin pipe of a text interface. Videos do an excellent job of conveying the importance of body language. 1) Duration, or timing.

Sparking Together