
Translation


Vocre will bring translation to video calls. Vocre 2.0 Beta - LAUNCH Conference Video. Bilingual avatar speaks Mundie language. (PhysOrg.com) -- This week's Microsoft Big Idea event, TechFest 2012, showcased the latest advances from Microsoft researchers.

Bilingual avatar speaks Mundie language

A bilingual talking head received much of the attention. Called "Monolingual TTS," the Microsoft research effort involves software that can translate the user's speech into another language, in a voice that sounds like the original user's. As Microsoft explains, using a speaker's monolingual recording, the system's algorithm can render speech sentences in different languages to build "mixed coded bilingual text to speech (TTS) systems." According to the team, "We have recordings of 26 languages which are used to build our TTS of corresponding languages. By using the new approach, we can synthesize any mixed language pair out of the 26 languages." The software does this by first "learning" what the user's voice sounds like.

A synthetic version of Mundie's voice, in English, welcomed the audience to Microsoft Research.

Best practice translation

English to French, Italian, German & Spanish Dictionary - WordRe. Urban Dictionary, February 4: Bale Out - Mozilla Firefox. Pronunciation. Chinglish: The New 言 in Town! Web tools. PicTranslator Intro. Sakhr Mobile S2S Arabic Translator for Government & Enterprise ( The social web gets to work. On the collaborative platform Conyac, users post requests to have texts adapted into other languages.

The social web gets to work

They then pay the translators who volunteer, at an amount of their choosing. To provide web users with good-quality translations at low cost, the Japanese company anydooR launched the Conyac.cc platform. Its role? To connect someone who wants content translated with volunteers willing to do the job at a reduced price. To start using the platform, a user must buy Conyac points at 0.007 euros apiece.
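As a purely illustrative back-of-the-envelope calculation of what that pricing means (the 0.007-euro price per point is the only figure given above; the point amount below is made up), a small Java sketch:

public class ConyacPointsDemo {
    // Price per Conyac point quoted in the article.
    private static final double EUR_PER_POINT = 0.007;

    public static void main(String[] args) {
        // Hypothetical reward a requester might offer a volunteer translator.
        int pointsOffered = 1000;
        double costInEuros = pointsOffered * EUR_PER_POINT;
        System.out.printf("Offering %d points costs about %.2f EUR%n", pointsOffered, costInEuros);
    }
}

So a requester offering 1,000 points would be spending roughly 7 euros on that translation job.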

Automatic / Machine Translation

Do you know François-Régis Chaumartin? Francois-Regis Chaumartin le 2010-04-28 à 17h52 from Jean Michel Billaut on Vimeo.

Do you know François-Régis Chaumartin?

Law, as everyone knows, can lead to anything... but so can linguistics... How do you get a machine to understand human language? François's company, Proxem, sits at the confluence of two groups of technologies: on the one hand, natural language processing (spoken or written), and on the other, the Semantic Web... How do you reason from language or from structured data? François gives us a most interesting overview of these technology groups.

Comment "monitorer" en temps réel le buzz médiatique ? Qu'elle est la concurrence de Proxem ? Quel est son business model ? Vous faites une visio/skype avec un japonais qui ne parle pas le français. Proxem > Home. Sphinx-4 - A speech recognizer written entirely in the Java(TM) Overview Sphinx4 is a pure Java speech recognition library.

Sphinx-4 - A speech recognizer written entirely in the Java(TM)

It provides a quick and easy API to convert speech recordings into text with the help of CMUSphinx acoustic models. It can be used on servers and in desktop applications. Besides speech recognition, Sphinx4 helps to identify speakers, adapt models, align existing transcriptions to audio for timestamping, and more. Sphinx4 supports US English and many other languages. Using it in your projects: as with any Java library, all you need to do to use sphinx4 is add its jars to your project's dependencies, and then you can write code against the API. The easiest way to use modern sphinx4 is with a modern build tool such as Apache Maven or Gradle. <project> ...

Then add sphinx4-core to the project dependencies:

<dependency>
  <groupId>edu.cmu.sphinx</groupId>
  <artifactId>sphinx4-core</artifactId>
  <version>5prealpha-SNAPSHOT</version>
</dependency>

Add sphinx4-data to the dependencies as well if you want to use the default acoustic and language models. Sphinx speech to text - Metavid. Sphinx is a speech recognition system being developed at Carnegie Mellon University.
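To make the setup above concrete, here is a minimal sketch of transcribing a recording with the Sphinx-4 API, assuming the sphinx4-core and sphinx4-data artifacts are on the classpath; the class name TranscribeDemo, the file name audio.wav, and the bundled en-us model paths are assumptions for this example, not taken from the excerpt.

import java.io.FileInputStream;
import java.io.InputStream;

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

public class TranscribeDemo {
    public static void main(String[] args) throws Exception {
        // Point the recognizer at the default US English models shipped in sphinx4-data.
        Configuration configuration = new Configuration();
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);

        // audio.wav is a placeholder: a 16 kHz, 16-bit, mono recording.
        try (InputStream stream = new FileInputStream("audio.wav")) {
            recognizer.startRecognition(stream);
            SpeechResult result;
            while ((result = recognizer.getResult()) != null) {
                // Print the recognized text for each utterance in the stream.
                System.out.println(result.getHypothesis());
            }
            recognizer.stopRecognition();
        }
    }
}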

Sphinx speech to text - Metavid

OpenGov is primarily using sphinx3, which works under GNU/Linux. CMU has released: Sphinx. From Sourceforge: Sphinx is a speaker-independent, large-vocabulary, continuous speech recognizer under a Berkeley-style license.

Sphinx

It is also a collection of open source tools and resources that allows researchers and developers to build speech recognition systems. Sphinx project pages: The CMU Sphinx Group Open Source Speech Recognition Engines; Sphinx Sourceforge Project; CMU Sphinx: Open Source Speech Recognition. Sphinx and Asterisk: A detailed example and code (link mirror) for a client/server approach. There are also some discussions to be found on the Asterisk-Users list, for example here.

Quote by Stephan A.: It is fairly easy to integrate Asterisk with Sphinx; the only trouble is that you need to have an Acoustic Model (AM) for 8 kHz, which is not (yet) readily available. There is a Language Model (LM) and AM included with Sphinx, but it is designed for a sampling rate of 16 kHz and therefore does not work with Asterisk. Example - sphinx.agi. See also. Speech to text conversion in Linux. Part 17. Conversation mode in Google Translate.
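The client/server approach mentioned above can be sketched roughly as follows, using the Sphinx-4 Java API on the server side; the class name AsteriskSphinxServer, the 8 kHz model paths, and port 7777 are illustrative assumptions, not the code from the linked sphinx.agi example. An AGI script on the Asterisk side would record the caller at 8 kHz and stream the raw audio to this server; as the quote explains, the acoustic model must match that 8 kHz telephony audio, so the bundled 16 kHz model will not work here.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

public class AsteriskSphinxServer {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Placeholder paths: an acoustic model trained for 8 kHz telephony audio is assumed.
        configuration.setAcousticModelPath("file:models/en-us-8khz");
        configuration.setDictionaryPath("file:models/cmudict-en-us.dict");
        configuration.setLanguageModelPath("file:models/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);

        // The Asterisk-side client (e.g. an AGI script) connects and streams raw audio here.
        try (ServerSocket server = new ServerSocket(7777)) {
            while (true) {
                try (Socket client = server.accept();
                     InputStream audio = client.getInputStream()) {
                    OutputStream out = client.getOutputStream();
                    recognizer.startRecognition(audio);
                    SpeechResult result;
                    while ((result = recognizer.getResult()) != null) {
                        // Send each hypothesis back to the client, one line per utterance.
                        out.write((result.getHypothesis() + "\n").getBytes());
                        out.flush();
                    }
                    recognizer.stopRecognition();
                }
            }
        }
    }
}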