
NVDA

Daisy Talking Book format DTBook (DAISY Digital Talking Book), also known as DAISY XML, is an XML-based document file format. It is used in EPUB 2.0 e-books and DAISY Digital Talking Books, as well as other places. Unlike other document file formats such as ODF, DTBook puts a strong emphasis on structural encoding, but in comparison with other structural file formats such as DocBook and TEI it is fairly simple. DTBook was developed by the DAISY Consortium as an accessible file format similar to HTML, with special regard for the requirements of visually impaired readers. DTBook continues to be developed by the DAISY Consortium and is defined with a DTD as part of the NISO standard Z39.86-2005. NIMAS (National Instructional Materials Accessibility Standard) – a U.S. standard for electronic books for the visually impaired – defines a subset of DTBook XML elements.
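As a sketch of what a minimal DTBook document can look like, assuming the 2005-3 DTD; the title, metadata values, and body text below are placeholders, not taken from the source:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dtbook PUBLIC "-//NISO//DTD dtbook 2005-3//EN"
  "http://www.daisy.org/z3986/2005/dtbook-2005-3.dtd">
<dtbook xmlns="http://www.daisy.org/z3986/2005/dtbook/" version="2005-3" xml:lang="en">
  <head>
    <!-- Dublin Core metadata, as used throughout DAISY publications -->
    <meta name="dc:Title" content="A minimal example"/>
  </head>
  <book>
    <bodymatter>
      <!-- level1/h1 encode the document's structure explicitly,
           which is the emphasis that distinguishes DTBook -->
      <level1>
        <h1>Chapter One</h1>
        <p>Placeholder paragraph text.</p>
      </level1>
    </bodymatter>
  </book>
</dtbook>
```

The structural elements (level1, h1, p) are what a DAISY player navigates by, which is why the format insists on them rather than on visual styling.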

W3C "Voice Browser" Working Group The Voice Browser Working Group's mission is to support browsing the web by voice. The web is much more than just the web pages you can see; it is also the web pages you can hear and speak to. While end users are familiar with interacting with visual HTML web pages rendered in their browser of choice, many users might be surprised to realize that today they regularly interact with the voice web through VoiceXML (VXML) and other technologies developed and standardized by the Voice Browser Working Group. Just as many sites have an HTML presence on the web for visual browsing, most large companies have a VXML presence on the web for voice browsing, which is most often accessed by calling the company's phone number. Unlike most visual web browsers, voice web browsers are typically without chrome and run in the cloud, so they are often transparent to the end user. Voice Browser Specifications There is a suite of independent standards that are also supported as parts of VoiceXML.

VoiceXML VoiceXML (VXML) is a digital document standard for specifying interactive media and voice dialogs between humans and computers. It is used for developing audio and voice response applications, such as banking systems and automated customer service portals, which are developed and deployed in a manner analogous to the interaction between web browsers, which render Hypertext Markup Language (HTML) in visual applications, and the servers that deliver them. VoiceXML documents are interpreted by a voice browser. The VoiceXML document format is based on Extensible Markup Language (XML). VoiceXML applications are commonly used in many industries and segments of commerce. VoiceXML has tags that instruct the voice browser to provide speech synthesis, automatic speech recognition, dialog management, and audio playback. A minimal "Hello world" document looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>Hello world!</block>
  </form>
</vxml>
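To illustrate the recognition and dialog-management tags mentioned above, here is a sketch of a small form that asks a question, listens for an answer, and speaks it back; the field name `drink` and the prompt wording are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <!-- <field> collects one piece of spoken input from the caller -->
    <field name="drink">
      <!-- speech synthesis: the browser reads this prompt aloud -->
      <prompt>Would you like coffee or tea?</prompt>
      <!-- speech recognition: <option> lists the utterances to accept -->
      <option>coffee</option>
      <option>tea</option>
      <!-- dialog management: <filled> runs once the field has a value -->
      <filled>
        <prompt>You chose <value expr="drink"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```

Each of the four capabilities named in the paragraph above maps onto a tag here: prompts are synthesized, options are recognized, and the filled block manages the dialog's next step.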

JVoiceXML - The Open Source VoiceXML Interpreter Building VoiceXML Dialogs Introduction Until fairly recently, the web primarily delivered information and services using visual interfaces, on computers equipped with displays, keyboards, and pointing devices. The web revolution had largely bypassed the huge market of customers of information and services represented by the worldwide installed base of telephones, for which voice input and audio output provide the primary means of interaction. VoiceXML 2.0 [VXML2], a standard recently released by the W3C [W3C], is helping to change that. Building on top of the market established in 1999 by the VoiceXML Forum's VoiceXML 1.0 specification [VXML1], VoiceXML 2.0 and several complementary standards are changing the way we interact with voice services and applications - by simplifying the way these services and applications are built. VoiceXML is an XML-based [XML] language, designed to be used on the Web. The Menu Element Most VoiceXML dialogs are built from one of two elements: <form> or <menu>. Consider this <menu> example:
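A <menu> presents the caller with a fixed list of choices and transitions to the dialog matching what was said. The following is a sketch of a typical VoiceXML 2.0 menu; the choice targets (sports.vxml and so on) are invented for illustration:

```xml
<menu>
  <!-- <enumerate/> expands to the list of choice phrases below -->
  <prompt>Welcome. Say one of: <enumerate/></prompt>
  <choice next="sports.vxml">Sports</choice>
  <choice next="weather.vxml">Weather</choice>
  <choice next="news.vxml">News</choice>
  <!-- re-prompt if the caller says nothing -->
  <noinput>Please say one of: <enumerate/></noinput>
</menu>
```

Saying "weather" causes the browser to fetch and interpret weather.vxml, just as clicking a link causes a visual browser to fetch a new page.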

Weblog: Language-Based Interfaces, part 1: The Problem UI Design Fundamentals What would the web be like if you could tell it what you want to do as easily as you currently tell it where you want to go? Mozilla Labs is starting to experiment with linguistic interfaces. That is, we’re playing around with interfaces where you type commands and stuff happens — in much the same way that you can type a location into the address bar in order to go somewhere. I think this is cool because, for one thing, I think language-based interfaces are seriously under-explored compared to pointing-based interfaces. For another thing, I used to work on a project called Enso. What makes a good linguistic UI? Here’s my current theory. It’s easy to learn. It’s efficient. It’s expressive. Those are the three “E”s. “Easy to learn” should be self-explanatory. Not discoverable: There’s no guidance given to a first-time user. But the CLI isn’t all bad. The second good point is that it’s not just a set of commands, it’s a language. Impossible?

Six UX Lessons Learned from the New Facebook App, Paper Paper — Stories from Facebook News Feed What happens when Facebook takes on the mobile experience head first? Paper. With over 3,000 reviews and a 4+ rating on the app store, it’s clear that Paper is quickly becoming an app to be reckoned with. To explore the success of the app, we ran a study with 105 of our mobile testers. Here is what we learned: People love guided tutorials they can control. People appreciate the lack of navigational elements. People really want to curate their news feed. People are delighted with subtle animation. People want to browse their content distraction-free. People still want to share what they love with their friends. PDF: View the results of this study. View our highlight reel of users exploring the app, or watch it in the YouTube video below: Lesson 1: People love guided tutorials they can toggle. We asked study participants to download the new Paper app and give it a go. Lesson 2: People appreciate the lack of navigational elements. About Stef Miller
