It could be one of the most important innovations on the Internet since the browser. Imagine an open-source, crowd-sourced, community-moderated, distributed platform for sentence-level annotation of the Web. In other words, a way to cut through the babble and restore some sanity and trust. The Internet, peer-reviewed
It seems like a great idea: Provide instant corrections to web-surfers when they run across obviously false information on the Internet. But a new study suggests that this type of tool may not be a panacea for dispelling inaccurate beliefs, particularly among people who already want to believe the falsehood. “Real-time corrections do have some positive effect, but it is mostly with people who were predisposed to reject the false claim anyway,” said R. Kelly Garrett, lead author of the study and assistant professor of communication at Ohio State University. “The problem with trying to correct false information is that some people want to believe it, and simply telling them it is false won’t convince them.” False beliefs persist, even after instant online corrections
At 7 years old, Gilad Elbaz wrote, “I want to be a rich mathematician and very smart.” That, he figured, would help him “discover things like time machines, robots and machines that can answer any question.” In the 34 years since… Factual’s Gil Elbaz Wants to Gather the Data Universe
Welcome to snopes.com, the definitive Internet reference source for urban legends, folklore, myths, rumors, and misinformation. Urban Legends Reference Pages
5D optical memory in glass could record the last evidence of civilization Using nanostructured glass, scientists at the University of Southampton have, for the first time, experimentally demonstrated the recording and retrieval of five-dimensional digital data written with a femtosecond laser. The storage offers unprecedented parameters, including 360 TB/disc data capacity, thermal stability up to 1,000°C and a practically unlimited lifetime. Dubbed the ‘Superman’ memory crystal, after the “memory crystals” used in the Superman films, the glass stores data in self-assembled nanostructures created in fused quartz, which can hold vast quantities of data for over a million years. The information encoding is realised in five dimensions: the size and orientation of these nanostructures in addition to their three-dimensional position.
In what could prove to be a major breakthrough in quantum memory storage and information processing, German researchers have frozen the fastest thing in the universe: light. And they did so for a record-breaking one minute. It sounds weird and it is. The reason for wanting to hold light in its place (aside from the sheer awesomeness of it) is to ensure that it retains its quantum coherence properties (i.e. its information state), thus making it possible to build light-based quantum memory.
Million-Year Data Storage Disk Unveiled Back in 1956, IBM introduced the world’s first commercial computer capable of storing data on a magnetic disk drive. The IBM 305 RAMAC used fifty 24-inch discs to store up to 5 MB, an impressive feat in those days. Today, however, it’s not difficult to find hard drives that can store 1 TB of data on a single 3.5-inch disk. But despite this huge increase in storage density and a similarly impressive improvement in power efficiency, one thing hasn’t changed. The lifetime over which data can be stored on magnetic discs is still about a decade.
How Quantum Computers and Machine Learning Will Revolutionize Big Data - Wired Science When subatomic particles smash together at the Large Hadron Collider in Switzerland, they create showers of new particles whose signatures are recorded by four detectors. The LHC captures 5 trillion bits of data — more information than all of the world’s libraries combined — every second. After the judicious application of filtering algorithms, more than 99 percent of those data are discarded, but the four experiments still produce a whopping 25 petabytes (25 × 10^15 bytes) of data per year that must be stored and analyzed. That is a scale far beyond the computing resources of any single facility, so the LHC scientists rely on a vast computing grid of 160 data centers around the world, a distributed network that is capable of transferring as much as 10 gigabytes per second at peak performance. The LHC’s approach to its big data problem reflects just how dramatically the nature of computing has changed over the last decade.
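A back-of-envelope check of the figures quoted above. This sketch assumes continuous capture at the stated 5-trillion-bits-per-second rate, which overstates what the collider actually runs (beam time is intermittent), so it is an upper bound on the raw volume; the point is only that the retained fraction comes out well under 1 percent, consistent with the article's claim that more than 99 percent of the data is discarded:

```python
# Rough arithmetic on the LHC figures quoted in the article.
# Assumption (not from the article): data capture runs continuously all year.
raw_bits_per_second = 5e12            # "5 trillion bits ... every second"
seconds_per_year = 365.25 * 24 * 3600
raw_bytes_per_year = raw_bits_per_second * seconds_per_year / 8

stored_bytes_per_year = 25e15         # "25 petabytes ... per year"

fraction_kept = stored_bytes_per_year / raw_bytes_per_year
print(f"raw data per year (upper bound): {raw_bytes_per_year:.2e} bytes")
print(f"fraction kept after filtering:   {fraction_kept:.4%}")
```

Under these assumptions the raw stream is on the order of tens of exabytes per year, and the stored 25 PB is roughly a tenth of a percent of it.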
Data Visualization / Infographics
Why the world’s governments are interested in creating hubs for open data Amid the tech giants and eager startups that have camped out in East London’s trendy Shoreditch neighborhood, the Open Data Institute is the rare nonprofit on the block that talks about feel-good sorts of things like “triple-bottom line” and “social and environmental value.” In fact, I first met ODI’s CEO Gavin Starks because he used to run AMEE, a startup that builds software for environmental data, and he was one of our first speakers at GigaOM’s early green conferences. But ODI, which officially launched last October with funding from the U.K. government, is a private company and philanthropy isn’t its dominant aim. ODI helps companies, entrepreneurs and governments find value in the explosion of open data, and it seems to be starting to gain commercial success like a savvy street vendor selling hot cakes. ODI CEO Gavin Starks, in front of art in the ODI offices in Shoreditch.
Simon DeDeo, a research fellow in applied mathematics and complex systems at the Santa Fe Institute, had a problem. He was collaborating on a new project analyzing 300 years’ worth of data from the archives of London’s Old Bailey, the central criminal court of England and Wales. Granted, there was clean data in the usual straightforward Excel spreadsheet format, including such variables as indictment, verdict, and sentence for each case. But there were also full court transcripts, containing some 10 million words recorded during just under 200,000 trials. Today’s big data is noisy, unstructured, and dynamic. “How the hell do you analyze that data?” The Mathematical Shape of Big Science Data
When Jonathan Goldman arrived for work in June 2006 at LinkedIn, the business networking site, the place still felt like a start-up. Data Scientist: The Sexiest Job of the 21st Century
For Start-Ups, Sorting the Data Cloud Is the Next Big Thing “My smartphone produces a huge amount of data, my car produces ridiculous amounts of really valuable data, my house is throwing off data, everything is making data,” said Erik Swan, 47, co-founder of Splunk, a San Francisco-based start-up whose software indexes vast quantities of machine-generated data into searchable links. Companies search those links, as one searches Google, to analyze customer behavior in real time. Splunk is among a crop of enterprise software start-up companies that analyze big data and are establishing themselves in territory long controlled by giant business-technology vendors like Oracle and I.B.M.
How Big Data Gets Real The business of Big Data, which involves collecting large amounts of data and then searching it for patterns and new revelations, is the result of cheap storage, abundant sensors and new software. It has become a multibillion-dollar industry in less than a decade. Growing at that speed, it is easy to miss how much remains to be done before the industry has proven standards. Until then, lots of customers are probably wasting much of their money.
Big Data, Trying to Build Better Workers
"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain." Tears in rain: how Snapchat showed me the glory of data death
In 2011, IBM's Watson supercomputer got an unusually public proof-of-concept, competing on Jeopardy! and beating its human competitors hands-down. It was a powerful public win for IBM, and for artificial intelligence at large, but the computer at the center of all that publicity was still basically a prototype. If Watson can do this, IBM wanted to say, imagine what it can do in the real world. IBM's Watson wants to fix America's doctor shortage
Words by the Millions, Sorted by Software
Down in the Data Dumps: Researchers Inventory a World of Information
What data is being collected on you? Some shocking info
How Facebook Uses Your Data to Target Ads, Even Offline