Institute for Creative Technologies

Retraining Wire and Feature Editors to Be Web Curators If the wire editor and feature editor roles are becoming obsolete for print newspapers, as Steve Yelvington persuasively argues, then those editors should be retrained — or retrain themselves — as web curators. Rather than become obsolete, these editors could become essential to their news organization’s future on the web. Steve observes: On the Internet, we have no need of wire editors; if we wish to have wire content on our websites, we can plug in AP Hosted News, or run a full feed of AP Online or some similar product from another service. Feature editing faces the same problem: But the job simply doesn’t transport to digital media. Yet there is a HUGE opportunity in this shifting landscape. Instead, we have what Clay Shirky describes as “filter failure”: Here’s what the Internet did: it introduced, for the first time, post-Gutenberg economics. Local news sites may serve their readers much better by sending them to CNN, MSNBC, and NYT for non-local news, as Steve suggests.

Bracelet - The Future is Now

(A) These general website terms and conditions of use (hereinafter referred to as the “General Terms and Conditions”) set forth the terms and conditions applicable to the website (hereinafter referred to as the “Site”). (B) The Site is the exclusive property of CN2P (hereinafter referred to as the “Company”). (C) The purpose of the Site is to present the project developed by the Company (the “Project”) and to enable the collection of donations through the Site in order to finance the completion of the Company’s Project (the “Donations”). 2.1 Definitions For the purposes hereof, capitalized terms shall have the meaning given thereto below, unless the context requires otherwise: “Company” means CN2P, a société par actions simplifiée (a simplified joint-stock company) incorporated under the laws of France, whose registered office is La Grande Arche Paroi Nord 92044 Paris La Défense, registered under number RCS Nanterre B 808 557 573; 2.2 Interpretation (ii) plural terms include the singular and vice versa;

Birla Institute of Scientific Research

Mind Mapping in Education - MindMeister

Coding Robin Open this page, allow it to access your webcam, and see your face get recognized by your browser using JavaScript and OpenCV, an "open source computer vision library". That's pretty cool! But recognizing faces in images is not terribly new and exciting. Wouldn't it be great if we could tell OpenCV to recognize something of our choice, something that is not a face? Let's say... a banana? That is totally possible! Here's the good news: we can generate our own cascade classifier for Haar features. And the best news: keep on reading! The following instructions are heavily based on Naotoshi Seo's immensely helpful notes on OpenCV haartraining and make use of the scripts and resources he released under the MIT license. Let's get started The first thing you need to do is clone the GitHub repository I made for this post. git clone You'll also need OpenCV on your system. Samples How many images do we need?
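The training workflow the post describes can be sketched with OpenCV's own command-line tools. This is a minimal sketch, assuming `opencv_createsamples` and `opencv_traincascade` are installed with OpenCV; `banana.png` and `negatives.txt` (a plain-text list of paths to negative, i.e. banana-free, images) are placeholder names for illustration:

```shell
# Hypothetical filenames: banana.png is one positive image,
# negatives.txt lists background images that contain no banana.

# Synthesize positive samples by distorting the positive image
# and pasting it onto the negative backgrounds:
opencv_createsamples -img banana.png -bg negatives.txt \
  -vec samples.vec -num 1000 -w 20 -h 20

# Train the Haar cascade classifier on those samples:
opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt \
  -numPos 900 -numNeg 600 -numStages 20 -w 20 -h 20
```

Note that `-numPos` is set below the number of generated samples, since each training stage consumes some positives. The trained `classifier/cascade.xml` can then be loaded in code the same way as OpenCV's built-in face cascades, e.g. with `cv2.CascadeClassifier`.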

GEONGRID Main Page

Instructor: Howard Rheingold Stanford Winter Quarter 2014 Mondays, 11:15-2:05, Room TBA Course Description: Today’s personal, social, political, and economic worlds are all affected by digital media and networked publics. Viral videos, uprisings from Cairo to Wall Street, free search engines, abundant inaccuracy and sophisticated disinformation online, indelible and searchable digital footprints, laptops in lecture halls and BlackBerries at the dinner table, twenty-something social media billionaires, massive online university courses -- it’s hard to find an aspect of daily life around the world that is not being transformed by the tweets, blogs, wikis, apps, movements, likes and plusses, tags, text messages, and comments two billion Internet users and six billion mobile phone subscriptions emit. Learning Outcomes: Diligent students will: Cultivate an ability to discern, analyze, and exert control over the way they deploy their attention. Learn to use social media tools for collaborative work.

'Ninja Run' May Be the Craziest VR Locomotion Technique Yet This is ‘Ninja Run VR’, and it's yet another take on the ever-present problem of user locomotion in VR experiences. But, as crazy as the concept looks, there may be some method to its madness. The question of how best to allow users of VR apps, games, and experiences to move around a virtual space is still a hot topic among players and developers. Right now, those titles which require user-controlled movement through a space seem to be settling on some form of VR teleportation, i.e. the point-and-click approach. Now, a developer has come up with a new take on the problem, one that on the surface may well look, well, a little “batshit crazy”. “In many popular anime and cartoons, characters depicted to be moving very fast are often illustrated with their hands and arms trailing their torso in the direction they are moving,” Hall tells us. “Even though it is unrealistic, it does induce the notion that the person has a superpower to move very fast.”

DIY Hololens by FultonX I wanted to go ahead and develop Hololens apps. Here is Microsoft's live demo video: After experimenting and seeing a couple of AR Kickstarters, I figured out a way to develop comparable hardware... at least for prototyping. The HMD costs ~$10 and it projects any mobile phone image, enlarged, at 16" from the eye. I plan to add a Leap Motion, depth sensor, and/or tracking to be able to interact with virtual objects. Edit: TBD items:
- Google Tango tablet integration: all-inclusive sensors and mobile form factor (fastest road to victory)
- Without Google Tango: the depth sensor could also just be a head-mounted Kinect
- Head tracking could be done with a PlayStation camera at first
- Added ideas not on the real Hololens: pupil tracking

interesting news sites