
Automation, Algorithms & Manipulation


Will artificial intelligence remain impenetrable? Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe. Neural networks are changing the Internet.

Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe

Inspired by the networks of neurons inside the human brain, these deep mathematical models can learn discrete tasks by analyzing enormous amounts of data. They’ve learned to recognize faces in photos, identify spoken commands, and translate text from one language to another. And that’s just a start. They’re also moving into the heart of tech giants like Google and Facebook.

They’re helping to choose what you see when you query the Google search engine or visit your Facebook News Feed. Why ‘Popping’ the Social Media Filter Bubble Misses the Point. Let’s be absolutely clear: social media filter bubbles are not responsible for the election of Donald Trump.

Why ‘Popping’ the Social Media Filter Bubble Misses the Point

There are quite a few problems with this thinking. First, it draws a direct causal line between the outcome of the election and social media usage by supposing that every voter uses social media, as if every ballot cast were filled out by someone with a Facebook account, a Twitter account, or even internet access. Second, it suggests that social media is the only mechanism by which the forces that characterized this election—misinformation, extremism, radicalization, and paranoia—proliferate. Though recent efforts have attempted to creatively illustrate the echo chambers that form when online social platforms like Facebook curate feeds that reinforce our political ideologies, social media filter bubbles should be an object of critique, not an object of persecution.

Your Facebook feed is not the problem. How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over — Backchannel. They’re funding a new organization, OpenAI, to pursue the most advanced forms of artificial intelligence — and give the results to the public. As if the field of AI weren’t competitive enough — with giants like Google, Apple, Facebook, Microsoft and even car companies like Toyota scrambling to hire researchers — there’s now a new entry, with a twist.

How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over — Backchannel

It’s a non-profit venture called OpenAI, announced today, that vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Funding comes from a group of tech luminaries including Elon Musk, Reid Hoffman, Peter Thiel, Jessica Livingston and Amazon Web Services. We Need Algorithmic Angels. Editor’s note: Jarno M.

We Need Algorithmic Angels

Koponen is a designer, humanist and co-founder of media discovery startup Random. His passion is to explore and create audacious human-centered digital experiences. A lot has been written on how algorithms are manipulating this and that in today’s Internet. Our algorithms, ourselves. An earlier version of this essay appeared last year, under the headline “The Manipulators,” in the Los Angeles Review of Books.

Our algorithms, ourselves

Since the launch of Netscape and Yahoo twenty years ago, the story of the internet has been one of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. #Celerity: A Critique of the Manifesto for an Accelerationist Politics. Red stack attack! Algorithms, capital and the automation of the common – by Tiziana Terranova. This essay is the result of a research process involving a number of Italian self-education institutions of post-operaist inspiration, English-speaking scholars and researchers engaged in the analysis of social media and digital media theory, as well as artists, activists, precarious knowledge workers, and the like.

Red stack attack! Algorithms, capital and the automation of the common – by Tiziana Terranova

It draws on the seminar held in London on 20 January 2014, hosted by the Digital Culture Unit at the Centre for Cultural Studies (Goldsmiths’ College, University of London). The workshop was the outcome of a process of reflection and debate that began within the Uninomade 2.0 network in early 2013 and continued through mailing lists and websites such as Euronomade, Effimera, Commonware, the Quaderni di San Precario, and others. Why the internet of things could destroy the welfare state. On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx.

Why the internet of things could destroy the welfare state

Clad in shorts and sunglasses, she was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of the New York police department's Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists. What does the Facebook experiment teach us? — The Message. I’m intrigued by the reaction that has unfolded around the Facebook “emotion contagion” study.

What does the Facebook experiment teach us? — The Message

(If you aren’t familiar with it, read this primer.) As others have pointed out, the practice of A/B testing content is quite common. And Facebook has a long history of experimenting on how it can influence people’s attitudes and practices, even in the realm of research. An earlier study showed that Facebook decisions could shape voters’ practices. Corrupt Personalization. (“And also Bud Light.”)

Corrupt Personalization

In my last two posts I’ve been writing about my attempt to convince a group of sophomores with no background in my field that there has been a shift to the algorithmic allocation of attention – and that this is important. In this post I’ll respond to a student question. My favorite: “Sandvig says that algorithms are dangerous, but what are the most serious repercussions that he envisions?” What is the coming social media apocalypse we should be worried about? This is an important question, because people who study this stuff are NOT as interested in it as they should be. And our field’s most common response to the query “what are the dangers?”