
The Social-Network Illusion That Tricks Your Mind

One of the curious things about social networks is the way that some messages, pictures, or ideas spread like wildfire while others that seem just as catchy or interesting barely register at all. The content itself cannot be the source of this difference; instead, some property of the network must allow certain ideas to spread but not others. Today, we get an insight into why this happens thanks to the work of Kristina Lerman and pals at the University of Southern California. These people have discovered an extraordinary illusion associated with social networks which can play tricks on the mind and explain everything from why some ideas become popular quickly to how risky or antisocial behavior can spread so easily. Network scientists have known about the paradoxical nature of social networks for some time. This comes about because the distribution of friends on social networks follows a power law: a few highly connected individuals have a disproportionate number of links, so on average your friends have more friends than you do, the so-called friendship paradox.
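The effect described above can be illustrated with a toy example (a sketch of the general idea, not the USC team's actual model): in a star-shaped network where only the single hub node holds some attribute, every peripheral node sees 100% of its neighbours holding it, even though only one node in eleven actually does.

```python
# Minimal sketch of the friendship paradox and the local "illusion"
# on a star network: node 0 is a hub linked to ten peripheral nodes.
neighbours = {0: list(range(1, 11))}   # hub knows all ten leaves
for i in range(1, 11):
    neighbours[i] = [0]                # each leaf knows only the hub

active = {0}                           # only the hub holds the attribute

# Friendship paradox: mean degree vs. mean degree of a random friend.
degrees = {n: len(nb) for n, nb in neighbours.items()}
mean_degree = sum(degrees.values()) / len(degrees)
friend_degrees = [degrees[f] for nb in neighbours.values() for f in nb]
mean_friend_degree = sum(friend_degrees) / len(friend_degrees)

# Illusion: how many nodes see a majority of their neighbours active?
fooled = sum(
    1 for n, nb in neighbours.items()
    if sum(f in active for f in nb) / len(nb) > 0.5
)

print(mean_degree)         # about 1.82: most nodes have few friends
print(mean_friend_degree)  # 5.5: a randomly chosen friend is often the hub
print(fooled)              # 10 of 11 nodes see an "active" majority
```

Because the hub appears in every leaf's friend list, sampling a friend at random is biased toward high-degree nodes; that same bias is what lets a locally rare attribute look globally common.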

Facebook Study: Users, More Than Algorithms, Choose What News to See Ever wonder how much news Facebook’s algorithm may be sorting out of your News Feed that you don’t agree with politically? Not much, the social network says. Facebook studied millions of its most political users and determined that while its algorithm tweaks what you see most prominently in your feed, you’re the one really limiting how much news and opinion you take in from people of different political viewpoints. In an effort to explore how people consume news shared by friends of different ideological leanings, Facebook’s researchers pored over millions of URLs shared by its U.S.-based users who identify themselves in their profiles as politically liberal or conservative. The work, which sheds more light on how we glean information from our ever-growing, technologically enhanced tangles of social connections, was published in a paper in Science on Thursday. The researchers also looked at the impact of Facebook’s News Feed ranking algorithm on the kind of news you see.

"Daesh is preventing us from seeing that the major question is a political one," by Pauline Graulle | Politis. Politis: How do you analyse what happened in Nice last week? Roland Gori: The prudent answer is that we do not know. That we need time to refine the data to be gathered through investigations, and time for a multidimensional analysis that mobilises real thought. We need time to think through what is happening to us, and how we came to this point. We need to understand what each of these mass murders has in common with the others and what sets them apart. On the whole, we react too quickly. What responsibility do the media bear? The media bear a great responsibility in this affair: they contribute to turning criminal acts into celebrity spectacle, acts that are in some cases unmotivated, in the quasi-psychiatric sense of the term, carried out by more or less pathological personalities with no personal connection to their victims. What did you make of the reaction of the (other) politicians? So what is to be done? And if not?

8 Cities That Show You What the Future Will Look Like Cities used to grow by accident. Sure, the location usually made sense—someplace defensible, on a hill or an island, or somewhere near an extractable resource or the confluence of two transport routes. But what happened next was ad hoc. The people who worked in the fort or the mines or the port or the warehouses needed places to eat, to sleep, to worship. At least, that’s the way things worked for most of human history. So let’s jump to now: A century, plus or minus, after human beings started putting their minds toward designing cities as a whole, things are getting good. In this year’s design issue, we’re telling the stories of some of those projects, from the detail of a new streetlight to a sacred city in flux, from masterful museums to infrastructure made for bikes (and the algorithms that run it all).

Study: People Who Overshare on Facebook Just Want to Belong | The Atlantic. Some people find it easier to be their "true selves" online, but posting too much on Facebook doesn't get users the attention they seek. Every time you cringe, roll your eyes, and mute an annoying friend on Facebook for oversharing, you could be invalidating someone who just wants to belong. A study, conducted by Gwendolyn Seidman of Albright College and published in Computers in Human Behavior, examines how people use Facebook to express their "true selves." The true self is a concept first named in 2002: the idea that we possess qualities we'd like to be recognized for, but that we normally find ourselves unable to express in day-to-day life. On many corners of the Internet, such as comment sections, forums, and even Tumblr and Twitter to some degree, interactions take place mostly with strangers. Unfortunately, the catharsis of posting those sad quotes may only make people lonelier in the end.

United States: videos and social networks are changing perceptions of policing | Orange Actualités. United States: victims' final moments are captured in videos posted immediately to social networks. It has become almost commonplace: a Black man or woman dies under police fire, their final moments captured in videos that are posted to social networks right away, fueling accusations of bias or brutality against American police. "What gets labelled brutality or misconduct by the police is a matter of interpretation, and until now it was police departments and the big shots who did the interpreting," explains Christopher Schneider, a professor of sociology at Brandon University in Canada and author of the book "Policing and Social Media", published this year. "But now these videos are put online immediately after the incident, and the brutality can be judged by all of us," he stresses.

This Artwork Is Probably The Most Accurate (And Scary) Portrayal Of Modern Life We've Ever Seen. It's not nice, but it's certainly close to the mark. Steve Cutts is a London-based illustrator and animator who uses powerful images to criticize the sad state of society. Greed, environmental destruction, junk food and TV consumption, smartphone addiction and the exploitation of animals are all issues that have inspired his work. Cutts worked in the corporate world before choosing to go freelance, and his vitriol for the rat race really shows. He used to work at an advertising agency with global brand clients including Coca Cola, Google, Reebok, Magners, Kellogg's, Virgin, 3, Nokia, Sony, Bacardi and Toyota. His illustrations capture all the stress, despair and frustration of our dog-eat-dog world, one in which we are persuaded to consume sh*t and destroy the planet in order to keep the corporate wheels turning. It's true that Cutts's art is depressing, but only because his images are so close to the truth. Among the works shown: "Arrrgh!", "Jessica and Roger relax at home", "The Fatcat" and "The trap".

Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter. Microsoft's attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist. The company launched a verified Twitter account for "Tay" – billed as its "AI fam from the internet that's got zero chill" – early on Wednesday. The chatbot, targeted at 18- to 24-year-olds in the US, was developed by Microsoft's technology and research and Bing teams to "experiment with and conduct research on conversational understanding". "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," Microsoft said. "The more you chat with Tay the smarter she gets." But it appeared on Thursday that Tay's conversation extended to racist, inflammatory and political statements. One Twitter user has also spent time teaching Tay about Donald Trump's immigration plans. Others were not so successful.
