Deep Fake literacy

Deep Fake News. 5 Ways Writers Use Misleading Graphs To Manipulate You [INFOGRAPHIC] In this post-truth era, graphs are being used to skew data and spin narratives like never before, especially given the velocity at which these topics spread across social media. All it takes is a single graph from a less-than-reputable source, blasted out to a list of followers, to spread a false narrative around the world. We have already seen this happen many times during the COVID-19 response, which is why we added a new section featuring a few of those misleading graphs!

Now, the data doesn’t even have to be bad; it can simply be presented in a misleading way. There is a whole Wikipedia page, a Reddit community, and hundreds of articles about how graphs can be used to misinform readers. I can’t make these data-skewing creators stop, but I can help you spot misleading graphs when they crop up, and even if you’re not a designer, following data visualization best practices ensures that your own graphs are always clear and understandable. Media Manipulation Casebook.
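To make the misleading-graph point above concrete, here is a minimal matplotlib sketch (the numbers and labels are invented for illustration, not taken from the infographic) that plots the same three values twice: once with a truncated y-axis that makes a roughly 4% spread look dramatic, and once with a zero baseline that shows how similar the values really are.

```python
# Illustrative only: invented data showing how a truncated y-axis
# exaggerates small differences.
import matplotlib.pyplot as plt

labels = ["Product A", "Product B", "Product C"]
values = [98, 100, 102]  # nearly identical values

fig, (ax_misleading, ax_honest) = plt.subplots(1, 2, figsize=(8, 4))

# Truncated axis: a ~4% spread looks like a landslide.
ax_misleading.bar(labels, values)
ax_misleading.set_ylim(97, 103)
ax_misleading.set_title("Truncated y-axis (misleading)")

# Zero baseline: the bars are visibly almost the same height.
ax_honest.bar(labels, values)
ax_honest.set_ylim(0, 110)
ax_honest.set_title("Zero baseline (honest)")

plt.tight_layout()
plt.show()
```

The underlying numbers are identical in both panels; only the axis range changes, which is why "read the axes" is the first check for any suspicious chart.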

Is that video real? Fallacies // Purdue Writing Lab. Summary: This resource covers using logic within writing—logical vocabulary, logical fallacies, and other types of logos-based reasoning.

Fallacies // Purdue Writing Lab

Fallacies are common errors in reasoning that will undermine the logic of your argument. Fallacies can be either illegitimate arguments or irrelevant points, and are often identified because they lack evidence that supports their claim. Avoid these common fallacies in your own arguments and watch for them in the arguments of others. Slippery Slope: This is a conclusion based on the premise that if A happens, then eventually through a series of small steps, through B, C,..., X, Y, Z will happen, too, basically equating A and Z.

If we ban Hummers because they are bad for the environment, eventually the government will ban all cars, so we should not ban Hummers. In this example, the author is equating banning Hummers with banning all cars, which is not the same thing. Infographics Lie. Here’s How To Spot The B.S. A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women. Pornographic deepfakes are being weaponised at an alarming scale, with at least 104,000 women targeted by a bot operating on the messaging app Telegram since July.

A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women

The bot is used by thousands of people every month to create nude images of friends and family members, some of whom appear to be under the age of 18. The still images of nude women are generated by an AI that ‘removes’ items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. Some of the images produced by the bot are glitchy, but many could pass for genuine. Trusting video in a fake news world. These tools represent the latest weapons in the arsenal of fake news creators – ones far easier for a layman to use than those that came before.

Trusting video in a fake news world

While the videos produced by these tools may not presently stand up to scrutiny by forensics experts, they are already good enough to fool a casual viewer and are only getting better. (The Washington Post) How Should Countries Tackle Deepfakes? What are deepfakes?

How Should Countries Tackle Deepfakes?

Deepfakes are hyperrealistic video or audio recordings, created with artificial intelligence (AI), of someone appearing to do or say things they actually didn’t. The term deepfake is a mash-up of deep learning, which is a type of AI algorithm, and fake. How do they work? The algorithm underpinning a deepfake superimposes the movements and words of one person onto another person. Given example videos of two people, an impersonator and a target, the algorithm generates a new synthetic video that shows the targeted person moving and talking in the same way as the impersonator. Charlotte Stanton was the inaugural director of the Silicon Valley office of the Carnegie Endowment for International Peace as well as a fellow in Carnegie’s Technology and International Affairs Program.
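The explainer above stays at the conceptual level, but the classic face-swap recipe behind many deepfakes is a shared encoder with one decoder per identity. The PyTorch sketch below is an illustrative assumption of that structure (layer sizes, the 64x64 resolution, and names like Encoder and Decoder are made up for the example, not taken from any specific tool): both decoders learn to reconstruct their own person's faces from the same latent space, and the swap comes from decoding an impersonator frame with the target's decoder.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind classic
# face-swap deepfakes. All sizes and training details are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop for ONE identity from the shared latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_target = Decoder()        # learns to draw the target's face
decoder_impersonator = Decoder()  # learns to draw the impersonator's face

# Training (sketch): each decoder is trained to reconstruct its own person's
# face crops from the SHARED latent, e.g.
#   loss = mse(decoder_target(encoder(target_faces)), target_faces)
#        + mse(decoder_impersonator(encoder(impersonator_faces)), impersonator_faces)

# Swapping: encode a frame of the impersonator, decode with the target's
# decoder -- the target appears to move and speak like the impersonator.
impersonator_frame = torch.rand(1, 3, 64, 64)  # stand-in for a real face crop
swapped = decoder_target(encoder(impersonator_frame))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Because the two decoders share one encoder, the latent vector tends to carry pose and expression rather than identity, which is what lets the target's decoder "repaint" the impersonator's movements with the target's face.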

How easy are they to make? Until recently, only special effects experts could make realistic-looking and -sounding fake videos. Internet Companies Prepare to Fight the ‘Deepfake’ Future. Tool to Help Journalists Spot Doctored Images Is Unveiled by Jigsaw. A doctored, phony image of President Barack Obama shaking hands with President Hassan Rouhani of Iran.

Tool to Help Journalists Spot Doctored Images Is Unveiled by Jigsaw

A real photograph of a Muslim girl at a desk doing her homework with Donald J. Trump looming in the background on television. It is not always easy to tell the difference between real and fake photographs. But the pressure to get it right has never been more urgent as the amount of false political content online continues to rise. On Tuesday, Jigsaw, a company that develops cutting-edge tech and is owned by Google’s parent, unveiled a free tool that researchers said could help journalists spot doctored photographs — even ones created with the help of artificial intelligence.

Jigsaw, known as Google Ideas when it was founded, said it was testing the tool, called Assembler, with more than a dozen news and fact-checking organizations around the world. The tool is meant to verify the authenticity of images — or show where they may have been altered. A deepfake pioneer says 'perfectly real' manipulated videos are just 6 months away.
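Jigsaw has not published the internals of Assembler's detectors, so as a rough illustration of how "show where an image may have been altered" can work, here is a sketch of a much older and simpler forensic heuristic, Error Level Analysis (ELA), using Pillow; the file names are placeholders. The idea: resave a JPEG at a known quality and amplify the per-pixel difference, because regions pasted in after the photo's last save often recompress differently and show up as brighter patches.

```python
# Sketch of Error Level Analysis (ELA), a simple image-forensics heuristic.
# This is NOT Assembler's method; it only illustrates the general idea of
# highlighting regions of an image that compress inconsistently.
from PIL import Image, ImageChops, ImageEnhance
import io

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")

    # Recompress the image in memory at a known JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, resaved)

    # Amplify the (usually faint) differences so edited regions are visible.
    return ImageEnhance.Brightness(diff).enhance(scale)

if __name__ == "__main__":
    ela = error_level_analysis("suspect_photo.jpg")  # placeholder path
    ela.save("suspect_photo_ela.png")
```

ELA is only a heuristic (smooth, small, or heavily re-saved images can fool it), which is part of why production verification tools typically combine several detectors rather than relying on any single signal.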

Teaching fact vs. fiction when seeing is no longer believing. A Deeper Look Into The Life of An Impressionist.