
Week 7: About algorithms and bias


- Safiya Umoja Noble: "Just Google It": Algorithms of Oppression
- Algorithms of Oppression: Safiya Umoja Noble
- Safiya Umoja Noble: The Internet Unleashed: Knight Media Forum 2020
- Joy Buolamwini: How I'm fighting bias in algorithms
- Peter Schweizer on Google's Control
- Peter Schweizer: The Algorithm Chooses for You
- The Creepy Line - Full Documentary on Social Media's manipulation of society
- Princeton: Biased bots: Artificial-intelligence systems echo human prejudices

In debates over the future of artificial intelligence, many experts think of these machine-based systems as coldly logical and objectively rational.

Princeton: Biased bots: Artificial-intelligence systems echo human prejudices

But in a new study, Princeton University-based researchers have demonstrated how machines can be reflections of their creators in potentially problematic ways. Common machine-learning programs trained with ordinary human language available online can acquire the cultural biases embedded in the patterns of wording, the researchers reported in the journal Science April 14. These biases range from the morally neutral, such as a preference for flowers over insects, to discriminatory views on race and gender. Identifying and addressing possible biases in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, as in online text searches, image categorization and automated translations.
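The measurement idea behind findings like these is simple: in a word-embedding model, a word's bias can be scored as the difference between its average cosine similarity to one attribute set (e.g. pleasant words) and to another (e.g. unpleasant words). A minimal sketch of that association score, using tiny made-up vectors rather than real embeddings (the vectors and word choices below are illustrative assumptions, not data from the study):

```python
# Sketch of an embedding-association score: bias is measured as the
# difference in average cosine similarity between a target word and two
# attribute sets. The 3-d vectors below are hand-made stand-ins.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word_vec, pleasant_vecs, unpleasant_vecs):
    """Positive score: word sits closer to 'pleasant' than 'unpleasant'."""
    mean_p = sum(cosine(word_vec, p) for p in pleasant_vecs) / len(pleasant_vecs)
    mean_u = sum(cosine(word_vec, u) for u in unpleasant_vecs) / len(unpleasant_vecs)
    return mean_p - mean_u

# Hypothetical vectors, constructed so "flower" lies near the pleasant
# cluster and "insect" near the unpleasant one.
pleasant = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
unpleasant = [[-0.9, 0.1, 0.0], [-0.7, 0.3, 0.1]]
flower = [0.85, 0.15, 0.05]
insect = [-0.8, 0.2, 0.05]

score_flower = association(flower, pleasant, unpleasant)
score_insect = association(insect, pleasant, unpleasant)
print(f"flower: {score_flower:+.2f}, insect: {score_insect:+.2f}")
```

With real embeddings trained on web text, the same arithmetic surfaces the morally neutral preferences (flowers over insects) and the discriminatory ones the researchers describe.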

This bias about occupations can end up having pernicious, sexist effects.

On Algorithmic Literacy, with Barbara Fister from Project Information Literacy — Never Gallery Ready

Show notes and related links:

- Information Literacy in the Age of Algorithms: Student Experiences with News and Information, and the Need for Change (Alison J. Head, Barbara Fister, and Margy MacMillan, Project Information Literacy)
- Facebook–Cambridge Analytica data scandal
- Was software responsible for the financial crisis?

- Subprime Attention Crisis, Tim Hwang
- What Is Artificial Intelligence? Crash Course AI #1

Algorithm Study (January 15, 2020) – Project Information Literacy

Abstract: This report presents findings about how college students conceptualize the ever-changing online information landscape and navigate volatile and popular platforms that increasingly employ algorithms to shape and filter content.

Researchers conducted 16 focus groups with 103 undergraduates and interviewed 37 faculty members, collecting qualitative data at eight US colleges and universities across the country. Findings suggest that a majority of students know that popular sites such as Google, YouTube, Instagram, and Facebook use algorithms to collect massive amounts of their personal data, but they still find the sites too useful to abandon.

Many are indignant about websites that mine their clicks to sell them products, but resigned to the powers of an unregulated media environment.

- Algorithmic Bias and Fairness: Crash Course AI #18
- Cats Vs Dogs? Let's make an AI to settle this: Crash Course AI #19

Brookings: Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms

Introduction: The private and public sectors are increasingly turning to artificial intelligence (AI) systems and machine-learning algorithms to automate simple and complex decision-making processes.

The mass-scale digitization of data and the emerging technologies that use it are disrupting most economic sectors, including transportation, retail, advertising, and energy, among other areas.

Facing the algorithms

Many privacy advocates are alarmed by the growing reach of facial recognition.

Early last year, The New York Times ran a series of articles about Clearview AI, a small company that until then had remained largely unknown. Founder Hoan Ton-That marketed the software by offering trial versions in “cold call” emails to law enforcement personnel. Reaction to the Times coverage was swift. In New Jersey, Attorney General Gurbir S. Grewal subsequently instructed all of the state’s prosecutors to ban police use of the Clearview AI app.

Biased Algorithms Learn From Biased Data: 3 Kinds Biases Found In AI Datasets

“Imagine a scenario in which self-driving cars fail to recognize people of color as people—and are thus more likely to hit them—because the computers were trained on data sets of photos in which such people were absent or underrepresented,” Joy Buolamwini, a computer scientist and researcher at MIT, told Fortune in a recent interview.


Buolamwini’s research revealed that facial recognition software from tech giants Microsoft, IBM, and Amazon, among others, could identify lighter-skinned men but not darker-skinned women.

- The Truth About Algorithms
- Algorithmic Justice: Race, Bias, and Big Data

The bigot in the machine – barbara fister

The New York Technical Services Librarians, an organization that has been active since 1923 – imagine all that has happened in tech services since 1923! – invited me to give a talk about bias in algorithms. They quickly got a recording up on their site and I am, more slowly, providing the transcript. Thanks for the invite and all the tech support, NYTSL!

From viral conspiracies to exam fiascos, algorithms come with serious side effects

Will Thursday 13 August 2020 be remembered as a pivotal moment in democracy’s relationship with digital technology?


Because of the coronavirus outbreak, A-level and GCSE examinations had to be cancelled, leaving education authorities with a choice: give pupils the grades their teachers had predicted, or use an algorithm. They went with the latter. The outcome was that more than one-third of results in England (35.6%) were downgraded by one grade from the mark issued by teachers, meaning that many pupils did not get the grades they needed to get into their university of choice. More ominously, the proportion of private-school students receiving A and A* grades was more than twice as high as the proportion at comprehensive schools, underscoring the gross inequality in the British education system. What happened next was predictable but significant.

- Social-Media Algorithms Rule How We See the World. Good Luck Trying to Stop Them (find in Rutgers Libraries)
- Andreas Ekström: The moral bias behind your search results
- UNCG Libraries ULVLC: "Algorithms of Oppression" (Google Slides)
- Race, Technology, and Algorithmic Bias
- YouTube Algorithms: How To Avoid the Rabbit Hole
- Tech Bias and Algorithmic Discrimination
- The Best Algorithms Still Struggle to Recognize Black Faces
- AI Fairness 360

AI can be sexist and racist — it’s time to make it fair

By James Zou and Londa Schiebinger. When Google Translate converts news articles written in Spanish into English, phrases referring to women often become ‘he said’ or ‘he wrote’.


Software designed to warn people using Nikon cameras when the person they are photographing seems to be blinking tends to interpret Asians as always blinking. Word embedding, a popular algorithm used to process and analyse large amounts of natural-language data, characterizes European American names as pleasant and African American ones as unpleasant. These are just a few of the many examples uncovered so far of artificial intelligence (AI) applications systematically discriminating against specific populations.
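Failures like the camera and facial recognition examples above are typically surfaced by a disaggregated audit: run the system on labeled examples and compare its accuracy per demographic group rather than in aggregate. A minimal sketch of such an audit, with entirely made-up group names, labels, and predictions chosen to mimic a Gender Shades-style accuracy gap:

```python
# Sketch of a per-group accuracy audit: aggregate accuracy can look fine
# while one group's accuracy is far worse. All data below is invented.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit records for a classifier that performs well on one
# group and poorly on another.
records = [
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 1, 1),
    ("lighter-skinned men", 0, 0), ("lighter-skinned men", 1, 1),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 0, 1),
    ("darker-skinned women", 1, 1), ("darker-skinned women", 1, 0),
]
rates = accuracy_by_group(records)
for group, acc in rates.items():
    print(f"{group}: {acc:.0%}")
```

Toolkits such as AI Fairness 360, mentioned in the list above, package this kind of disaggregated metric (and mitigation algorithms) for real models; the sketch only shows the core bookkeeping.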