
Clearview


Clearview AI Responds to Cease-and-Desist Letters by Claiming First Amendment Right to Publicly Available Data - Harvard Journal of Law & Technology

I Got My File From Clearview AI, and It Freaked Me Out

Have you ever had a moment of paranoia just before posting a photo of yourself (or your kid) on social media? Maybe you felt a vague sense of unease about making the photo public. Or maybe the nebulous thought occurred to you: “What if someone used this for something?” Perhaps you just had a nagging feeling that sharing an image of yourself made you vulnerable, and opened you up to some unknowable, future threat. It turns out that your fears were likely justified. Someone really has been monitoring nearly everything you post to the public internet.

ACLU Called Clearview AI’s Facial Recognition Accuracy Study “Absurd”

ACLU Blasts Clearview's Facial Recognition Accuracy Claims

The American Civil Liberties Union earlier this week criticized facial recognition tool developer Clearview for making misleading claims about the accuracy of its product. Clearview apparently has been telling law enforcement agencies that its technology underwent accuracy testing modeled on the ACLU's 2018 test of Amazon's Rekognition facial recognition tool. For that test, the ACLU simulated the way law enforcement used Rekognition in the field, matching photos of all 535 members of the United States Congress against a database it built of 25,000 publicly available mugshots of arrestees. Rekognition incorrectly matched 28 lawmakers with arrestees' photos. The false matches disproportionately featured lawmakers of color.

Instagram-Scraping Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes

Obtained by BuzzFeed News: As legal pressures and US lawmaker scrutiny mount, Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, is looking to grow around the world. A document obtained via a public records request reveals that Clearview has been touting a “rapid international expansion” to prospective clients, using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses. The document, part of a presentation given to the North Miami Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, and Qatar and Singapore, whose penal codes criminalize homosexuality. Clearview CEO Hoan Ton-That declined to explain whether Clearview is currently working in these countries or hopes to work in them.

Class action suit against Clearview AI cites Illinois law that cost Facebook $550M – TechCrunch

Just two weeks ago, Facebook settled a lawsuit alleging violations of privacy laws in Illinois (for the considerable sum of $550 million). Now the controversial startup Clearview AI, which has gleefully admitted to scraping and analyzing the data of millions, is the target of a new lawsuit citing similar violations. Clearview made waves earlier this year with a business model seemingly predicated on wholesale abuse of public-facing data on Twitter, Facebook, Instagram and so on. If your face is visible to a web scraper or public API, Clearview either has it or wants it, and will be submitting it for analysis by facial recognition systems. Just one problem: that’s illegal in Illinois, and you ignore this at your peril, as Facebook found. Not only that, but this biometric data has been licensed to many law enforcement agencies, including within Illinois itself. You can read the text of the complaint here.

Getting the First Amendment wrong

Think of the last time you changed your profile picture on Facebook or Instagram. When you uploaded that photo, did you assume you were agreeing to let anyone do anything they want with that photo, including putting you in a facial recognition database to track your location and every photo of you on the Web? Facial recognition company Clearview AI seems to think so. The company is bolstering its legal team to build a First Amendment argument to help justify its dubious and dangerous facial recognition business. All of our privacy hangs in the balance. Clearview AI is wrong about privacy and wrong about the First Amendment. But the word “public” is essentially meaningless in the law.

Clearview AI’s First Amendment theory threatens privacy—and free speech

What could be one of the most consequential First Amendment cases of the digital age is pending before a court in Illinois and will likely be argued before the end of the year. The case concerns Clearview AI, the technology company that surreptitiously scraped 3 billion images from the internet to feed a facial recognition app it sold to law enforcement agencies. Now confronting multiple lawsuits based on an Illinois privacy law, the company has retained Floyd Abrams, the prominent First Amendment litigator, to argue that its business activities are constitutionally protected.

Landing Abrams was a coup for Clearview, but whether anyone else should be celebrating is less clear. A First Amendment that shielded Clearview and other technology companies from reasonable privacy regulation would be bad for privacy, obviously, but it would be bad for free speech, too. The lawsuits against Clearview are in their early stages, but there does not seem to be any dispute about the important facts.

MEPs furious over Commission’s ambiguity on Clearview AI scandal

The European Commission’s lack of a substantial response to concerns over the use of Clearview AI technology by EU law enforcement authorities has drawn the ire of MEPs on the European Parliament’s Civil Liberties Committee. US firm Clearview provides organisations – predominantly police agencies – with a database that can match images of faces against more than three billion other facial pictures scraped from social media sites. It has previously come under fire for its mass harvesting of facial images from social media. On Thursday (3 September), the European Commission’s Zsuzsanna Felkai Janssen of DG Home was pressed by MEPs to provide more clarity on the concerns related to the use of the technology in Europe, after it emerged that certain police forces had been using it.

Clearview AI’s biometric photo database deemed illegal in the EU

Clearview AI is a US company that scrapes photos from websites to create a permanent, searchable database of biometric profiles. US authorities use the face recognition database to find further information on otherwise unknown persons in pictures and videos. Following legal submissions by noyb, the Hamburg Data Protection Authority yesterday deemed such biometric profiles of Europeans illegal and ordered Clearview AI to delete the biometric profile of the complainant. A Hamburg resident and member of the Chaos Computer Club, Matthias Marx, discovered that Clearview AI, a face-tracking company based in the US, had added his biometric profile to their searchable database without his knowledge. “Imagine a world where every time you are caught on video camera, systems don’t just have your picture, but can directly identify you.” Clearview AI has to comply with the GDPR, although the Hamburg DPA did not issue a pan-European order.

Clearview’s facial recognition tech is illegal mass surveillance, Canada privacy commissioners say

By Robert Bateman. Clearview AI’s biometric database was declared unlawful in Canada earlier this month, just a week after a similar decision by German regulators. The New York-based tech firm has amassed a vast collection of more than three billion facial images by scraping publicly available data. Clearview’s algorithmic software derives “faceprints” from these images, creating a trove of biometric information that is searchable by the company’s clients, including U.S. law-enforcement agencies. In a Feb. 3 news release announcing the outcome of a yearlong investigation, Canada’s Office of the Privacy Commissioner (OPC) concluded that Clearview’s practices represented “mass surveillance” and were “illegal.” “Canada is starting to look into the full picture of facial-recognition software uses, and Clearview is one example where many in Canada don’t like what we see,” said Victoria McIntosh, an independent privacy consultant based in Nova Scotia.
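To make the “faceprint” idea above concrete: a faceprint is a numeric vector derived from a face image, and the “searchable database” is a nearest-neighbor index over those vectors. The sketch below is purely illustrative and is not Clearview’s actual system; real pipelines compute each vector with a deep neural network (e.g. a FaceNet-style 128-dimensional embedding), whereas here random vectors stand in for those embeddings, so only the indexing-and-search step is shown.

```python
# Illustrative sketch of a searchable embedding ("faceprint") database.
# Assumption: random 128-d vectors stand in for neural-net face embeddings.
import numpy as np

rng = np.random.default_rng(0)

# A "database": one 128-d embedding per photo, L2-normalized so that
# cosine similarity reduces to a plain dot product.
db = rng.normal(size=(10_000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

def search(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database entries most similar to query."""
    q = query / np.linalg.norm(query)
    sims = db @ q                       # cosine similarity to every entry
    return np.argsort(sims)[::-1][:k]   # highest similarity first

# A probe whose embedding is a noisy copy of database entry 42,
# as if the same face had been photographed twice.
probe = db[42] + 0.05 * rng.normal(size=128)
print(search(probe))  # entry 42 ranks first
```

The design point this illustrates is why such databases scale so easily: once images are reduced to fixed-length vectors, matching a new face against billions of stored ones is a single similarity search, which is what makes the privacy regulators quoted above treat the database itself, not any one query, as the harm.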

News release: Clearview AI’s unlawful practices represented mass surveillance of Canadians, commissioners say

February 3, 2021 – Technology company Clearview AI’s scraping of billions of images of people from across the Internet represented mass surveillance and was a clear violation of the privacy rights of Canadians, an investigation has found. The joint investigation by the Office of the Privacy Commissioner of Canada, the Commission d'accès à l'information du Québec, the Office of the Information and Privacy Commissioner for British Columbia and the Office of the Information and Privacy Commissioner of Alberta concluded that the New York-based technology company violated federal and provincial privacy laws.