CLEARVIEW-AI

Clearview, Cameras, and Karen: Newly Released Documents Expose Facial Recognition Technologies Used Across Massachusetts — The Data for Justice Project

Clearview AI facial recognition company sued in California by civil liberties activists over ‘most dangerous’ database

ALAMEDA, California — Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission. The lawsuit, filed in Alameda County Superior Court in the San Francisco Bay Area, says the New York company violates California’s constitution; it seeks a court order barring Clearview from collecting biometric information in California and requiring it to delete data on Californians. The lawsuit says the company has built “the most dangerous” facial recognition database in the nation, has fielded requests from more than 2,000 law enforcement agencies and private companies, and has amassed a database nearly seven times larger than the FBI’s. The California lawsuit was filed by four activists and the groups Mijente and NorCal Resist.

Clearview AI's Hoan Ton-That says he's stockpiling billions of our photos

Over the last month, fears about facial recognition technology and police surveillance have intensified, all thanks to Ton-That's startup, Clearview AI. But during an interview at CNN's studios in New York City last week, Ton-That didn't seem particularly fazed, saying the last few weeks were "interesting." He demonstrated the technology and described himself as "honored" to kick off a broader conversation about facial recognition and privacy. He's eager to build a "great American company" with "the best of intentions" and wouldn't sell his product to Iran, Russia or China, he said. He claimed the technology is saving kids and solving crimes. And he said he welcomes government regulation.

But so far, Ton-That and Clearview have triggered more concerns than acclaim.

Clearview AI deemed illegal in the EU

Clearview AI is a US company that scrapes photos from websites to create a permanent searchable database of biometric profiles. US authorities use the face recognition database to find further information on otherwise unknown persons in pictures and videos. Following legal submissions by noyb, the Hamburg Data Protection Authority yesterday deemed such biometric profiles of Europeans illegal and ordered Clearview AI to delete the biometric profile of the complainant.

A Hamburg resident and member of the Chaos Computer Club, Matthias Marx, discovered that Clearview AI, a face-tracking company based in the US, had added his biometric profile to their searchable database without his knowledge. Mr. Marx: “Imagine a world where every time you are caught on video camera, systems don’t just have your picture, but can directly identify you.”

Clearview AI Responds to Cease-and-Desist Letters by Claiming First Amendment Right to Publicly Available Data - Harvard Journal of Law & Technology

Clearview Report of Findings, February 2, 2021

onezero.medium

Have you ever had a moment of paranoia just before posting a photo of yourself (or your kid) on social media? Maybe you felt a vague sense of unease about making the photo public. Or maybe the nebulous thought occurred to you: “What if someone used this for something?” Perhaps you just had a nagging feeling that sharing an image of yourself made you vulnerable, and opened you up to some unknowable, future threat. It turns out that your fears were likely justified. Someone really has been monitoring nearly everything you post to the public internet.

Podcast: The end of privacy? The spread of facial recognition

I Got My File From Clearview AI, and It Freaked Me Out

ACLU Called Clearview AI’s Facial Recognition Accuracy Study “Absurd”

Clearview AI's website previously touted that its technology had been verified using “the ACLU's facial recognition accuracy methodology.” The ACLU pushed back on that claim and asked Clearview to remove it last month. Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, has been telling prospective law enforcement clients that a review of its software based on “methodology used by the American Civil Liberties Union” is stunningly accurate.

“The Independent Review Panel determined that Clearview rated 100% accurate, producing instant and accurate matches for every photo image in the test,” read an October 2019 report that was included as part of the company’s pitch to the North Miami Police Department. “Accuracy was consistent across all racial & demographic groups.”

digitalprivacy

By Robert Bateman

Clearview AI’s biometric database was declared unlawful in Canada earlier this month, just a week after a similar decision by German regulators. The New York-based tech firm has amassed a vast collection of more than three billion facial images by scraping publicly available data. Clearview’s algorithmic software derives “faceprints” from these images, creating a trove of biometric information that is searchable by the company’s clients, including U.S. law-enforcement agencies. In a Feb. 3 news release announcing the outcome of a yearlong investigation, Canada’s Office of the Privacy Commissioner (OPC) concluded that Clearview’s practices represented “mass surveillance” and were “illegal.” “Canada is starting to look into the full picture of facial-recognition software uses, and Clearview is one example where many in Canada don’t like what we see,” said Victoria McIntosh, an independent privacy consultant based in Nova Scotia.
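
The excerpts above repeatedly describe the same underlying mechanism: collected photos are reduced to "faceprints" (embedding vectors) that clients can later search for matches. As a purely hypothetical illustration of that general technique, and not a description of Clearview's actual software, a toy version in Python might look like the sketch below; embed_face, FaceprintIndex, and the example URLs are all invented here for illustration.

# Hypothetical sketch only: a toy "searchable faceprint database" built from
# embedding vectors and cosine similarity. This is NOT Clearview's code or API.
import numpy as np

def embed_face(image_bytes: bytes, dim: int = 128) -> np.ndarray:
    """Stand-in embedding: a real system would run a face-recognition model here."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)  # unit-normalise so dot product = cosine similarity

class FaceprintIndex:
    """Minimal in-memory index mapping embedding vectors back to source URLs."""
    def __init__(self):
        self.vectors = []   # unit-normalised embeddings
        self.sources = []   # parallel list: where each image came from

    def add(self, image_bytes: bytes, source_url: str) -> None:
        self.vectors.append(embed_face(image_bytes))
        self.sources.append(source_url)

    def search(self, probe_bytes: bytes, top_k: int = 3):
        """Return the top_k most similar stored faceprints by cosine similarity."""
        probe = embed_face(probe_bytes)
        sims = np.array(self.vectors) @ probe
        best = np.argsort(sims)[::-1][:top_k]
        return [(self.sources[i], float(sims[i])) for i in best]

if __name__ == "__main__":
    index = FaceprintIndex()
    index.add(b"photo-1", "https://example.org/profile/alice.jpg")
    index.add(b"photo-2", "https://example.org/profile/bob.jpg")
    print(index.search(b"photo-1"))

A production-scale system would differ mainly in using a trained face-embedding model and an approximate nearest-neighbour index instead of this brute-force dot product, but the privacy concern described in the articles comes from the same idea: once a faceprint is stored, any new photo can be matched back to every indexed source.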

Clearview’s facial recognition tech is illegal mass surveillance, Canada privacy commissioners say

Clearview AI’s First Amendment theory threatens privacy—and free speech.

What could be one of the most consequential First Amendment cases of the digital age is pending before a court in Illinois and will likely be argued before the end of the year. The case concerns Clearview AI, the technology company that surreptitiously scraped 3 billion images from the internet to feed a facial recognition app it sold to law enforcement agencies. Now confronting multiple lawsuits based on an Illinois privacy law, the company has retained Floyd Abrams, the prominent First Amendment litigator, to argue that its business activities are constitutionally protected. Landing Abrams was a coup for Clearview, but whether anyone else should be celebrating is less clear. A First Amendment that shielded Clearview and other technology companies from reasonable privacy regulation would be bad for privacy, obviously, but it would be bad for free speech, too. The lawsuits against Clearview are in their early stages, but there does not seem to be any dispute about the important facts.

News release: Clearview AI’s unlawful practices represented mass surveillance of Canadians, commissioners say

Note: A teleconference for journalists will be held this morning. See details below.

MEPs furious over Commission’s ambiguity on Clearview AI scandal

The European Commission’s lack of substantial response to concerns over the use of Clearview AI technology by EU law enforcement authorities has drawn the ire of MEPs on the European Parliament’s Civil Liberties committee. US firm Clearview provides organisations – predominantly police agencies – with a database that is able to match images of faces with over three billion other facial pictures scraped from social media sites. It has previously come under fire for its mass-harvesting of facial images from social media. On Thursday (3 September), the European Commission’s Zsuzsanna Felkai Janssen of DG Home was pressed by MEPs to provide more clarity on the concerns related to the use of the technology in Europe, after it emerged that certain police forces had been using it. This provoked a series of written questions submitted to the Commission, from both a contingent of Renew MEPs and a separate question from GUE/NGL MEP Stelios Kouloglou.

Getting the First Amendment wrong

Think of the last time you changed your profile picture on Facebook or Instagram. When you uploaded that photo, did you assume you were agreeing to let anyone do anything they want with that photo, including putting you in a facial recognition database to track your location and every photo of you on the Web? Facial recognition company Clearview AI seems to think so. The company is bolstering its legal team to build a First Amendment argument to help justify its dubious and dangerous facial recognition business. All of our privacy hangs in the balance. Clearview AI is wrong about privacy and wrong about the First Amendment.

But the word “public” is essentially meaningless in the law.