The ethics of algorithms in health care: the regulator's next headache? Born alongside computing and Alan Turing's work in the 1950s, artificial intelligence technologies have never seemed so close to transforming everyday human life, arousing both fears and great hopes of progress.
This is reflected in the strong presence of AI in public debate over recent months. Alongside the report on the subject by MP Cédric Villani (LREM, Essonne), and the announcement in March of a "Plan for Artificial Intelligence" giving pride of place to the health sector (see dispatch of 30 March 2018), AI technologies were put on the agenda of the Estates General of Bioethics held between January and May 2018.
"Genomic tests and genome sequencing will help inform a treatment and how a patient will respond to it, but these tests raise ethical questions: what quality of data? What confidentiality and security for these data?"
China’s Face-Scanning Craze. Dystopia starts with 23.6 inches of toilet paper.
That’s how much the dispensers at the entrance of the public restrooms at Beijing’s Temple of Heaven dole out in a program involving facial-recognition scanners—part of the president’s “Toilet Revolution,” which seeks to modernize public toilets.
How do data come to matter? Living and becoming with personal data - Deborah Lupton, 2018.
Algorithms that Remember: Model Inversion Attacks and Data Protection Law by Michael Veale, Reuben Binns, Lilian Edwards.
Discrimination and Privacy in the Information Society - Data Mining and Profiling in Large Databases.
Kashmir Hill and Surya Mattu: What your smart devices know (and share) about you.
Data-driven discrimination: a new challenge for civil society. Data-driven technologies have been a transformative force in society.
However, while such innovations are often viewed as a positive development, discriminatory biases embedded in these technologies can compound problems for society’s more vulnerable groups.
Cities Are Watching You—Urban Sciences Graduates Watch Back.
Human rights ignored in Smart Cities Mission: civil society report. In 2017, forced evictions and demolitions of homes were documented in 32 of the cities participating in the Smart Cities Mission, according to an analysis of the government’s flagship project by the Housing and Land Rights Network (HLRN).
Some of these evictions were directly linked to Smart City projects, while others were carried out for reasons ranging from “city beautification” to “slum clearance”, said HLRN executive director Shivani Chaudhry. “The question must be asked: who are these smart cities meant to benefit? Will marginalised communities find themselves in an even more vulnerable place due to these projects?” Ms Chaudhry said, while launching the report this week. Even as the Smart Cities Mission entered its fourth year, and announced its 100th city — Shillong — civil society groups are analysing the government’s flagship urban scheme through a human rights lens.
The report, titled India’s Smart Cities Mission: Smart For Whom?
5 Questions to Ask About Artificial Intelligence – Khan's Blog. In 1956, John McCarthy, the father of Artificial Intelligence (AI), brought together expert thinkers from multiple disciplines to explore how machines could “mimic” certain human traits.
These expert thinkers came from the fields of Computer Science, Engineering, Logic, Mathematics and Psychology and wanted to find out how machines could: use language; form abstractions and concepts; solve problems reserved for humans; and improve themselves. Today, the field of AI also draws on Linguistics, Philosophy, Statistics, Economics and other disciplines. Owing to these advancements and the inclusion of various fields, the definition of AI has also evolved.
Algorithms that Remember: Model Inversion Attacks and Data Protection Law by Michael Veale, Reuben Binns, Lilian Edwards.
Algorithms and artificial intelligence: CNIL’s report on the ethical issues.
Big-data-ai-ml-and-data-protection.
Privacy Scholarship Reporter – Issue 2 - Future of Privacy Forum.
The Data That Turned the World Upside Down - Motherboard. An earlier version of this story appeared in Das Magazin in December.
On November 9 at around 8:30 AM, Michal Kosinski woke up in the Hotel Sunnehus in Zurich.
Algorithms: citizens will be able to learn the rules behind decisions that concern them. One of the implementing decrees of the Lemaire law was published in today's Journal officiel.
It covers the procedures for communicating the rules defining the algorithmic processing used by the administration to make individual decisions. A text initially planned for the end of 2016. The Lemaire law provided for the possibility for administrations to communicate to citizens "the rules" and "main characteristics" of the algorithmic processing used to make individual decisions concerning them. To alert citizens, an "explicit notice" must first be displayed, informing them of their ability to exercise this new right. Purpose and procedure for exercising the right to communication: this morning, the Journal officiel published the decree putting this new right into effect.
Big Data: EP calls for better protection of fundamental rights and privacy. Strengthened transparency of algorithms, special attention to data used for law enforcement and more investment in digital literacy needed to safeguard fundamental rights in the digital era, MEPs say in a non-legislative resolution passed on Tuesday.
The non-legislative resolution drafted by Ana Gomes (S&D, PT) on the fundamental rights implications of Big data looks at how the increasing use of Big data impacts fundamental rights, namely privacy and data protection. Big data is growing by 40% per year and has the potential to bring undeniable benefits and opportunities for citizens, businesses and governments, but it also entails significant risks for the protection of fundamental rights as guaranteed by the EU Charter and Union law. The resolution stresses the need to avoid discrimination based on the use of such data, including in law enforcement, as well as the need to ensure the security of data.
Smart TVs, Microwaves: Stop Hackers From Spying On You. From televisions to toasters, all kinds of devices are getting hooked up to the Internet.
That's bringing convenience, like air conditioning systems that can be activated remotely while you're on your way home from work. But it's also bringing new privacy concerns, as anything connected to the Internet tends to attract the attention of hackers.
Announcing A New Open-Source Privacy Standard For The Internet Of Things – Co... At Consumerist, consumer privacy and data security have been growing areas of coverage over the past few years. We regularly write about policies, corporate and government alike, that either threaten or help safeguard your privacy.
Computer-based personality judgments are more accurate than those made by humans. Edited by David Funder, University of California, Riverside, CA, and accepted by the Editorial Board December 2, 2014 (received for review September 28, 2014). Significance: This study compares the accuracy of personality judgment—a ubiquitous and important social-cognitive activity—between computer models and humans.
On What Facebook Knows – An Interview with the Man Behind Facebook’s Personality Experiment. Earlier this year, researchers from The University of Cambridge and Stanford University released a report which looked at how people’s Facebook activity could be used as an indicative measure of their psychological profile.
What they found was remarkable: using the results of a 100-question psychological survey, completed by more than 86,000 participants through an app and mapped alongside their respective Facebook likes, the researchers developed a system which, based on Facebook activity alone, could determine a person’s psychological make-up more accurately than their friends, their family, or even their partners.
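The likes-to-personality method described above amounts to a regression problem: build a binary user-by-like matrix, then fit questionnaire trait scores against it and use the fitted model to score new users. Below is a minimal sketch on synthetic data, using ridge regression as a stand-in; all names, sizes, and parameters here are assumptions for illustration, not the study's actual data or pipeline.

```python
import numpy as np

# Synthetic illustration only: this is NOT the Cambridge/Stanford study's
# data or pipeline, just a toy version of "predict a trait from likes".
rng = np.random.default_rng(0)

n_users, n_likes = 500, 60
# Binary user-by-like matrix: entry (i, j) = 1 if user i liked page j.
X = (rng.random((n_users, n_likes)) < 0.2).astype(float)

# Hypothetical latent loadings: how much each like signals the trait.
true_w = rng.normal(size=n_likes)
# Questionnaire trait scores = signal from likes + measurement noise.
y = X @ true_w + rng.normal(scale=2.0, size=n_users)

# Ridge regression: w = (X^T X + lambda * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_likes), X.T @ y)

# Predicted trait scores and their correlation with the questionnaire.
y_hat = X @ w
r = np.corrcoef(y, y_hat)[0, 1]
print(f"predictive correlation: {r:.2f}")
```

On data like this the fitted scores correlate strongly with the questionnaire scores, which is the same accuracy criterion (correlation between model judgment and self-report) the interview describes when comparing the model against friends and partners.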
Big data and psychographic profiling helped Donald Trump win the US presidential election. Contrary to expectations from polls, Donald Trump secured a surprise victory in the 2016 US Presidential Election. After the victory, there has been a scramble to explain the results. Echo chambers on social media platforms, in which biased misinformation circulated, were speculated to have contributed to the surprise win. Barack Obama accused Facebook of allowing fake news stories to spread on the platform, which helped Trump win. Facebook denied the allegations, but took strong steps to restrict the spread of misinformation on the social network.
Donald Trump agreed that social networking platforms such as Facebook and Twitter did help him win the election.
Did Cambridge Analytica influence the Brexit vote and the US election? On Saturday 23 June 2012, David Miller received an angry email. Miller, a professor of sociology at Bath University, runs something called the Powerbase website, which records the political and business connections of influential people.
Watchdog to launch inquiry into misuse of data in politics. This article is the subject of a legal complaint on behalf of Cambridge Analytica LLC and SCL Elections Limited. The UK’s privacy watchdog is launching an inquiry into how voters’ personal data is being captured and exploited in political campaigns, cited as a key factor in both the Brexit and Trump victories last year.
Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi.
Profiling the European Citizen - Cross-Disciplinary.
Big data and privacy.