
Technologies numériques et apprentissages Digital technologies are, by definition, information and communication technologies. They are therefore frequently presented as privileged resources for learning. The internet and the computing and digital tools that accompany it have long been identified by the sociology of technology as conducive to the creation and sharing of knowledge, both within socially homogeneous communities of specialists and at the interface between heterogeneous social worlds (Dagiral & Peerbaye, 2016; Heaton & Millerand, 2013; Méadel, 2010). One point often overlooked by both optimistic and pessimistic discourses on the pedagogical potential of digital technologies is that using them itself requires entering into a learning process.
Algorithmic impact assessment: a case study in healthcare | Ada Lovelace Institute It is important to emphasise that this proposed AIA process is not a replacement for the above governance and regulatory frameworks. NMIP applicants expecting to build or validate a product from NMIP data are likely to go on to complete (or in some cases, have already completed), the processes of product registration and risk classification, and are likely to have experience working with frameworks such as the ‘Guide to good practice’ and NICE evidence standards framework. Similarly, DPIAs are widely used across multiple domains because of their legal basis and are critical in healthcare, where use of personal data is widespread across different research and clinical settings. As Table 1 shows, we recommend to the NHS AI Lab that NMIP applicant teams should be required to submit a copy of their DPIA as part of the data-access process, as it specifically addresses data protection and privacy concerns around the use of NMIP data, which have not been the focus of the AIA process.
As Kids Kickstart The Metaverse, Is Public Service Media Ready? - PSM Campaign - The Children’s Media Foundation (CMF) Our Children’s Future: Does Public Service Media Matter? David Kleeman draws on his extensive global research to highlight the challenges and opportunities for regulators and children's media producers as they prepare for changes in media habits we have only begun to imagine. "[The metaverse is] arguably as big a shift in online communications as the telephone or the internet." David Baszucki, CEO, Roblox Any debate on the future of public service media for children cannot assume that what children are doing now is what they will be doing five years from now. The former terms had a few challenges. There is, as yet, no true metaverse – Roblox and Fortnite come closest – but there is a thriving editorial exchange and digital industry in defining and building the variety of elements that ultimately will click together like a jigsaw puzzle, making a seamless whole. Who will define and create the “public service” content and experiences?
Artifice and Intelligence. Executive Director Emily Tucker… | Center on Privacy & Technology at Georgetown Law | Mar 2022 | Medium Words matter. Starting today, the Privacy Center will stop using the terms “artificial intelligence,” “AI,” and “machine learning” in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities. I will try to explain what is at stake for us in this decision with reference to Alan Turing’s foundational paper, Computing Machinery and Intelligence, which is of course most famous for its description of what the paper itself calls “the imitation game,” but which has come to be known popularly as “the Turing test.” The imitation game involves two people (one of whom takes the role of the “interrogator”) and a computer. “…in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”
Etude Diplomeo : comment l'IA impacte les études et l'orientation des jeunes Français The younger generation grew up with the internet and social media. As generative AI also becomes part of their daily lives, Diplomeo, the HelloWork group’s platform dedicated to higher-education guidance, set out to explore how young French people aged 16 to 25 relate to AI, as well as its impact on their educational and career choices. The results of its survey highlight AI’s growing place in their studies, but also in their study and career decisions. HelloWork Group is a French player in employment, recruitment, and training. Among the young people surveyed, 79% use an AI tool for their studies or their educational guidance: 55% of them at least once a month, 25% every week, and 21% daily. AI is perceived as a tool that simplifies tedious everyday tasks. Within their studies, 8 out of 10 young people who use AI use it for their coursework. A moderate impact on study and career choices
The Algorithmic Accountability Act could hold tech companies responsible. We’ve seen again and again the harmful, unintended consequences of irresponsibly deployed algorithms: risk assessment tools in the criminal justice system amplifying racial discrimination, false arrests powered by facial recognition, massive environmental costs of server farms, unacknowledged psychological harm from social media interactions, and new, sometimes-insurmountable hurdles in accessing public services. These actual harms are egregious, but what makes the current regime hopeless is that companies are incentivized to remain ignorant (or at least to claim to be) about the harms they expose us to, lest they be found liable. Many of the current ideas for regulating large tech companies won’t address this ignorance or the harms it causes. Now, in a significant step forward, lawmakers are increasingly building impact assessments into draft legislation. However, federal agencies need to set clear expectations for developers regarding what an effective impact assessment looks like.
A college kid created a fake, AI-generated blog. It reached #1 on Hacker News. | MIT Technology Review GPT-3 is OpenAI’s latest and largest language AI model, which the San Francisco–based research lab began drip-feeding out in mid-July. In February of last year, OpenAI made headlines with GPT-2, an earlier version of the algorithm, which it announced it would withhold for fear it would be abused. The decision immediately sparked a backlash, as researchers accused the lab of pulling a stunt. By November, the lab had reversed position and released the model, saying it had detected “no strong evidence of misuse so far.” The lab took a different approach with GPT-3; it neither withheld it nor granted public access. Instead, it gave the algorithm to select researchers who applied for a private beta, with the goal of gathering their feedback and commercializing the technology by the end of the year. Porr submitted an application. The trick to generating content without the need for much editing was understanding GPT-3’s strengths and weaknesses. Porr plans to do more experiments with GPT-3.
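The MIT Technology Review piece describes the workflow only in prose (apply for beta access, prompt GPT-3, publish with little editing) and does not reproduce Porr's code. The sketch below is a rough, hedged illustration of what prompting GPT-3 for a blog draft looked like through the pre-1.0 openai Python client; the engine name, prompt format, sampling parameters, and the draft_post helper are illustrative assumptions, not details from the article.

```python
# Hypothetical sketch: drafting a blog post with GPT-3 through the
# pre-1.0 openai Python client (pip install "openai<1.0").
# Not Porr's actual code; engine, prompt, and parameters are assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_post(headline: str, opening_line: str) -> str:
    """Ask GPT-3 to continue a blog post from a headline and first sentence."""
    prompt = f"Title: {headline}\n\n{opening_line}"
    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 engine exposed in the 2020 private beta
        prompt=prompt,
        max_tokens=500,     # roughly the length of a short blog post
        temperature=0.7,    # enough variety to read naturally without rambling
    )
    return prompt + response["choices"][0]["text"]

if __name__ == "__main__":
    print(draft_post(
        "Why you should stop overthinking productivity",  # illustrative headline
        "Most productivity advice quietly assumes more effort is the answer.",
    ))
```

The parameter choices above are guesses; the excerpt's point is simply that success came from matching topics to GPT-3's strengths and weaknesses so that the returned text needed only light editing before publication.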
Meta Wouldn’t Tell Us How It Enforces Its Rules In VR, So We Ran A Test To Find Out Facebook said it would be different this time. Announcing the company’s rebranding to Meta, CEO Mark Zuckerberg promised that the virtual worlds it believes to be the future of the internet would be protected from the malignancies that have plagued Facebook. “Privacy and safety need to be built into the metaverse from Day 1,” he said. In some respects, it will be different this time because virtual reality is a radically different medium from Facebook or Instagram. Transparency around these rules is important because Meta has long struggled with how to moderate content on Facebook and Instagram. Meta has said it recognizes this trade-off and has pledged to be transparent about its decision-making. We went back and asked again for Meta to consider our questions. We did not release this toxic material to the larger public.