185,000+ IoT security cameras are vulnerable to a new worm. Persirai is a new strain of Internet of Things malware that infects more than 1,250 models of security camera, all made by an unnamed Chinese manufacturer that has sold at least 185,000 units worldwide. The vulnerability the malware exploits was discovered and documented by Pierre Kim, an independent security researcher, who located at least 185,000 vulnerable devices using the Shodan search engine. The cameras all try to tunnel out of their local firewalls by sending unencrypted data over UDP, a connectionless cousin of TCP, leaving them open to hijacking. The Internet of Things Needs a Code of Ethics - The Atlantic. In October, when malware called Mirai took over poorly secured webcams and DVRs and used them to disrupt internet access across the United States, I wondered who was responsible.
Not who actually coded the malware, or who unleashed it on an essential piece of the internet's infrastructure; instead, I wanted to know if anybody could be held legally responsible. Could the insecure devices' manufacturers be liable for the damage their products caused? Right now, in this early stage of connected devices' slow invasion of our daily lives, there's no clear answer to that question.
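Returning to the Persirai cameras: the "tunnelling out" behaviour described above, in which a device pushes unencrypted UDP datagrams outward so that a NAT or firewall opens a return path, can be sketched in a few lines of Python. This is an illustrative stand-in, not Persirai's actual code; the JSON heartbeat format, the field names, and the loopback "server" standing in for a vendor cloud endpoint are all hypothetical.

```python
import json
import socket

def make_heartbeat(device_id: str) -> bytes:
    # The real cameras reportedly phone home with device details in the
    # clear; the exact wire format is not public, so this JSON payload
    # is a hypothetical stand-in.
    return json.dumps({"id": device_id, "fw": "1.0"}).encode()

def send_heartbeat(payload: bytes, server: tuple[str, int]) -> socket.socket:
    # An outbound datagram causes the NAT/firewall to create a temporary
    # mapping; anything that can reach the mapped port can now answer
    # the device. No TLS and no authentication: anyone on the path can
    # read (or spoof) the traffic.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, server)
    return sock

if __name__ == "__main__":
    # Loopback receiver standing in for the vendor's cloud endpoint.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))
    sender = send_heartbeat(make_heartbeat("cam-01"), recv.getsockname())
    data, addr = recv.recvfrom(4096)
    print(data.decode())  # the heartbeat arrives as readable plaintext
    sender.close()
    recv.close()
```

The point of the sketch is the design flaw, not the mechanism: UDP hole punching is a legitimate NAT-traversal technique, but doing it with unauthenticated plaintext means any party that observes or guesses the mapping can impersonate the server and hijack the device.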
AI Ethics: The Privacy Challenge — Call for Papers. Abstracts due to guest editors: 15 July 2017. Manuscripts due in ScholarOne: 1 October. Publication date: May/June 2018. Sixty-five years after Alan Turing published "Computing Machinery and Intelligence" (Mind, vol. 59, no. 236, pp. 433–460), thinking machines have matured from scientific theory into practical reality.
Developers deploy artificial intelligence in domains ranging from social networks, autonomous vehicles, and drones to speech and image recognition, universal translators, precision medicine, criminal justice, and ad targeting. GDPR Part 1 Toolkit Mapping (May 2016). AI, machine learning and personal data. By Jo Pedder, Interim Head of Policy and Engagement.
Today sees the publication of the ICO's updated paper on big data and data protection. But why now? What's changed in the two and a half years since we first visited this topic? Well, quite a lot actually: big data is becoming the norm for many organisations, which use it to profile people and inform their decision-making processes, whether that's to determine your car insurance premium or to accept or reject your job application; artificial intelligence (AI) is stepping out of the world of science fiction and into real life, providing the 'thinking' power behind virtual personal assistants and smart cars; and machine learning algorithms are discovering patterns in data that traditional data analysis couldn't hope to find, helping to detect fraud and diagnose diseases.
The complexity and opacity of these types of processing operations mean that it's often hard to know what's going on behind the scenes. New Information Rights Strategy. Machine Learning with Personal Data, by Dimitra Kamarinou (Queen Mary University of London, Centre for Commercial Law Studies), Christopher Millard (Queen Mary University of London, Centre for Commercial Law Studies; Oxford Internet Institute), and Jatinder Singh (University of Cambridge, Computer Laboratory). Posted 8 November 2016; written 7 November 2016. Abstract: This paper provides an analysis of the impact of using machine learning to conduct profiling of individuals in the context of the EU General Data Protection Regulation.
We look at what profiling means and at the right that data subjects have not to be subject to decisions based solely on automated processing, including profiling, which produce legal effects concerning them or significantly affect them. Data protection laws and AI: What can we learn from the GDPR? Written by Giangiacomo Olivi Connected devices that exchange substantial volumes of data come with some obvious data protection concerns.
Such concerns increase when dealing with artificial intelligence or other devices/robots that autonomously collect large amounts of information and learn through experience. Although there are not (yet) specific regulations on data protection and artificial intelligence (AI), certain legal trends can be identified, particularly in light of the new European General Data Protection Regulation (GDPR). Accountability. The GovLab Selected Readings on Data Governance - The Governance Lab. Jos Berens (Centre for Innovation, Leiden University) and Stefaan G.
Verhulst (GovLab). As part of an ongoing effort to build a knowledge base for the field of opening governance by organizing and disseminating its learnings, the GovLab Selected Readings series provides an annotated and curated collection of recommended works on key opening governance topics. In this edition, we explore the literature on Data Governance. To suggest additional readings on this or any other topic, please email email@example.com.
Medical Devices Are the Next Security Nightmare. Hacked medical devices make for scary headlines.
Dick Cheney ordered changes to his pacemaker to better protect it from hackers. Johnson & Johnson warned customers about a security bug in one of its insulin pumps last fall. And St. Jude has spent months dealing with the fallout of vulnerabilities in some of the company’s defibrillators, pacemakers, and other medical electronics. You’d think by now medical device companies would have learned something about security reform. As hackers increasingly take advantage of historically lax security on embedded devices, defending medical instruments has taken on new urgency on two fronts.
"The entire extortion landscape has changed," says Ed Cabrera, chief cybersecurity officer at the threat research firm Trend Micro. The Internet of Health Care. Implanted medical device hacks are so memorable because they're so personal. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the... Abstract: Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a 'right to explanation' of decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR.
This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive limited information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Drones, Phones and Automobiles: How to reap the benefits of innovative technology AND foster trust - NewsroomNewsroom. The civilian drone market is taking off.
Drones could revolutionise current services, open up new ones and even improve people's quality of life, but will consumers have to give up a little privacy to reap these benefits, or can good privacy practices help enable innovation in this fast-moving new frontier? An estimate by PwC puts the value of business services using drones at £102bn by 2025. Another, by the Teal Group, predicts the global aerial drone market will be worth £11.27bn by 2025. Businesses of all kinds can see the competitive advantages that drones might give them: they can deliver packages, conduct surveys, produce accurate maps, inspect power lines, monitor rail tracks, patrol perimeter fencing and a lot more.
If you include their terrestrial equivalents, they are also carving out a niche as digital assistants that can listen out for instructions around the home or monitor elderly relatives.
AI and Ethics.