September 27, 2017

Julia Powles, a research fellow at Cornell and NYU Law, gave a talk at SIPA on September 20 about her investigative work in the field of Internet surveillance, data sharing, and technology law and policy.

Powles, who has written about the intersection of technology and law for Wired and the Guardian, has also worked with telecommunications institutions such as the International Telecommunication Union. Given the rapid evolution of technology and data sharing, she said she found herself deeply intrigued by the interaction between national security and technology, and by “a whole bunch of issues in that area of public law which were very interesting that now are quite central to technology issues.”

Powles said she believes that data sharing today presents challenges related to intellectual property.

“A lot of the time we transfer data and don’t control it,” she said. “It is similar to copyright law; there is a similar challenge with the arrival of digital and international services.”

Powles’s talk centered on her experience researching and writing a paper with journalist Hal Hodson about a controversial deal between Google’s DeepMind and the United Kingdom’s National Health Service (NHS). The case offered Powles and Hodson an opportunity to explore loopholes in the still-nebulous laws that govern data sharing, artificial intelligence (AI), and tech surveillance.

“This DeepMind case is a wonderful case to get our heads around,” she said. “The way it’s being dealt with shows a lot about where we’re at in terms of exuberance around AI, and the power imbalances between the tech industry, regulators, public servants, and the hapless NHS in this case.”

In February 2016, DeepMind—an artificial intelligence company acquired by Google—announced that it was embarking on a “pioneering project to build an alert system around acute kidney [injury], a condition which leads to up to 40,000 deaths a year in the U.K.” The agreement between Google and the NHS made the patient records of about 1.6 million North Londoners available to Google.

“There was this incredible memorandum of understanding associated with it, if you can imagine the tradeoff,” Powles explained. “What the hospital got was reputational gain, by being associated with one of the companies at the leading edge of AI, and a seat at the table of one of the most exciting developments in this field. What did DeepMind get? Access to data for free.”

Hodson had initiated an investigation into this breach of data protection, and Powles joined forces with him to work on a paper highlighting the glaring privacy violations inherent in the deal—privacy violations that even policy professionals and people in positions of influence ignored.

Hodson’s story about the case in New Scientist and the pair’s research paper prompted the Information Commissioner’s Office (ICO) to investigate the deal and raised important questions about exceptions to data protection under the purview of direct care.

“Direct care means you are developing an app for some people and you’re taking everybody’s data, and I think that would be a fundamental shift in data protection and in doctor-patient confidentiality beyond what we have seen before,” she said. “I think this is so interesting because of the political poignancy of this company.”

The mismatch between the breadth of data Google processed and the subset of patients the app was meant to serve, the “unfettered” nature of the contract, and the lack of “data minimization” were of great concern to Powles and Hodson.

This is not to say Powles opposes tech innovation; rather, she is deeply interested in the regulatory side of such data exchanges, especially in the context of geopolitics and power dynamics.

“I am super pro data-driven innovation,” she said. “I’ve worked on large-scale public health studies where you need access. I totally understand it—but the thing here was that to develop the service for some of the patients, they just had this data without any restrictions.”


— Neha Sharma MPA ’18