
Bioethics Forum Essay

Apple and Google Plan to Reinvent Health Care. Should We Worry?

Editor’s note: This essay responds to an invitation (issued here and here) to submit commentaries on the ethical implications of partnerships between social media companies and biomedical researchers. The invitation is ongoing. 

Wearable devices, social media platforms, and smartphone apps are increasingly being seen as new data sources, capable of capturing complex and continuous personal health information that is relevant to understanding health and disease (see for example here). This trend has opened the way for major consumer tech companies, which have had little interest in health care in the past, to enter the space of medical research. From Apple’s ResearchKit, which allows researchers to carry out medical studies on iPhones, and the company’s reported forays into DNA collection, to Google’s Baseline Study, which aims to paint a picture of “what it means to be healthy” based on data collected on its devices, and Google Genomics, a cloud service for genomes, Silicon Valley is intent on revolutionizing medicine.

Indeed, in comparison to the new methods of acquiring and managing data that these technologies enable, traditional research models like the randomized controlled trial feel painfully slow and restricted to small populations, while the computing capacities of universities and hospitals seem antiquated. In the terms we have become accustomed to hearing from Silicon Valley, medical research appears ripe for disruption. As the call for essays and some commentators have pointed out (here and here), however, disruptive innovation in the medical field raises a number of ethical issues that it would be important to think through before the revolution goes forward.

These ethical issues can be grouped into three types: those pertaining to 1) research ethics, 2) privacy and data protection, and 3) new power asymmetries that can affect future research agendas.

Conducting research via a smartphone, which collects data through its various sensors, through surveys, or by pulling data from other apps and wearables, poses a number of questions that stem precisely from the less rigorous nature of digital data collection that is promoted as its biggest advantage. Researchers have voiced concerns, for example, about the quality of self-reported data and the possible burden of an overload of “dirty” data (apps do not know if participants are who they say they are, or if they actually have the condition being researched), while the accuracy of wearables and tracking devices remains an open question.

Informed consent in the digital context is another point of contention. Digital consent differs from traditional means of consent in that participants do not benefit from a face-to-face encounter with researchers during which they can raise questions. In ResearchKit studies, participants are shown images and sometimes short videos that explain how the study will run, and are then quizzed on the material with yes-or-no questions. But there is no opportunity to ask for further clarification. Some researchers, however, view digital consent as one of the more exciting aspects of ResearchKit, since it allows participants to read and give consent at their own pace, in their own home.
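For readers curious about the mechanics, this consent flow can be sketched with ResearchKit’s open-source consent classes. The sketch below is illustrative only; the study details, question text, and identifiers are hypothetical, not drawn from any actual study:

```swift
import ResearchKit

// A consent document describing the (hypothetical) study.
let consentDocument = ORKConsentDocument()
consentDocument.title = "Example Study Consent"

let overview = ORKConsentSection(type: .overview)
overview.summary = "What participating in this study involves."
consentDocument.sections = [overview]

// Visual step: the images and short animations shown to participants.
let visualStep = ORKVisualConsentStep(identifier: "visualConsent",
                                      document: consentDocument)

// Comprehension check: a yes/no quiz question, with no channel for
// asking a researcher follow-up questions.
let quizStep = ORKQuestionStep(identifier: "comprehensionQuiz")
quizStep.title = "Comprehension Check"
quizStep.text = "Can you withdraw from the study at any time?"
quizStep.answerFormat = ORKBooleanAnswerFormat()

// Review step: the participant reviews and signs on the device,
// at their own pace.
let signature = ORKConsentSignature(forPersonWithTitle: "Participant",
                                    dateFormatString: nil,
                                    identifier: "participantSignature")
let reviewStep = ORKConsentReviewStep(identifier: "consentReview",
                                      signature: signature,
                                      in: consentDocument)
reviewStep.reasonForConsent = "Tap Agree to join the study."

// The whole flow runs as an ordered task, presented to the user with
// an ORKTaskViewController from the host app.
let consentTask = ORKOrderedTask(identifier: "consentTask",
                                 steps: [visualStep, quizStep, reviewStep])
```

The entire exchange, from explanation to signature, happens on the device; nothing in the flow connects the participant to a human researcher.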

Another area of concern involves new types of population bias that may be inherent to research enabled by mobile technologies. ResearchKit in particular has come under such criticism. The large majority of smartphone users worldwide do not use iPhones, and iPhone users tend to be younger, better educated, and wealthier than other smartphone users. As this piece points out, ResearchKit users will skew towards a demographic that is quite different from the one typically affected by the main diseases that the first ResearchKit apps are studying, such as cardiovascular disease and diabetes.

One of the most pressing concerns, of course, is privacy. Health-related consumer apps occupy an ambiguous space between the highly regulated medical domain and the less regulated consumer market, and privacy rules for health information do not necessarily apply to consumer apps. This data can be – and typically is – shared and sold in ways that users are not aware of. Apple does require that apps created using ResearchKit have IRB approval, and has stated that it will not see data collected via ResearchKit, which should be de-identified. Currently, for example, some of the institutions using ResearchKit are working with Sage Bionetworks, which receives and anonymizes data before it is sent to researchers. But a number of studies have now demonstrated that anonymization can no longer be guaranteed (see here and here), and as health data becomes increasingly valuable, the incentives to hack, steal, and sell it will grow.

Furthermore, the privacy policies of all the apps, activity trackers, and smart scales that Apple’s HealthKit and Google Fit connect with in order to collect data are also at play here. Apple saying it won’t look at this data is one thing, but it’s difficult to imagine Google or Facebook making this claim, given that the revenue of these companies depends on the collection of personal data. Anonymization often clashes with financial interests. Google Fit, for example, could be an additional means for Google to tap into specific health-related data to sell to pharmaceutical advertisers.

The implications for research ethics and privacy that these new data collection and research approaches entail are complex and require careful reflection. But insofar as the devices and services that generate, store, and in some cases analyze these data are owned by commercial entities outside traditional health care and research, we should also be attentive to new power asymmetries that may emerge in this space, and their implications for the shaping of future research agendas.

Much of the enthusiasm surrounding research using data generated from social media and mobile technology stems from the easy access to large data sets. But if for-profit companies become mediators, gatekeepers, or proprietors of data sets that are to some extent considered a public good, the question of who gets access, for what purposes, and at what cost needs to be considered. Companies could restrict access to data sets either to their own researchers (Google and Apple are recruiting; see here and here) or to researchers from institutions that pay the right price. This may create a new kind of digital divide between big data “haves” and “have-nots,” based on access to and control over new technological infrastructures, databases, and forms of expertise, as a number of social scientists have begun pointing out.

We may also question the nature of the new research partnerships being struck between research institutes and tech corporations. Terminology can be telling: the announcement on Duke’s website of its collaboration with Google on the Baseline Study read, “Duke and Stanford to Assist Google X” – not the other way around. As philosophers and sociologists of science have argued, who asks questions in science determines which questions get asked. In this context, we might expect to find new biases in the types of research that emerge.

If new ways of generating large data sets and new methods for analyzing them can propel health care and medical research into the future, as some commentators maintain, then what is at stake here is the question of who will be in control of future research agendas. In light of the implications for research ethics and privacy touched upon here, we need to reflect carefully not only on how disruption is being carried out, but also on who should be driving it.

Tamar Sharon is an assistant professor of philosophy of technology at Maastricht University, the Netherlands, and a member of the Data and Information Technologies in Health and Medicine Lab, King’s College London.
