A few weeks ago, The New York Times revealed that the British political data consulting firm Cambridge Analytica1 improperly accessed, manipulated, and retained the data of over 50 million Facebook users with the help of Cambridge University researcher Aleksandr Kogan.2
The Cambridge Analytica scandal highlights a serious research ethics problem. In this case, an academic researcher who may have had legitimate permission to access Facebook user data behaved badly, to say the least. Using social media networks for research is still in its infancy – we have much to learn, and there will be mistakes. Cambridge Analytica is an example of deceit that has the potential to make all academic researchers look bad. Yet there are many honest and well-meaning researchers exploring new uses of social media platforms like Facebook, Twitter and Instagram who are in uncharted territory, operating without much in the way of ethical guidelines. Perhaps the Cambridge Analytica case provides a much needed opportunity to shine a light on the missing infrastructures needed to guide responsible research practices. Let’s hope so, and do our part by taking action.
The default for academic researchers is to punt the responsibility for regulatory compliance and ethical oversight to the local research ethics board, or Institutional Review Board (IRB). However, in the new age of digital health research that involves mobile apps and passive, pervasive sensing tools (e.g., wearable fitness trackers, apps that can detect blood glucose), the IRB may not be up to speed on the technologies and, as such, unable to properly evaluate risks in relation to potential study benefits. Research conducted with IRB members revealed growing concerns that accurate assessment of study risks and appropriate risk management strategies are increasingly difficult in health research using MISST1 tools and strategies.3 Study participants expressed an interest in finding experts who could assist with evaluating these studies.
Sara Holoubek, CEO of Luminary Labs, recently posed the question: “In a world where any data can be considered health data, how are organizations navigating ethical boundaries?” Her interviews with several leaders in the health-tech arena revealed an absence of standards and a lack of attention to data security. Moreover, the research ecosystem includes not only traditional academic and pharma researchers, who are socialized to work with IRBs and to reflect on research ethics, but also tech industry researchers who may not have an ethics review process in place. Holoubek urges big tech leaders to reframe their thinking and consider hiring “health tech ethicists” to lead decision making when navigating the uncharted waters of emerging technology.4
Regardless of whether you are a researcher, tool-maker, ethicist or regulator, we all have a part to play in prioritizing ethical considerations in the tech-enabled health research environment. There is an urgent need for academia, pharma and the tech industry to collaborate in shaping the infrastructures that will guide ethical digital health research. The Connected and Open Research Ethics (CORE) initiative has begun to do just that. CORE is a research ethics learning community of over 500 digital health stakeholders working to navigate the ethical, regulatory and social implications of tech-enabled research. CORE provides an environment to address these challenges through its Forum and Resource Library. We invite you to join in and contribute to the development of guiding principles that will shape ethical practices in the digital age.
Learn more about the CORE Project by visiting: thecore-platform.ucsd.edu
Follow us on Twitter: @ReCODE_Health
Works Cited