COVID-19: A Test of Trust

Hazel Tang
5 min read · Dec 3, 2020

Health institutions are sharing our data, yet the COVID-19 pandemic has shown we remain doubtful that AI solutions can save us from a global health crisis. Hazel Tang asks whether we need to trust more.

In 2013, the National Health Service (NHS) in England gradually rolled out a new service, NHS 111, which enables the public to seek medical advice or treatment for “urgent but not life-threatening” incidents. This 24/7 hotline, and the web-based assessment that followed, has become a lifeline for many in the ongoing COVID-19 pandemic, handling nearly two million inquiries.

But it has recently been reported that data collected through NHS 111 and other NHS platforms will be used to build new virtual tools, including dashboards that help direct supplies to emerging infection hotspots or hospitals with greater needs, channel patients to facilities with higher staffing capacity, and show how the virus is spreading at the community level, highlighting its risks to vulnerable populations.

Information such as the types of ventilators in use, the capacity of Accident and Emergency (A&E) departments, the lengths of hospital stays for COVID-19 patients, and the number of NHS staff falling sick is expected to be gathered to build these new tools.

Tech companies Amazon, Microsoft, and Palantir will be providing the NHS with cloud computing services, data storage, and data-gathering software respectively in this effort. However, this has sparked privacy concerns, particularly given Palantir’s previous involvement with Cambridge Analytica. Many commentators felt the decision to collaborate was made in haste, with many details not given sufficient scrutiny.

“It’s an extremely unprecedented situation,” says Eleonora Harwich, Director of Research and Head of Digital and Tech Innovation at Reform, a leading UK think-tank for public service reform. “People tend to be more lenient when patient benefit is of concern but the outcome can also be worrying. What is the scope of these companies’ involvement in the future? Sometimes, an emergency does create a kind of short-sightedness which prevents us from fully calculating the future ramifications of decisions made right now.”

Palantir is believed to be processing NHS data on two of its platforms, Gotham and Foundry, and will be working closely with NHSx, the digital arm of the NHS. The company came on board through Faculty, a British artificial intelligence (AI) startup whose owner, Marc Warner, is the brother of Ben Warner, who was recruited by the UK government as a data science advisor last December.

Ben Warner is reported to have previously produced data models for both the Conservative Party’s general election campaign and Vote Leave’s campaign for the UK to leave the European Union. He is now part of the Scientific Advisory Group for Emergencies (Sage). However, there is no evidence linking Ben Warner to Palantir, nor any suggestion that the NHS’s move to share data with Palantir was politically motivated.

Of course, the NHS’s stance is simple: it wants to leverage technology for quick relief from the pandemic.

“I support the involvement of private companies at the point of crisis because I do think they have a very big role to play,” says Harwich. “I can’t imagine the insane pressure that NHSx is now facing. It’s normal that mistakes will be made under such circumstances. Besides, the situation has presented itself in a rather dichotomous way: It’s either you trust these companies to work on something in the interests of the public or you share none of this data.”

“But this kind of questionable corporate behavior, acting within the public sector, is really massive and goes beyond healthcare. I also wonder how aware the public is and how much influence they have on such matters. At the end of the day, I think we all have to make some kind of trade-off — we’re lured by the convenience brought about by technology to the point that we think a little less about what is being done with our data.”

Indeed, we not only have to trust developers with the way they collate and store our data, but also that the end product is reliable enough for AI to have real impact. For example, BlueDot, a Canadian global health platform built around AI, machine learning, and big data to track and predict the outbreak and spread of infectious diseases, alerted its private-sector and government clients to a cluster of “unusual pneumonia” cases occurring around a market in Wuhan, China, on 31 December 2019.

Yet it was another nine days before the WHO released its statement alerting people to the emergence of a novel coronavirus. In hindsight, BlueDot proved to be an invaluable early warning system, but the uncomfortable question remains: would the pandemic even have been able to take root had more trust been placed in the surveillance platform’s algorithm?

But while we may not be ready to fully trust warnings given by technology, we still appear happy to turn to it for a quick remedy. In the US, manufacturers can apply for emergency use authorizations (EUAs) from the Food and Drug Administration (FDA) during the COVID-19 pandemic. This pathway allows certain unapproved or uncleared products — medical devices, drugs, and biological products — to be brought to market quickly.

“These EUAs are necessary because we need more in vitro diagnostic tests, respirators, ventilators and the like,” explains Sara Gerke, Research Fellow in Medicine, AI and Law at Harvard Law School. “But I worry that we may be hurrying things at the expense of safety and effectiveness.”

Gerke emphasises that manufacturers have an ethical duty to ensure their products are safe and reliable. “It’s very important for developers to follow the law and also think about the ethical issues, such as informed consent, biases and data privacy. We have a lot of startups in the field, with most of them primarily focusing on getting their AI to work. But I urge AI developers to also think about the ethical and legal aspects early on in the design process, such as whether their product is legally classified as a medical device and thus whether they need to undergo premarket review.”

“During a public health emergency, public-private partnerships are obviously essential. But these collaborations should not be accelerated to the point where the parties have not carefully considered the terms of their agreement.” That said, Gerke believes AI can be very useful for our society if developed carefully and ethically, but she has a warning. “Even if we have a perfect AI built in the lab, it also needs to be beneficial in the setting in which it will be deployed. Human behaviour still plays a hugely significant role here. So it’s crucial that we ensure public trust is maintained and promoted at all times.”

This article was published in AIM Magazine Vol. 3, #2, the Global Health issue (pages 48–49), which debuted in June 2020.
