Interview: The challenges of educating medical professionals about AI in the 21st century

Hazel Tang
5 min read · Dec 3, 2020

“Artificial Intelligence (AI) is universal and ultimately it’s not that hard,” said Dr. Dennis Paul Wall, Associate Professor of Pediatrics (Systems Medicine) and Biomedical Data Science and, by courtesy, of Psychiatry and Behavioral Sciences at Stanford University, during his presentation at AIMed 19.

Under the theme “educating the clinicians on AI”, Dr. Wall briefly went through some of the common failures in medicine and encouraged medical professionals to think like data scientists in order to tap into some of the new opportunities technology offers to build solutions that improve the overall efficiency and quality of service in healthcare.

AIMed recently caught up with Dr. Wall again to find out what other challenges medical professionals will face in this new decade as they continue to embrace AI.

AIMed: Some medical professionals believe it’s not mandatory for them to know how a clinical tool or a drug is made, as long as they know how it works, whether it’s safe to use on patients, and what its impact is. With that in mind, how much should medical professionals be educated on AI?

Dr. Dennis Paul Wall: I think we need a new breed of medical professionals who have some fundamental training in AI and machine learning (ML), rather than treating AI and ML models simply as tools that consume and churn data to generate outcomes. We are marching into a new era. Even if doctors and other medical professionals are not always the ones who build the AI solutions, it will be useful for them to have a clear understanding of their strengths and weaknesses to ensure proper, trustworthy deployment.

AIMed: Is “I want to learn about AI but I don’t know where to start” a common expression among medical professionals? Do you think it’s more beneficial for medical professionals to learn about AI when they have a problem in mind which they know the technology will help to solve?

Dr. Wall: Yes, it’s a good question. Doctors today, whether in research or practice, often wonder where to go to learn more about AI. But we have to remember that the AI data scientists have a very similar question: how do I find gaps or opportunities in medical practice where AI can help? Both the doc and the data scientist as individuals will struggle to identify key opportunities for AI, but together they can and will (far faster and with much higher impact). So the answer is almost always for docs and data scientists to collaborate early and often.

At Stanford, we encourage and, in certain cases, require biomedical-informatics students to shadow doctors while they practice. This fosters an invaluable two-way dialogue. The student observes and learns from the doc directly in the hospital, and the doc learns where AI can and can’t help. So, it’s a really valuable exchange.

As we infuse better AI training into medical training, I am confident that future doctors will possess at least the minimally viable core of AI skills. For now, collaboration and co-location of AI and medicine will help bring inspired AI into practice for the better.

AIMed: What are some of the common challenges medical professionals face when learning about AI?

Dr. Wall: Doctors new to AI have a notion that the model is a black box, or that the features used for AI may not be clinically meaningful (to them). Trust can be a really important issue. With time, AI can earn trust by increasing efficiency, minimizing work, being user-friendly, and of course by being accurate. Yet earning that trust takes time, and this can be a challenge.

Part of building this trust bridge is training and building understanding of the data science behind AI. How was the model trained, tested, and validated? What is a good, balanced measure of accuracy? These are complicated questions which medical professionals need to digest.

Another part is where the AI will “operate” within the healthcare workflow. AI for medicine has seen its biggest successes in imaging and radiology. This arena is rich in training data, and the models produce reliable, trustworthy outcomes that doctors can work with. If doctors and data scientists can work together on building a greater diversity of training libraries, we will quickly see similar success in nearly every facet of medical practice. Trust will grow and doctors will increasingly consider AI a partner in the decision process.
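(To make those questions concrete for readers new to the terminology, here is a minimal, hypothetical sketch of training, testing, and scoring a model with a class-balanced accuracy measure. The synthetic dataset, class weights, and split sizes are illustrative assumptions, not anything described in the interview.)

```python
# Illustrative sketch only: training, testing and validating a simple model,
# and reporting a class-balanced measure of accuracy. The synthetic dataset
# below is a stand-in for real, de-identified clinical features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Imbalanced toy data: roughly 90% "healthy", 10% "disease" (assumption for illustration).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

# Hold out a test set that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Plain accuracy can look high simply by predicting "healthy" for everyone;
# balanced accuracy averages recall over both classes and exposes that failure.
print("accuracy:         ", round(accuracy_score(y_test, pred), 3))
print("balanced accuracy:", round(balanced_accuracy_score(y_test, pred), 3))
```

On heavily imbalanced data like this, plain accuracy can look impressive simply because the majority class dominates, which is why a balanced measure is one of the questions worth asking of any clinical model.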

AIMed: Based on what you have described, it appears more practical for medical professionals to form some sort of partnership with those in the field of AI rather than venturing into it alone?

Dr. Wall: Absolutely. This is the best way for emergent discoveries and disruptive innovations to arise. Partnerships must be formed and data science really needs to become part of the fabric of medical operations. With a collaborative back-and-forth exchange — data scientist to clinician — we will expose innovation opportunities everywhere in the healthcare ecosystem.

AIMed: In that case, how do you think AI will change medical education in this new decade?

Dr. Wall: I think one possibility is that as people acquire medical training, they may also need to learn specifically about AI, not only fundamentals like linear and non-linear models but also more detailed topics like deep learning, which really wasn’t a thing until 2012. Apart from standard training, there will probably be more opportunities for medical professionals, within their sub-specialties, to at least attempt to build a model. I think it’s really important for clinicians to be challenged to find a gap that machine learning can fill.

Nevertheless, I think one of the obstacles medical professionals face right now is data capture. Models are only as good as the training data we give them, but most often, as physicians practice medicine, the way they record what they are doing remains limited. Their records are not detailed enough to build up training data for systems that could help them become more efficient.

AIMed: It’s interesting that you mentioned data. Some researchers have already looked into generating synthetic data to help train their machine learning models. What are your thoughts on this?

Dr. Wall: Indeed, while we sort out the challenges of building domain-specific training data (labor and privacy issues, to name two), synthetic data and other creative alternatives have value, but I think most agree they are inadequate. We can use these data to create a model, but there will be limits to generalizability. There is an urgent need to focus on ways to capture and label data as doctors perform the standard operating procedures of their specialty.

To give you a very simple example, physicians use stethoscopes to receive an audio feed and determine if there is an abnormality. The sound files are recordable and amenable to AI. We need efforts that focus on capturing those data so that sound decisions can be derived from them.

Ubiquitous device data can and should start playing a role in labelled data capture. In turn, this will enable us to migrate some, maybe many, of the healthcare solutions from hospitals to the home.
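(As a purely illustrative sketch of the stethoscope example, the snippet below shows one way recorded heart sounds could be turned into features and classified. The synthetic waveforms and labels are stand-ins I have assumed in place of the real labelled recordings whose capture Dr. Wall is calling for.)

```python
# Illustrative sketch only: turning auscultation audio into features for a
# classifier. Real labelled recordings are assumed away here; synthetic
# waveforms stand in for them purely so the example runs end to end.
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

SR = 4000  # sample rate in Hz (assumed for a digital stethoscope feed)

def fake_recording(abnormal, rng, seconds=5):
    """Stand-in for a real heart-sound recording; abnormal ones get an extra tone."""
    t = np.arange(seconds * SR) / SR
    signal = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(t.size)
    if abnormal:
        signal += 0.5 * np.sin(2 * np.pi * 150 * t)   # crude stand-in for a "murmur"
    return signal

def features(audio):
    """Summarise a recording as the mean log-spectrogram per frequency band."""
    _, _, sxx = spectrogram(audio, fs=SR, nperseg=256)
    return np.log1p(sxx).mean(axis=1)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)                 # 0 = normal, 1 = abnormal
X = np.array([features(fake_recording(bool(lab), rng)) for lab in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```

The point is not the particular features or model, but that none of this is possible until the audio and its clinical labels are routinely captured in the first place.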

*This interview was originally published on AIMed Blog on 30 March 2020.

