5 June 2024
Oran Lang, Software Engineer, Google Research, and Heather Cole-Lewis, Clinical Scientist, Health Equity, Google Core
We propose a framework for understanding machine learning models in medical imaging that leverages generative AI and cross-disciplinary expert review to identify and interpret the visual cues behind model predictions.
Machine learning (ML) has the potential to revolutionize health care, from reducing workload and increasing efficiency to discovering new biomarkers and disease signals. To realize these benefits responsibly, researchers use interpretability techniques to understand how ML models make predictions. However, current saliency-based methods highlight important image regions but often fail to explain which specific visual changes drive a model's decisions. Visualizing these changes (which we call "attributes") helps raise questions about sources of bias that are not easily apparent from quantitative metrics, spanning dataset curation, model training, problem formulation, and human-AI interaction. These visualizations can also help researchers judge whether the mechanisms a model relies on may represent new insights worth further research.
In "Using Generative AI to Study Medical Imaging Models and Datasets", published in The Lancet's eBioMedicine, we explore the potential of generative models to enhance our understanding of medical imaging ML models. Building on the previously released StylEx method, which generates visual explanations of classifiers, our goal is to develop a general approach that can be widely applied in medical imaging research. To test the method, we selected three imaging modalities (external eye photos, fundus photos, and chest X-rays [CXR]) and eight prediction tasks grounded in recent scientific literature. These include established clinical tasks that serve as "positive controls", where attributes known to be predictive exist, as well as tasks that clinicians are not trained to perform. For external eye photos, we examined a classifier that detects signs of disease from images of the front of the eye. For fundus photos, we examined classifiers that have shown surprising ability to predict cardiovascular risk factors. For CXR, we examined an abnormality classifier and the surprising ability of CXR models to predict race.
GenAI framework for studying medical imaging models and datasets
Our framework is divided into four key phases:
Classifier training:
We train an ML classifier to perform a specific medical imaging task, such as detecting signs of disease. The model is frozen after this step; if a model of interest already exists, it can be used in its frozen state without further modification.
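To make the "train, then freeze" phase concrete, here is a minimal sketch using a toy logistic-regression classifier on synthetic feature vectors. This is an illustrative stand-in, not the classifier architecture used in the paper; the data, learning rate, and step count are all arbitrary choices for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_classifier(X, y, lr=0.1, steps=500, seed=0):
    """Train a toy logistic-regression 'disease' classifier with gradient descent."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / len(y)  # gradient of binary cross-entropy
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic, linearly separable "images" (flattened feature vectors).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = train_classifier(X, y)

# "Freeze" the classifier: from here on it is used for inference only,
# and its parameters are never updated again.
def frozen_predict(X):
    return sigmoid(X @ w + b)

acc = np.mean((frozen_predict(X) > 0.5) == y)
```

In a deep-learning framework, freezing would instead mean disabling gradient updates for the classifier's parameters while the generator trains around it.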
StylEx Training:
We then train a StylEx generative model consisting of a StyleGAN2-based image generator with two additional losses. The first is an autoencoder loss, which teaches the generator to produce an output image similar to the input image. The second is a classifier loss, which encourages the classifier's prediction on the generated image to match its prediction on the input image. Together, these losses allow the generator to produce images that look realistic while preserving the classifier's predictions.
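The two extra losses can be sketched as follows. The concrete forms below (pixel-wise L2 for reconstruction, KL divergence between predicted distributions for the classifier loss) are plausible assumptions for illustration, not necessarily the exact formulations used by StylEx, and they would be added to the usual GAN losses during training.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def stylex_losses(x, x_hat, clf_logits_x, clf_logits_xhat):
    """Toy versions of the two additional StylEx losses (assumed forms).

    - autoencoder loss: the generated image x_hat should resemble the input x
    - classifier loss: the classifier's prediction on x_hat should match its
      prediction on x
    """
    rec_loss = np.mean((x - x_hat) ** 2)            # pixel-wise L2 stand-in
    p = softmax(clf_logits_x)
    q = softmax(clf_logits_xhat)
    clf_loss = np.sum(p * (np.log(p) - np.log(q)))  # KL(p || q)
    return rec_loss, clf_loss

# Perfect reconstruction with identical classifier outputs: both losses vanish.
x = np.zeros((2, 2))
logits = np.array([1.0, -1.0])
rec_zero, clf_zero = stylex_losses(x, x.copy(), logits, logits)

# A distorted reconstruction incurs a positive autoencoder loss.
rec_pos, _ = stylex_losses(x, x + 0.5, logits, logits)
```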
Automatic attribute selection:
We use the StylEx model to generate visual attributes automatically by creating counterfactual visualizations for a set of images. Each counterfactual is based on a real image but modified by the StylEx generator, changing one attribute at a time. Attributes are then filtered and sorted to retain those with the greatest impact on the classifier's decisions.
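The selection step above can be sketched as: edit one style coordinate at a time, measure how much the frozen classifier's probability moves, and rank coordinates by that shift. The linear "generator" and classifier below are hypothetical stand-ins so the mechanics are visible; in StylEx the edits happen in the StyleGAN generator's style space.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-ins: a linear "generator" mapping a 6-d style vector to
# 8-d image features, and a frozen linear classifier on those features.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 6))
w = rng.normal(size=8)

def classify(style):
    return sigmoid(w @ (G @ style))

def attribute_effects(style, delta=1.0):
    """Change one style coordinate at a time (a counterfactual edit) and
    record how much the classifier's probability shifts."""
    base = classify(style)
    effects = []
    for i in range(len(style)):
        cf = style.copy()
        cf[i] += delta  # counterfactual: edit a single attribute
        effects.append(abs(classify(cf) - base))
    return np.array(effects)

style = rng.normal(size=6)
effects = attribute_effects(style)
ranking = np.argsort(effects)[::-1]  # attributes sorted by impact on the prediction
```

The top-ranked coordinates are the candidate "attributes" that would then be visualized and handed to the expert panel.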
Expert group review:
Finally, an interdisciplinary expert panel, including relevant clinical experts and social scientists, analyzes the identified attributes and interprets them in their medical and social contexts.
Conclusion
Our research demonstrates the potential of generative models to enhance the interpretability of ML models in medical imaging. By combining technological progress with interdisciplinary expertise, we can responsibly use AI to discover new knowledge, improve medical diagnosis, and address biases in health care. We encourage further research in this area and emphasize the importance of collaboration between machine learning researchers, clinicians, and social scientists.
Original text: https://research.google/blog/using-generative-ai-to-investigate-medical-imagery-models-and-datasets/