Thursday, September 24, 2020

The use of artificial intelligence (AI) is already widespread in health care—it can be used to read X-rays, predict treatment responses, individualize prescriptions, and support clinical decisions. Machine learning systems are effective because of the vast amounts of data used to inform them; in the case of X-rays, CT scans, and other medical images, thousands of samples annotated by doctors must be uploaded for training.

To address concerns around patient privacy and data security in medical AI, Stephen Baek, assistant professor of industrial and systems engineering at the University of Iowa, along with UI investigators Xiaodong Wu, professor of electrical and computer engineering, and Nick Street, professor of business analytics, received a $1 million phase one grant from the National Science Foundation (NSF) to lead a multi-university and industry collaboration to develop a new machine-learning platform to train medical AI models with data from around the world.

“Traditional methods of machine learning require a centralized database where patient data can be directly accessed for training a machine learning model,” said Baek. “Such methods are impacted by practical issues such as patient privacy, information security, data ownership, and the burden on hospitals, which must create and maintain these centralized databases.” 

Baek’s team is developing a decentralized, asynchronous solution called ImagiQ, which relies on an ecosystem of machine learning models — essentially AI systems that function as experts — so institutions can select models that work best for their populations, uploading and sharing the models, not patient data, with each other. As each institution improves a model using its local patient data sets, the updated model will be uploaded back to a central server. This ensemble learning approach will allow the most reliable and efficient models to come to the forefront, thereby building a better AI system for analyzing medical images such as lung X-rays that show COVID-19 or CT scans that detect tumors.
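The share-models-not-data idea described above can be illustrated with a toy sketch. This is not ImagiQ's actual implementation; the `ModelRegistry` class, the one-parameter "threshold" model, and the hospital names are all hypothetical stand-ins, chosen only to show institutions training locally and sharing fitted parameters rather than patient records, with predictions made by an ensemble vote:

```python
class ModelRegistry:
    """Hypothetical central server: stores model parameters only,
    never the patient data the models were trained on."""
    def __init__(self):
        self.models = {}  # model_id -> fitted parameter

    def upload(self, model_id, threshold):
        self.models[model_id] = threshold

    def all_models(self):
        return dict(self.models)


def train_locally(data):
    """Fit a toy one-parameter model (a decision threshold) to local data.

    Stands in for training a real image classifier; only the fitted
    parameter ever leaves the institution."""
    positives = [x for x, label in data if label == 1]
    negatives = [x for x, label in data if label == 0]
    return (max(negatives) + min(positives)) / 2  # midpoint threshold


def ensemble_predict(models, x):
    """Majority vote across all shared models (ensemble learning)."""
    votes = [1 if x > t else 0 for t in models.values()]
    return 1 if sum(votes) > len(votes) / 2 else 0


registry = ModelRegistry()
# Two institutions train on their own (private) data, then share only models.
registry.upload("hospital_a", train_locally([(1.0, 0), (2.0, 0), (5.0, 1), (6.0, 1)]))
registry.upload("hospital_b", train_locally([(0.5, 0), (3.0, 0), (4.0, 1), (7.0, 1)]))

print(ensemble_predict(registry.all_models(), 5.5))  # prints 1 (both models vote positive)
```

Because each institution uploads whenever it finishes a local update, the registry never needs a synchronized training round — the asynchronous property the team highlights.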

The research team is part of the AI-Driven Data and Model Sharing track of the 2020 cohort of the NSF Convergence Accelerator program, which is designed to leverage a convergence approach to transition basic research and discovery into practice. The UI-led team includes collaborators from Stanford University, the University of Chicago, Harvard University, Yale University, and Seoul National University, as well as industry medical AI leaders from NVIDIA, Lunit, Digital Diagnostics (formerly known as IDx Technologies), Imagoworks, and inSEER.

Over the next nine months, the team will focus on building a prototype of the system and will participate in the Accelerator’s innovation curriculum to ensure the solution has societal impact. At the end of phase one, the team will take part in a pitch competition and a proposal evaluation and, if selected, will proceed to phase two, with potential funding of up to $5 million for 24 months.

“ImagiQ will further federated learning by decentralizing the model updates and eliminating the synchronous update cycle,” said Baek. “We are going to create a whole ecosystem of machine learning models that will evolve and improve over time. High-performing models will be selected by many institutions, while others are phased out, producing more reliable and trustworthy outputs.”
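The "evolve and improve over time" dynamic Baek describes — institutions favoring strong models so weak ones are phased out — can be sketched in a few lines. This is a toy illustration under stated assumptions, not ImagiQ's method: each model is assumed to be a single decision threshold, and `select_models` is a hypothetical selection step in which an institution scores shared models on its own local validation set and retains only the top performers:

```python
def select_models(models, validation_data, keep=2):
    """Score each shared model on this institution's local validation set
    and keep only the top `keep` performers; weaker models are phased out."""
    def accuracy(threshold):
        correct = sum(1 for x, label in validation_data
                      if (1 if x > threshold else 0) == label)
        return correct / len(validation_data)

    ranked = sorted(models.items(), key=lambda kv: accuracy(kv[1]), reverse=True)
    return dict(ranked[:keep])


# Thresholds received from other institutions; one fits local data poorly.
models = {"m1": 3.0, "m2": 10.0, "m3": 3.5}
local_validation = [(2.0, 0), (4.0, 1), (6.0, 1)]
survivors = select_models(models, local_validation, keep=2)
print(sorted(survivors))  # prints ['m1', 'm3'] — the ill-fitting model is dropped
```

Repeated across many institutions, this kind of selection pressure is what lets reliable models dominate the ecosystem while unreliable ones disappear.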

Baek added that by validating across diverse, multi-institutional patient cohorts, ImagiQ will build a better AI system for analyzing medical images. 

For more information, read the NSF Convergence Accelerator 2020 cohort award announcement.