- A promising artificial intelligence (AI)-based tool could help alleviate the burden on oncologists and improve the quality and consistency of cancer care.
- It could also deliver clinical expertise to underserved areas.
Crowdsourcing is an increasingly powerful approach for fueling innovation in many sectors but has not been routinely applied in academic medicine. Scientists at Dana-Farber/Harvard Cancer Center are exploring ways to harness this method for solutions to pressing clinical problems, such as the worldwide shortage of radiation oncologists.
Their work has led to the development of a promising artificial intelligence (AI)-based tool that can automate a complex, time-consuming aspect of radiation treatment planning for lung cancer. The tool, described online in JAMA Oncology, could help alleviate the burden on oncologists and improve the quality and consistency of cancer care. It could also deliver clinical expertise to underserved areas — like low- and middle-income countries, where lung cancer is increasingly prevalent and the number of radiation oncologists is limited.
“This work is part of a longitudinal effort to look for real-life, pressing medical questions where crowdsourcing could make a critical contribution,” says Eva Guinan, MD, director of translational research in Radiation Oncology at Dana-Farber and a senior author of the paper. “That includes asking rigorous, academic, prospective questions in the setting of those problems to study how the approach did or didn’t work and help determine what can really move medical innovation forward.”
How it worked
Guinan and her team, including longtime collaborator Karim Lakhani, a professor at Harvard Business School, and first author Raymond Mak, MD, a radiation oncologist at Dana-Farber, launched a ten-week, three-part challenge using the crowdsourcing platform Topcoder.
Participants were given access to hundreds of curated CT scans from lung cancer patients and asked to contour the tumors. This process, known as tumor segmentation, is typically performed by hand on each slice of the patient’s corresponding CT scan. It is an essential part of cancer treatment planning so tumors can be properly targeted and dosed with radiation, while sparing healthy organs and tissues.
Challenge participants were charged with developing AI-enabled algorithms to automate this process. These submissions were then used to generate the contours of a separate set of lung tumors and the results were compared to contours that had been manually defined by an expert radiation oncologist.
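The article does not spell out the scoring metric, but a standard way to compare an algorithm's contour against an expert's is the Dice similarity coefficient, which measures the overlap between two binary masks. The sketch below (a hypothetical illustration, not the study's actual scoring code) shows the idea on a toy 2D "slice":

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, expert: np.ndarray) -> float:
    """Dice overlap between two binary tumor masks (1 = tumor voxel).

    Ranges from 0 (no overlap) to 1 (identical contours).
    """
    pred = pred.astype(bool)
    expert = expert.astype(bool)
    intersection = np.logical_and(pred, expert).sum()
    total = pred.sum() + expert.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 "slice": the expert marks 3 tumor voxels; the algorithm
# marks 2 voxels, both inside the expert's contour.
expert = np.zeros((4, 4), dtype=int)
expert[1, 1:4] = 1
auto = np.zeros((4, 4), dtype=int)
auto[1, 1:3] = 1
print(dice_coefficient(auto, expert))  # 2*2 / (2+3) = 0.8
```

In practice the same computation would run over every slice of a 3D CT volume, giving a single overlap score per tumor that can be compared across algorithms and human experts.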
A total of 34 contestants submitted 45 algorithms. The top-performing algorithms from phases 2 and 3 were combined to yield a single automated tool. It performed as well as, and in some cases better than, a panel of radiation oncologists in segmenting lung tumors on patient CT scans, and it completed the task in 15 to 120 seconds. Human experts typically take between 10 and 30 minutes.
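The article does not say how the top algorithms were combined; one common ensembling scheme for segmentation is voxel-wise majority voting, where a voxel is labeled tumor only if most of the contributing algorithms agree. A minimal sketch, assuming binary masks as input:

```python
import numpy as np

def majority_vote(masks: list) -> np.ndarray:
    """Combine several binary tumor masks into one consensus mask.

    A voxel is labeled tumor (1) if more than half of the
    contributing algorithms marked it as tumor.
    """
    stacked = np.stack(masks).astype(int)
    votes = stacked.sum(axis=0)              # per-voxel vote count
    return (votes * 2 > len(masks)).astype(int)

# Three hypothetical algorithm outputs on a 1x4 strip of voxels:
m1 = np.array([[1, 1, 0, 0]])
m2 = np.array([[1, 0, 1, 0]])
m3 = np.array([[1, 1, 0, 0]])
print(majority_vote([m1, m2, m3]))  # [[1 1 0 0]]
```

Voting tends to smooth out each individual algorithm's errors, which is one reason an ensemble can match or beat its best single member.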
“When I talk about these results, I’m always asked, ‘You are going to be out of a job now, right?’” says Mak. “I reject the idea that a radiation oncologist’s job is to sit behind a computer and draw lines around a tumor. I went to medical school to be with the patient, not the computer.”
Mak adds, “While these efficiency gains are important, the potential to improve care in both developing and developed countries is the most exciting. In the U.S., we know that there is a lot of variation between radiation oncologists’ assessments. We also know that when there is substantial deviation from standards, radiation treatment outcomes are worse — patients have a lower chance of being cured.”
Despite the tool’s impressive early performance, Guinan and Mak emphasize that it is far from a finished product. The project team is currently seeking funding and other necessary resources to support its clinical development.
The JAMA Oncology study, which included Harvard Catalyst and the Laboratory for Innovation Science at Harvard (LISH, based at Harvard Business School), also notes that crowdsourcing incorporates diverse perspectives and provides rapid, efficient iteration that is difficult to replicate.
“You can have two R01-funded labs working on something, or a vast group of people where hundreds take at least a first look and then maybe 20 or so actually work on it,” says Guinan. “In the end, it really comes down to many more shots on goal.”