Thanks to the development of more effective cancer treatments, people diagnosed with cancer today live longer and enjoy a better quality of life than ever before. One of the main treatment forms is radiotherapy, in which ionizing radiation is directed towards a tumor with the aim of killing the cancerous cells while sparing healthy tissue. Radiotherapy is cost-effective compared to the alternatives, yet increasing demand, together with the overall need to reduce healthcare spending, creates strong pressure to further streamline the treatments.
In inverse planning, treatment plans are generated by solving an optimization problem that balances various conflicting objectives, such as a high dose to the target, normal tissue sparing, and low treatment complexity. Commonly, the different criteria are combined using a weighted sum, where each weight determines the relative importance of its criterion.
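The weighted-sum formulation can be illustrated with a toy example. The sketch below assumes a linear dose-influence model (dose = matrix times beamlet weights) and two hypothetical criteria, target coverage and organ-at-risk (OAR) sparing, combined with importance weights and minimized by projected gradient descent; all sizes and values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose-influence matrices: dose per voxel from each beamlet (hypothetical sizes).
D_target = rng.random((50, 10))   # target voxels x beamlets
D_oar    = rng.random((30, 10))   # organ-at-risk voxels x beamlets
d_presc  = np.ones(50)            # prescribed dose in the target

w_target, w_oar = 1.0, 0.3        # importance weights for the two criteria

def objective(x):
    # Weighted sum of conflicting criteria: target coverage vs. OAR sparing.
    return (w_target * np.sum((D_target @ x - d_presc) ** 2)
            + w_oar * np.sum((D_oar @ x) ** 2))

def gradient(x):
    return (2 * w_target * D_target.T @ (D_target @ x - d_presc)
            + 2 * w_oar * D_oar.T @ (D_oar @ x))

# Projected gradient descent keeps beamlet weights physically nonnegative.
x = np.zeros(10)
for _ in range(500):
    x = np.maximum(x - 1e-3 * gradient(x), 0.0)
```

Changing `w_oar` shifts the trade-off: a larger value spares the OAR at the cost of target coverage, which is exactly the balance a planner tunes by hand.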
Finding acceptable weights is often a manual and tedious process of trial and error, not least because evaluating a single choice of parameters requires solving the full optimization problem, which may take from a few seconds up to an hour depending on the application. Because this procedure is interactive, it is highly desirable to reduce the solution times as much as possible.
I performed much of the research groundwork for the next-generation treatment optimizer for Gamma Knife radiosurgery. This included new convex surrogates for common clinical objectives [P4, P9] as well as techniques for reducing the problem size [P2] and making it more amenable to off-the-shelf optimization solvers [J3].
I also invented AI-powered techniques for compressing the optimization problem [P11] and for automatically assigning weights based on historical treatment data [P7]. Others have proposed methods that directly predict the optimized dose distribution, but these normally fail to take the limitations of the treatment delivery into account. We, on the other hand, developed a method that incorporates the constraints directly into a neural network [P10], which means that the predicted treatment plan is actually realizable.
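One common way to build delivery constraints into a network, sketched below under assumptions of my own (the actual construction in [P10] may differ), is a final projection layer that maps unconstrained outputs onto the feasible set. Here the hypothetical constraints are nonnegative shot times and a cap on total beam-on time; both operations are differentiable almost everywhere, so the network remains trainable end to end.

```python
import numpy as np

def feasibility_layer(z, t_max=10.0):
    """Map unconstrained network outputs z onto deliverable shot times:
    nonnegative, with total beam-on time capped at t_max (hypothetical limits)."""
    t = np.maximum(z, 0.0)        # the machine cannot deliver negative times
    total = t.sum()
    if total > t_max:             # rescale onto the budget if it is exceeded
        t = t * (t_max / total)
    return t
```

Because every output of this layer is deliverable by construction, the network cannot predict a plan that violates the machine limits, regardless of how it was trained.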
Another computational bottleneck in inverse planning is dose calculation, i.e., simulating how the ionizing radiation interacts with tissue and deposits dose. I was part of a team that explored a deep learning-based method that could accurately “fast-forward” the results of a short (and very noisy) dose calculation into an accurate one [P6].
Most dose calculation methods require a CT image as input, but in clinical reality it is not always available. Targeting such scenarios, I’ve developed both methods that are robust to missing modalities [P3] and methods that explicitly synthesize the missing data [J1].
Finally, I expect that future treatment planning software will consist of hybrid learned/programmed modules chained together and trained in an end-to-end fashion. In pursuit of this vision I’ve explored how the dose calculation step can be implemented via so-called differentiable programming [P8].
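The idea can be illustrated with a minimal hand-rolled sketch (a real implementation would use a framework such as JAX or PyTorch, and [P8] uses a far more realistic dose engine): each module exposes a forward pass and a vector-Jacobian product, so gradients flow from a planning objective back through the dose calculation to the delivery parameters. All names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

class DoseKernel:
    """Differentiable dose-calculation module: dose = K @ t
    (toy pencil-beam kernel standing in for a real dose engine)."""
    def __init__(self, K):
        self.K = K
    def forward(self, t):
        return self.K @ t
    def backward(self, grad_dose):
        return self.K.T @ grad_dose      # vector-Jacobian product

class SquaredError:
    """Differentiable planning objective: ||dose - prescription||^2."""
    def __init__(self, presc):
        self.presc = presc
    def forward(self, dose):
        self.r = dose - self.presc       # cache residual for the backward pass
        return float(self.r @ self.r)
    def backward(self):
        return 2.0 * self.r

K = rng.random((40, 8))                  # voxels x beamlets (hypothetical sizes)
presc = np.ones(40)                      # prescribed dose per voxel
dose_calc, loss_fn = DoseKernel(K), SquaredError(presc)

# Optimize delivery times by backpropagating through the chained modules.
t = np.zeros(8)
for _ in range(300):
    loss = loss_fn.forward(dose_calc.forward(t))
    grad_t = dose_calc.backward(loss_fn.backward())
    t = np.maximum(t - 5e-4 * grad_t, 0.0)
```

Because the dose engine is just another differentiable module, it can be dropped into a larger learned pipeline and trained jointly with the modules around it.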