If you’ve ever had an MRI done, you’ll know that it’s not the most comfortable experience. The machine can make you feel claustrophobic, you’ll often hear loud thumping or tapping noises, and you might have to hold your breath a couple of times. But for most people, the hardest thing about getting an MRI, or magnetic resonance imaging, scan is being forced to stay completely still — sometimes for up to 90 minutes at a time.
“The machine needs to take images to measure what’s inside your body,” says Jonathan Scarlett, an assistant professor at the NUS School of Computing. But to construct high-quality, full-resolution images by directly collecting a large number of measurements “just takes way too long,” he says.
What if there were a way to take measurements more efficiently? That was the question Scarlett began asking in 2015, together with his colleagues at École Polytechnique Fédérale de Lausanne’s Laboratory for Information and Inference Systems, where he was based at the time under the guidance of Professor Volkan Cevher.
“So we looked into methods that would allow us to take fewer measurements without sacrificing the image quality,” says Scarlett. Apart from MRI scans, there is a range of other technologies in which large volumes of data are collected and must be processed quickly — technologies that stand to benefit from better data handling. Examples include radar, spectroscopy, and computed tomography, to name a few.
“It’s generally about data acquisition and how we can do it quickly and efficiently,” says Scarlett.
The chosen few
To address the problem, Scarlett and his colleagues adopted an approach that builds on a widespread technique called subsampling. The idea behind subsampling is straightforward: “If we don’t have the resources, such as time, to measure everything, we can try to measure just enough to reconstruct the image,” explains Scarlett, who continued the research after he moved to NUS last year. “Deciding what to measure and not to measure is known as subsampling.”
In the context of MRI, subsampling amounts to selecting information points known as frequencies. The new idea introduced by Scarlett and his colleagues was to use machine learning to learn which frequencies are best to sample. “Essentially, the idea is to make it automated. The fewer frequencies you need to take, the faster the signal acquisition should be. But it’s a fine balance — if we take too few samples, we risk compromising the quality of the reconstructed image,” he says.
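To make the subsampling idea concrete, here is a toy sketch in Python with NumPy. The random mask and the zero-filled reconstruction below are illustrative stand-ins, not the method described in the article: the point is simply that keeping only a fraction of an image’s frequencies speeds up acquisition at some cost in reconstruction quality.

```python
import numpy as np

# Illustrative example: subsample an image's 2-D frequency
# representation (k-space) and reconstruct from the kept frequencies.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))   # stand-in for an MRI slice

kspace = np.fft.fft2(image)             # full set of frequencies

# Keep only a fraction of the frequencies (here: a random 25% mask;
# the whole point of the research is to choose this mask well).
mask = rng.random(kspace.shape) < 0.25
subsampled = np.where(mask, kspace, 0)

# Naive reconstruction: inverse FFT with the missing frequencies zeroed.
reconstruction = np.real(np.fft.ifft2(subsampled))

error = np.linalg.norm(reconstruction - image) / np.linalg.norm(image)
print(f"Kept {mask.mean():.0%} of frequencies, relative error {error:.2f}")
```

With a purely random mask on noise-like data the error is large; the balance Scarlett describes is precisely about choosing which frequencies to keep so that quality survives the cut.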
The learning-based subsampling approach of Scarlett and his fellow researchers marked a departure from traditional approaches. Previous work typically involved using MRI data to create a mathematical model as an intermediate problem-solving step, and then “designing a subsampling technique centred around that model,” says Scarlett.
But it’s an approach that “is tricky and more complex than it needs to be,” he says. “You don’t actually need to construct a mathematical model, and you should always make sure that any intermediate step is not harder than the original problem.”
“So we tried to cut out that middle step and instead use machine learning to directly learn which frequencies to measure,” says Scarlett.
In one set of experiments, the team used a set of 100 brain images to train their algorithm to select which frequencies to use. “What we try to understand, based on these previous images, is the following: if we had to have fewer frequencies, which ones would have been the best to take?” says Scarlett.
“The idea is that if you can figure out which frequencies are the most useful based on the existing data, then when you acquire further images in the future, you can just take those useful ones.”
“Therefore, you can do it quicker,” says Scarlett.
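A minimal sketch of that idea follows, under the illustrative assumption that “most useful” means “highest average energy across the training images.” The synthetic training data and the energy criterion are stand-ins chosen for brevity, not the exact method from the papers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: 100 images whose low frequencies carry most of the
# energy, loosely mimicking the structure of natural or medical images.
n, size = 100, 32
freqs = np.fft.fftfreq(size)
decay = 1.0 / (0.05 + np.abs(freqs)[:, None] + np.abs(freqs)[None, :])
train = [np.real(np.fft.ifft2(decay * np.fft.fft2(
             rng.standard_normal((size, size))))) for _ in range(n)]

# "Learn" from past scans: average energy of each frequency.
energy = np.mean([np.abs(np.fft.fft2(img)) ** 2 for img in train], axis=0)

# Keep the 10% of frequencies with the highest average energy.
k = int(0.10 * energy.size)
threshold = np.partition(energy.ravel(), -k)[-k]
mask = energy >= threshold

# Future scans reuse the learned mask: measure only those frequencies.
new_img = np.real(np.fft.ifft2(decay * np.fft.fft2(
              rng.standard_normal((size, size)))))
recon = np.real(np.fft.ifft2(np.where(mask, np.fft.fft2(new_img), 0)))
err = np.linalg.norm(recon - new_img) / np.linalg.norm(new_img)
print(f"Learned mask keeps {mask.mean():.0%}; relative error {err:.2f}")
```

The mask is computed once from training data and then applied to every new scan, which is what makes the acquisition itself faster.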
From paper to patent
The results were encouraging. “Our approach performed well on the brain scan data that we used,” says Scarlett. In this and other experiments conducted, the new subsampling technique was “able to match or sometimes improve [on] the state-of-the-art methods.”
Scarlett and his collaborators reported their findings in the IEEE Journal of Selected Topics in Signal Processing in March 2016. They were also granted a U.S. patent in 2018 for their learning-based subsampling technique. A second paper, which provided a more in-depth study on MRI images, was published in May 2018.
Since then, some of Scarlett’s collaborators have conducted further research in the area. They have published papers discussing how the technique has been implemented in hardware, and also applied to dynamic MRI, which Scarlett says relates to normal MRI much as video relates to still images: a sequence of individual images pieced together over time.
In the future, the team hopes to carry out tests using larger datasets and to see if machine-based subsampling can be applied to ultrasound, computed tomography, and other types of scans.
In recent months, Scarlett has turned his attention away from data acquisition to focus on image reconstruction. The work involves devising a better reconstruction algorithm, which he describes as what “takes all the measurements and uses [them] to reconstruct the pixels to get the image you want to show the doctor.”
The research is linked to his earlier machine learning subsampling work in the sense that “if a better reconstruction algorithm comes along, we can plug it into the subsampling framework in these two papers, as the technique uses whatever reconstruction algorithm it’s been given,” Scarlett says.
That’s one of the approach’s main benefits, he says, “because it’s versatile and able to accommodate future improvements.”
This article was originally published on the NUS School of Computing website.