Nvidia researchers develop AI system that generates synthetic scans of brain cancer

Artificially intelligent (AI) systems are as diverse as they come from an architectural standpoint, but there’s one component they all share: datasets. The trouble is, accuracy is often a corollary of large sample sizes (a state-of-the-art diagnostic system from Google’s DeepMind subsidiary required 15,000 scans from 7,500 patients), and some datasets are harder to come by than others.

Researchers from Nvidia, the Mayo Clinic, and the MGH and BWH Center for Clinical Data Science believe they’ve come up with a solution to the problem: a neural network that itself generates training data — specifically, synthetic three-dimensional magnetic resonance images (MRIs) of brains with cancerous tumors. It’s described in a paper (“Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks”) being presented today at the Medical Image Computing & Computer Assisted Intervention (MICCAI) conference in Granada, Spain.

“We show that for the first time we can generate brain images that can be used to train neural networks,” Hoo-Chang Shin, a senior research scientist at Nvidia and lead author on the paper, told VentureBeat in a phone interview.

The AI system, which was developed using Facebook’s PyTorch deep learning framework and trained on an Nvidia DGX platform, leverages a generative adversarial network (GAN) — a two-part neural network consisting of a generator that produces samples and a discriminator that attempts to distinguish the generated samples from real-world samples — to create convincing MRIs of abnormal brains.
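For the curious, that two-part setup can be sketched in a few dozen lines of PyTorch. The toy code below is a hypothetical illustration, not the architecture from the paper: the layer widths, the 16-voxel volumes, and the random stand-in “real” batch are all assumptions chosen to keep the example self-contained and runnable.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent noise vector to a small 3D volume (toy resolution)."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 128, 4, 1, 0),  # 1^3 -> 4^3
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, 2, 1),     # 4^3 -> 8^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 1, 4, 2, 1),       # 8^3 -> 16^3
            nn.Tanh(),                                # intensities in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class Discriminator(nn.Module):
    """Scores a 3D volume: high logit for real, low for generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),    # 16^3 -> 8^3
            nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),  # 8^3 -> 4^3
            nn.Conv3d(128, 1, 4, 1, 0),                                    # 4^3 -> 1 logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

g, d = Generator(), Discriminator()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 1, 16, 16, 16)  # stand-in for a batch of real MRI patches
z = torch.randn(8, 64)

# Discriminator step: push real toward 1, generated toward 0.
fake = g(z).detach()
loss_d = bce(d(real), torch.ones(8)) + bce(d(fake), torch.zeros(8))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into predicting 1 for fakes.
loss_g = bce(d(g(z)), torch.ones(8))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Training alternates these two steps until the discriminator can no longer reliably tell the generator’s volumes from real ones.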

The team sourced two publicly available datasets — the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) — to train the GAN, and set aside 20 percent of BRATS’ 264 studies for performance testing. Memory and compute constraints forced the team to downsample the scans from a resolution of 256 x 256 x 108 to 128 x 128 x 54, but they used the original images for comparison.
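Concretely, that preprocessing can be expressed in a few lines. In the hypothetical fragment below, PyTorch’s trilinear interpolation halves each dimension and a random permutation holds out 20 percent of studies; the tensor is a random stand-in, with only the shapes taken from the story.

```python
import torch
import torch.nn.functional as F

# Halve each dimension: 256 x 256 x 108 -> 128 x 128 x 54.
scan = torch.randn(1, 1, 108, 256, 256)  # (batch, channel, depth, height, width)
small = F.interpolate(scan, size=(54, 128, 128), mode="trilinear", align_corners=False)
print(small.shape)  # torch.Size([1, 1, 54, 128, 128])

# Hold out 20 percent of BRATS' 264 studies for testing.
n_studies = 264
perm = torch.randperm(n_studies)
n_test = int(0.2 * n_studies)  # 52 studies
test_ids, train_ids = perm[:n_test], perm[n_test:]
```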

The generator, fed images from ADNI, learned to produce synthetic brain scans complete with white matter, grey matter, and cerebrospinal fluid. Next, when set loose on the BRATS dataset, it generated full segmentations with tumors.
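That second stage amounts to conditional synthesis: instead of pure noise, the generator takes a segmentation label map and renders an MRI to match. The tiny network below is a hypothetical sketch of the idea, not the model from the paper; the four one-hot label channels (white matter, grey matter, cerebrospinal fluid, tumor) and the toy resolution are assumptions.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Image-to-image generator: label map in, synthetic MRI volume out."""
    def __init__(self, n_labels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_labels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, labels):
        return self.net(labels)

labels = torch.zeros(1, 4, 16, 32, 32)  # one-hot anatomy + tumor channels (toy size)
with torch.no_grad():
    mri = CondGenerator()(labels)       # synthetic scan aligned to the label map
```

In a full pipeline, an adversarial loss like the one above would push these conditional outputs toward realism.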

The GAN annotated the scans, a task that can take a team of human experts hours. And because it treated the brain and tumor anatomy as two distinct labels, it allowed researchers to alter the tumor’s size and location or to “transplant” it to scans of a healthy brain.
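Because those labels are just voxel masks, the editing is easy to picture. The NumPy fragment below is a hypothetical sketch: it lifts a tumor mask from one study, shifts it, and pastes it into a healthy brain’s label map, which a conditional generator could then render as a brand-new scan.

```python
import numpy as np

# A healthy brain's label map: 0 = background, 1 = brain tissue.
healthy = np.zeros((54, 128, 128), dtype=np.uint8)
healthy[10:44, 20:108, 20:108] = 1  # crude brain region for illustration

# A tumor mask taken from another (labeled) study.
tumor = np.zeros(healthy.shape, dtype=bool)
tumor[20:28, 40:52, 40:52] = True

# Relocate the tumor, then "transplant" it as a separate label (2 = tumor).
shifted = np.roll(tumor, shift=(0, 15, -10), axis=(0, 1, 2))
transplanted = healthy.copy()
transplanted[shifted] = 2
```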

“Conditional GANs are perfectly suited for this,” Shin said. “[It can] remove patients’ privacy concerns [because] the generated images are anonymous.”

So how’d it do? When the team trained a machine learning model using a combination of real brain scans and synthetic brain scans produced by the GAN, it achieved 80 percent accuracy — 14 percent better than a model trained on actual data alone.
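The augmentation recipe itself is simple to express: the downstream model just trains on the union of real and GAN-generated examples. The fragment below is a hypothetical sketch with random stand-in tensors and binary labels.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for real and synthetic volumes with per-study labels.
real_x, real_y = torch.randn(20, 1, 16, 16, 16), torch.randint(0, 2, (20,))
synth_x, synth_y = torch.randn(20, 1, 16, 16, 16), torch.randint(0, 2, (20,))

# Mix the two sources into one training set.
train_set = ConcatDataset([TensorDataset(real_x, real_y),
                           TensorDataset(synth_x, synth_y)])
loader = DataLoader(train_set, batch_size=8, shuffle=True)

x, y = next(iter(loader))  # batches now interleave real and synthetic scans
```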

“Many radiologists we’ve shown the system [to] have expressed excitement,” Shin said. “They want to use it to generate more examples of rare diseases.”

Future research will investigate the use of higher-resolution training images and larger datasets across diverse patient populations, Shin said. And improved versions of the model might shrink the boundaries around tumors so that they don’t look “superimposed.”

It’s not the first time Nvidia researchers have employed GANs to transform brain scans. This summer, they demonstrated a system that could convert CT scans into 2D MRIs and another system that could align two or more MRI images of the same scene with superior speed and accuracy.

Artificial intelligence is transforming health care, which is hardly news to folks who’ve followed Google subsidiary DeepMind’s collaboration with the U.K.’s National Health Service or Nvidia’s recent investments in medical imaging. But for those who haven’t, a report published today by research firm CB Insights nicely sums up the state of the sector.

CB Insights’ latest AI in Healthcare dispatch packs more than a few juicy nuggets, including this headliner: AI startups have raised $4.3 billion across 576 funding rounds since 2013, topping all other industries. Another shocker? In the first half of 2018, China leapfrogged the U.K. to become the second-most active country for health care deals.

“AI in health care is geared toward improving patient outcomes, aligning the interests of various stakeholders, and reducing costs,” analysts for CB Insights wrote. “Chinese big tech companies are now entering into health care AI with strong backing from the government and are bringing products from other countries to mainland China through partnerships.”

Pharmaceutical companies are taking an interest in AI, the report noted, particularly in startups that aim to expedite drug discovery. In May 2018, Pfizer entered into a strategic partnership with XtalPi, a company developing “computation-based rational drug design.” Movers and shakers including Novartis, Sanofi, GlaxoSmithKline, Amgen, and Merck have followed suit with similar arrangements.

Fortunately for them, the U.S. Food and Drug Administration has fast-tracked certain categories of AI services, opening “commercial pathways” for the more than 70 AI imaging and diagnostics companies that have raised equity since 2013.

“The [agency] is focused on clearly defining and regulating ‘software-as-a-medical-device’, especially in light of recent rapid advances in AI,” said the report.

Despite AI’s recent encroachments, CB Insights predicts it won’t replace clinicians anytime soon. Machine learning algorithms learn from annotated datasets; Google’s DeepMind, for example, trained an eye disease-screening model on 14,884 labeled scans. And humans — not machines — are doing the bulk of the labeling.

“The samples needed to be annotated by specialists, because if a sample doesn’t have any annotation we don’t know if this is a healthy person or if it’s a sample from a sick person … This was a pretty important step,” Dr. Min Wanli, an Alibaba Cloud executive, said in a 2017 interview.

That said, you can look forward to a future with fewer doctor visits. Self-diagnostic apps like Dip.io, which uses a urinalysis dipstick and computer vision algorithms to analyze test strips via a smartphone, and Biofourmis, which pulls data from wearables to predict health outcomes, have been given the green light by regulators in the U.S.

The analysts concluded: “Artificial intelligence is turning the smartphone and consumer wearables into powerful at-home diagnostic tools.”
