Towards Safer X-rays
Abstract
High-intensity X-ray radiation passing through the human body can damage cells,
increasing the risk of complications such as cancer. A possible solution is to lower
the radiation dose. In low-dose CT (LDCT) images, fundamental structures remain
easily identifiable, but noise and other artifacts are introduced. Removing the visual
effects of the artifacts caused by lowering the radiation dose has been an active area
of research in recent years, and deep learning approaches have lately demonstrated
impressive performance on LDCT denoising. In this thesis, we propose a new machine
learning-based approach to LDCT noise reduction that outperforms existing methods.
Deep learning is based on the idea of stacking many layers of neurons. In recent
years, researchers have successfully improved the performance of neural networks by
stacking more layers, and with the growing availability of high-performance GPUs and
larger datasets, deep learning has proven very useful. However, deeper networks are
more challenging to train, not because of their computational cost, but because of the
difficulty of propagating gradients through so many layers.
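As a rough illustration (not code from the thesis), the gradient-propagation difficulty can be sketched with a toy calculation: backpropagated gradients are, to a first approximation, products of per-layer derivative factors, and when each factor is below one the product shrinks exponentially with depth.

```python
# Toy sketch of the vanishing-gradient effect in a deep stack of layers.
# When each layer contributes a derivative factor below 1 -- e.g. a saturated
# sigmoid, whose derivative is at most 0.25 -- the backpropagated gradient
# shrinks exponentially with depth, so the earliest layers barely learn.
layer_factor = 0.25  # maximum derivative of the sigmoid activation
scales = {depth: layer_factor ** depth for depth in (5, 20, 50)}
for depth, grad in scales.items():
    print(f"depth {depth:2d}: gradient scale {grad:.3e}")
```

At depth 50 the gradient scale is below 1e-30, which is why techniques that shorten the gradient path, such as the skip connections discussed below, matter in very deep networks.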
Deeper neural networks are harder to train. We present a residual framework
to ease the training of networks that are substantially deeper than previous ones. In
residual learning, each subsequent layer in a deep neural network is responsible only
for fine-tuning the output of the previous layer, which it does by adding a learned
"residual" to its input. This differs from the more traditional approach, in which each
layer had to generate the entire desired output. By using residual learning for LDCT
denoising, we prevent the degradation of training accuracy as signals traverse the
network and increase the training speed.

Another aspect of our work is the Generative Adversarial Network (GAN), a
framework for estimating generative models via an adversarial process. We
simultaneously train two models: a generative model G, which we use to generate
normal-dose CT (NDCT) images, and a discriminative model D, which estimates the
probability that a sample came from the training data rather than from G. The
potential of GANs is enormous, since they can learn to mimic any distribution of data.
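To make the two ideas concrete, here is a minimal numpy sketch, not the thesis implementation: `residual_denoiser` and `discriminator` are hypothetical stand-ins (the real G and D are deep CNNs), and the arrays are toy stand-ins for CT patches. It shows the residual form y = x + F(x) and the standard GAN losses, where D maximizes log D(real) + log(1 − D(fake)) while G pushes D(fake) toward 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_denoiser(x, w):
    # Generator G in residual form: predict a correction F(x) and add it
    # back to the input, rather than regenerating the whole image.
    return x + np.tanh(x * w)  # toy F(x); the real F is a deep CNN

def discriminator(x, v):
    # D: probability that a patch is a real normal-dose (NDCT) patch.
    return 1.0 / (1.0 + np.exp(-(x * v).sum()))

clean = rng.normal(0.0, 1.0, 8)             # stand-in for an NDCT patch
noisy = clean + rng.normal(0.0, 0.3, 8)     # stand-in for the LDCT patch

w, v = 0.1, rng.normal(0.0, 0.1, 8)
fake = residual_denoiser(noisy, w)

# With a zero residual branch, G is exactly the identity mapping --
# this is what lets gradients flow unimpeded through skip connections.
assert np.allclose(residual_denoiser(noisy, 0.0), noisy)

# Adversarial losses: D wants d_loss small; G wants D(fake) close to 1.
d_loss = -(np.log(discriminator(clean, v)) + np.log(1.0 - discriminator(fake, v)))
g_loss = np.log(1.0 - discriminator(fake, v))
print(f"D loss: {d_loss:.3f}, G loss: {g_loss:.3f}")
```

In training, the two losses are minimized alternately with gradient descent; the sketch only evaluates them once to show their shape.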
The novelty of our approach lies in combining residual learning with a GAN. Training
a convolutional neural network (CNN) requires a large amount of data; we address this
problem by using patch coding. Inspired by ideas from deep learning, we combine an
autoencoder, a deconvolutional network, and skip connections into a residual learning
framework. One motivation for skipping over layers is to avoid the vanishing gradient
problem. Our experiments show that our method outperforms recent work on LDCT
image denoising in terms of peak signal-to-noise ratio (PSNR) and structural
similarity (SSIM).
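For reference, the two evaluation metrics can be sketched as follows. This is an illustrative numpy version, not the thesis's evaluation code: PSNR follows the standard 10·log10(MAX²/MSE) definition, while `global_ssim` is a simplified whole-image SSIM (the standard metric averages scores over local windows), and the images are synthetic stand-ins.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    # Peak signal-to-noise ratio in decibels: 10 * log10(MAX^2 / MSE).
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    # Simplified SSIM over the whole image (the standard definition
    # averages SSIM computed in local sliding windows).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ndct = rng.random((32, 32))                                     # toy "clean" image
ldct = np.clip(ndct + rng.normal(0.0, 0.05, ndct.shape), 0, 1)  # toy noisy image
print(f"PSNR: {psnr(ndct, ldct):.1f} dB, SSIM: {global_ssim(ndct, ldct):.3f}")
```

Higher is better for both: a successful denoiser raises the PSNR and pushes SSIM back toward 1 relative to the noisy input.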