Last updated on May 5, 2020. This conference program is tentative and subject to change.
Technical Program for Monday, April 6, 2020

MoAaO1 Oral Session, Oakdale I-II
FMRI Analysis

09:00-09:15, Paper MoAaO1.1
Diffeomorphic Smoothing for Retinotopic Mapping

Tu, Yanshuai | Arizona State University |
Ta, Duyan | Arizona State University |
Lu, Zhonglin | New York Univ |
Wang, Yalin | Arizona State University |
Keywords: Brain, Functional imaging (e.g. fMRI), image filtering (e.g. mathematical morphology, wavelets,...)
Abstract: Retinotopic mapping, the mapping of visual input on the retina to cortical neurons, is an important topic in vision science. Typically, cortical neurons are related to visual input on the retina using functional magnetic resonance imaging (fMRI) of cortical responses to slowly moving visual stimuli on the retina. Although it is well known from neurophysiology studies that retinotopic mapping is locally diffeomorphic (i.e. smooth, differentiable, and invertible) within each local area, the retinotopic maps from fMRI are often not diffeomorphic, especially near the fovea, because of the low signal-to-noise ratio of fMRI. The aim of this study is to develop and solve a mathematical model that produces diffeomorphic retinotopic mapping from fMRI data. Specifically, we adopt a concept from geometry, the Beltrami coefficient, as the tool to define diffeomorphism, and model the problem in an optimization framework. We then solve the model with numerical methods. The results obtained from both synthetic and real retinotopy datasets demonstrate that the proposed method is superior to conventional smoothing methods.
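The diffeomorphism criterion in this abstract is concrete enough to sketch: a planar map is locally orientation-preserving and invertible exactly where the magnitude of its Beltrami coefficient stays below 1. A minimal numerical check (our own illustration with finite differences, not the authors' code) might look like:

```python
import numpy as np

def beltrami_coefficient(u, v, h=1.0):
    """Finite-difference Beltrami coefficient of the planar map f = (u, v).

    |mu| < 1 everywhere means the map is locally orientation-preserving and
    invertible (a local diffeomorphism); |mu| >= 1 flags folds or flips.
    """
    f = u + 1j * v
    fy, fx = np.gradient(f, h)            # derivatives along rows (y) and columns (x)
    f_z    = 0.5 * (fx - 1j * fy)         # Wirtinger derivative d/dz
    f_zbar = 0.5 * (fx + 1j * fy)         # Wirtinger derivative d/dzbar
    return f_zbar / f_z

# Example: the linear map (x, y) -> (x, 0.5*y) is a diffeomorphism,
# and its Beltrami coefficient is the constant 1/3.
xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
mu = beltrami_coefficient(xs, 0.5 * ys)
print(np.abs(mu).max())                   # stays below 1 everywhere
```

Note the ratio cancels the grid spacing, so `h` only matters for non-uniform grids.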

09:15-09:30, Paper MoAaO1.2
Improved Functional MRI Activation Mapping in White Matter through Diffusion-Adapted Spatial Filtering

Abramian, David | Linköping University |
Larsson, Martin | Lund University, Combain Mobile AB |
Eklund, Anders | Linköping University |
Behjat, Hamid | Lund University |
Keywords: image filtering (e.g. mathematical morphology, wavelets,...), fMRI analysis
Abstract: Brain activation mapping using functional MRI (fMRI) based on blood oxygenation level-dependent (BOLD) contrast has been conventionally focused on probing gray matter, the BOLD contrast in white matter having been generally disregarded. Recent results have provided evidence of the functional significance of the white matter BOLD signal, showing at the same time that its correlation structure is highly anisotropic, and related to the diffusion tensor in shape and orientation. This evidence suggests that conventional isotropic Gaussian filters are inadequate for denoising white matter fMRI data, since they are incapable of adapting to the complex anisotropic domain of white matter axonal connections. In this paper we explore a graph-based description of the white matter developed from diffusion MRI data, which is capable of encoding the anisotropy of the domain. Based on this representation we design localized spatial filters that adapt to white matter structure by leveraging graph signal processing principles. The performance of the proposed filtering technique is evaluated on semi-synthetic data, where it shows potential for greater sensitivity and specificity in white matter activation mapping, compared to isotropic filtering.

09:30-09:45, Paper MoAaO1.3
A Network-Based Approach to Study of ADHD Using Tensor Decomposition of Resting State fMRI Data

Li, Jian | University of Southern California |
Joshi, Anand | University of Southern California |
Leahy, Richard | USC |
Keywords: fMRI analysis, Classification, Brain
Abstract: Identifying changes in functional connectivity in Attention Deficit Hyperactivity Disorder (ADHD) using functional magnetic resonance imaging (fMRI) can help us understand the neural substrates of this brain disorder. Many studies of ADHD using resting state fMRI (rs-fMRI) data have been conducted in the past decade with either manually crafted features that do not yield satisfactory performance, or automatically learned features that often lack interpretability. In this work, we present a tensor-based approach to identify brain networks and extract features from rs-fMRI data. Results show the identified networks are interpretable and consistent with our current understanding of ADHD conditions. The extracted features are not only predictive of ADHD score but also discriminative for classification of ADHD subjects from typically developed children.

09:45-10:00, Paper MoAaO1.4
Dynamics of Brain Activity Captured by Graph Signal Processing of Neuroimaging Data to Predict Human Behaviour

Bolton, Thomas | EPFL |
Van De Ville, Dimitri | EPFL & UniGE |
Keywords: fMRI analysis, Brain, Machine learning
Abstract: Joint structural and functional modelling of the brain based on multimodal imaging increasingly shows potential in elucidating the underpinnings of human cognition. In the graph signal processing (GSP) approach to neuroimaging, brain activity patterns are viewed as graph signals expressed on the structural brain graph built from anatomical connectivity. The fraction of energy in functional signals that are in line with structure (termed alignment) relative to those that are not (liberality) has been linked to behaviour. Here, we examine whether there is also information of interest at the level of temporal fluctuations of alignment and liberality. We consider the prediction of an array of behavioural scores, and show that in many cases a dynamic characterisation yields additional significant insight.
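The alignment/liberality split described here has a compact form: expand each activity pattern in the eigenvector basis of the structural graph Laplacian (the graph Fourier basis) and measure the energy fractions below and above a spectral cut-off. A toy sketch, where the graph, signal, and the cut-off `k` are our own illustrative choices:

```python
import numpy as np

# Toy structural connectome: symmetric weighted adjacency A, activity signal x on nodes.
rng = np.random.default_rng(0)
A = rng.random((8, 8)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(1)) - A                 # combinatorial graph Laplacian
evals, U = np.linalg.eigh(L)              # eigenvectors = graph Fourier basis

x = rng.standard_normal(8)                # one fMRI activity pattern
xhat = U.T @ x                            # graph Fourier transform of the pattern
k = 4                                     # cut-off between low and high graph frequencies
aligned = np.sum(xhat[:k] ** 2) / np.sum(xhat ** 2)   # energy in line with structure
liberal = np.sum(xhat[k:] ** 2) / np.sum(xhat ** 2)   # energy "against" structure
print(aligned + liberal)                  # fractions sum to 1 (Parseval, U orthogonal)
```

Tracking `aligned` over a sliding window of fMRI frames would give the temporal fluctuations the abstract refers to.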

10:00-10:15, Paper MoAaO1.5
Deep Variational Autoencoder for Modeling Functional Brain Networks and ADHD Identification

Qiang, Ning | Shaanxi Normal University |
Dong, Qinglin | University of Georgia |
Sun, Yifei | Shaanxi Normal University |
Ge, Bao | Shaanxi Normal University |
Liu, Tianming | University of Georgia |
Keywords: Functional imaging (e.g. fMRI), Brain, Data Mining
Abstract: In the neuroimaging and brain mapping communities, researchers have proposed a variety of computational methods and tools to learn functional brain networks (FBNs). Recently, deep learning has been shown to apply to fMRI data with representation power superior to traditional machine learning methods. However, owing to the high dimensionality of fMRI volumes, deep learning suffers from a lack of data and from overfitting. Generative models are known for their intrinsic ability to model small datasets, and in this work a deep variational autoencoder (DVAE) is proposed to tackle the challenge of insufficient data and incomplete supervision. The FBNs learned from fMRI were found to be interpretable and meaningful, and DVAE was shown to outperform traditional models on neuroimaging data. In an evaluation on the ADHD200 dataset, DVAE achieved excellent classification accuracy on 4 sites.

10:15-10:30, Paper MoAaO1.6
Spectral Characterization of Functional MRI Data on Voxel-Resolution Cortical Graphs

Behjat, Hamid | Lund University |
Larsson, Martin | Lund University, Combain Mobile AB |
Keywords: fMRI analysis, image filtering (e.g. mathematical morphology, wavelets,...), Brain
Abstract: The human cortical layer exhibits a convoluted morphology that is unique to each individual. Conventional volumetric fMRI processing schemes take for granted the rich information provided by the underlying anatomy. We present a method to study fMRI data on subject-specific cerebral hemisphere cortex (CHC) graphs, which encode the cortical morphology at the resolution of voxels. We study graph spectral energy metrics associated to fMRI data of 100 subjects from the Human Connectome Project database, across seven tasks. Experimental results signify the strength of CHC graphs' Laplacian eigenvector bases in capturing subtle spatial patterns specific to different functional loads as well as experimental conditions within each task.

MoAaO2 Oral Session, Oakdale III
Disease Quantification and Surgical Planning

Co-Chair: Kim, Namkug | Asan Medical Center

09:00-09:15, Paper MoAaO2.1
Jointly Analyzing Alzheimer's Disease Related Structure-Function Using Deep Cross-Model Attention Network

Zhang, Lu | The University of Texas at Arlington |
Wang, Li | University of Texas at Arlington, Department of Mathematics |
Zhu, Dajiang | University of Texas at Arlington |
Keywords: Brain, Multi-modality fusion
Abstract: Reversing the pathology of Alzheimer's disease (AD) has so far not been possible; a more tractable approach may be to intervene at an earlier stage, such as mild cognitive impairment (MCI), which is considered the precursor of AD. Recent advances in deep learning have triggered a new era in AD/MCI classification, and a variety of deep models and algorithms have been developed to classify multiple clinical groups (e.g. aged normal control - CN vs. MCI) and AD conversion. Unfortunately, it is still largely unknown what the relationship is between altered functional connectivity and the structural connectome at the individual level. In this work, we introduce a deep cross-model attention network (DCMAT) to jointly model brain structure and function. Specifically, DCMAT is composed of one RNN (Recurrent Neural Network) layer and multiple graph attention (GAT) blocks, which can effectively represent disease-specific functional dynamics on an individual structural network. The designed attention layer (in the GAT block) aims to learn deep relations among different brain regions when differentiating MCI from CN. The proposed DCMAT shows promising classification performance compared to recent studies. More importantly, our results suggest that MCI-related functional interactions might go beyond directly connected brain regions.

09:15-09:30, Paper MoAaO2.2
Pan-Cancer Prognosis Prediction Using Multimodal Deep Learning

Vale Silva, Luis Andre | Heidelberg University |
Rohr, Karl | Heidelberg University, DKFZ Heidelberg |
Keywords: Multi-modality fusion, Machine learning, Molecular and cellular screening
Abstract: In the age of precision medicine, cancer prognosis assessment from high-dimensional multimodal data requires powerful computational methods. We present an end-to-end multimodal deep learning method, named MultiSurv, for automatic patient risk prediction across a large group of 33 cancer types. The method leverages histopathology microscopy slides combined with tabular clinical information and different types of high-throughput sequencing and microarray molecular data. MultiSurv achieves high predictive performance across all cancer types after training on different combinations of input data modalities, and it handles missing data seamlessly. MultiSurv thus has the potential to integrate the wide variety of available patient data and assist physicians with cancer patient prognosis.

09:30-09:45, Paper MoAaO2.3
Learning Amyloid Pathology Progression from Longitudinal PiB-PET Images in Preclinical Alzheimer's Disease

Hao, Wei | University of Wisconsin-Madison |
Vogt, Nicholas | University of Wisconsin-Madison |
Meng, Zi Hang | University of Wisconsin-Madison |
Hwang, Seong Jae | University of Pittsburgh |
Koscik, Rebecca | University of Wisconsin-Madison |
Johnson, Sterling C. | University of Wisconsin - Madison |
Bendlin, Barbara | University of Wisconsin - Madison |
Singh, Vikas | University of Wisconsin-Madison |
Keywords: Brain, Nuclear imaging (e.g. PET, SPECT)
Abstract: Amyloid accumulation is acknowledged to be a primary pathological event in Alzheimer's disease (AD). The literature suggests that propagation of amyloid occurs along neural pathways as a function of the disease process (prion-like transmission), but the pattern of spread in the preclinical stages of AD is still poorly understood. Previous studies have used diffusion processes to capture amyloid pathology propagation using various strategies and shown how future time-points can be predicted at the group level using a population-level structural connectivity template. But connectivity could be different between distinct subjects, and the current literature is unable to provide estimates of individual-level pathology propagation. We use a trainable network diffusion model that infers the propagation dynamics of amyloid pathology, conditioned on an individual-level connectivity network. We analyze longitudinal amyloid pathology burden in 16 gray matter (GM) regions known to be affected by AD, measured using Pittsburgh Compound B (PiB) positron emission tomography at 3 different time points for each subject. Experiments show that our model outperforms inference based on group-level trends for predicting future time points data (using individual-level connectivity networks). For group-level analysis, we find parameter differences (via permutation testing) between the models for APOE positive and APOE negative subjects.
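The network-diffusion idea this work builds on can be written in one line: pathology burden x evolves as dx/dt = -β L x on the connectivity graph, with the closed-form solution x(t) = exp(-β L t) x(0). A hedged sketch of that baseline on a 3-region toy network (this is the standard diffusion model, not the authors' trainable variant):

```python
import numpy as np
from scipy.linalg import expm

def propagate(x0, A, beta, t):
    """Network-diffusion model dx/dt = -beta * L x on a connectivity graph,
    solved in closed form as x(t) = expm(-beta * L * t) @ x0."""
    L = np.diag(A.sum(1)) - A             # graph Laplacian of the connectome
    return expm(-beta * L * t) @ x0

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)          # 3-region chain connectome
x0 = np.array([1.0, 0.0, 0.0])            # pathology seeded in region 0
xt = propagate(x0, A, beta=0.5, t=2.0)
print(xt)                                 # burden spreads to connected regions;
                                          # total burden is conserved (L @ 1 = 0)
```

Conditioning β (or the whole propagator) on an individual's connectivity network, as the abstract describes, amounts to fitting this dynamic per subject instead of per group.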

09:45-10:00, Paper MoAaO2.4
Patient-Specific Finetuning of Deep Learning Models for Adaptive Radiotherapy in Prostate CT

Elmahdy, Mohamed S. | Leiden University Medical Center |
Ahuja, Tanuj | Computer Science and Engineering, Guru Gobind Singh Indraprastha |
van der Heide, Uulke A. | The Netherlands Cancer Institute, Amsterdam, the Netherlands |
Staring, Marius | LUMC |
Keywords: Image segmentation, Prostate, Computed tomography (CT)
Abstract: Contouring of the target volume and Organs-At-Risk (OARs) is a crucial step in radiotherapy treatment planning. In an adaptive radiotherapy setting, updated contours need to be generated based on daily imaging. In this work, we leverage personalized anatomical knowledge accumulated over the treatment sessions to improve the segmentation accuracy of a pre-trained Convolutional Neural Network (CNN) for a specific patient. We investigate a transfer learning approach, fine-tuning the baseline CNN model to a specific patient based on imaging acquired in earlier treatment fractions. The baseline CNN model is trained on a prostate CT dataset from one hospital of 379 patients. This model is then fine-tuned and tested on an independent dataset from another hospital of 18 patients, each having 7 to 10 daily CT scans. For the prostate, seminal vesicles, bladder and rectum, the model fine-tuned on each specific patient achieved a Mean Surface Distance (MSD) of 1.64±0.43 mm, 2.38±2.76 mm, 2.30±0.96 mm, and 1.24±0.89 mm, respectively, which was significantly better than the baseline model. The proposed personalized model adaptation is therefore very promising for clinical implementation in the context of adaptive radiotherapy of prostate cancer.

10:00-10:15, Paper MoAaO2.5
Automatic Quantification of Pulmonary Fissure Integrity: A Repeatability Analysis

Althof, Zachary | University of Iowa |
Gerard, Sarah E. | Brigham Women's Hospital and Harvard Medical School |
Pan, Yue | University of Iowa |
Christensen, Gary E. | The University of Iowa |
Hoffman, Eric | University of Iowa |
Reinhardt, Joseph M. | The University of Iowa |
Keywords: Computed tomography (CT), Lung, Quantification and estimation
Abstract: The pulmonary fissures divide the lungs into lobes and can vary widely in shape, appearance, and completeness. Fissure completeness, or integrity, has been studied to assess relationships with airway function measurements, chronic obstructive pulmonary disease (COPD) progression, and collateral ventilation between lobes. Fissure integrity measured from computed tomography (CT) images is already used as a non-invasive method to screen emphysema patients for endobronchial valve treatment, as the procedure is not effective when collateral ventilation is present. We describe a method for automatically computing fissure integrity from lung CT images. Our method is tested using 60 subjects from a COPD study. We examine the repeatability of fissure integrity measurements across inspiration and expiration images, assess changes in fissure integrity over time using a longitudinal dataset, and explore fissure integrity's relationship with COPD severity.

10:15-10:30, Paper MoAaO2.6
Spectral Data Augmentation Techniques to Quantify Lung Pathology from CT-Images

Kayal, Subhradeep | Erasmus MC |
Dubost, Florian | Erasmus MC - University Medical Center Rotterdam |
Tiddens, Harm | Erasmus MC |
de Bruijne, Marleen | Erasmus MC - University Medical Center Rotterdam |
Keywords: Lung, Computed tomography (CT), Image segmentation
Abstract: Data augmentation is of paramount importance in biomedical image processing tasks, characterized by inadequate amounts of labelled data, to best use all of the data that is present. In-use techniques range from intensity transformations and elastic deformations, to linearly combining existing data points to make new ones. In this work, we propose the use of spectral techniques for data augmentation, using the discrete cosine and wavelet transforms. We empirically evaluate our approaches on a CT texture analysis task to detect abnormal lung-tissue in patients with cystic fibrosis. Empirical experiments show that the proposed spectral methods perform favourably as compared to the existing methods. When used in combination with existing methods, our proposed approach can increase the relative minor class segmentation performance by 44.1% over a simple replication baseline.
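One plausible instance of such a spectral augmentation is to jitter an image's discrete cosine transform coefficients and invert the transform; the function name, the multiplicative-noise scheme, and the `scale` parameter below are our own illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_augment(img, scale=0.05, rng=None):
    """Spectral augmentation sketch: perturb the image's 2D DCT coefficients
    with small multiplicative Gaussian noise, then transform back."""
    rng = rng or np.random.default_rng()
    coeffs = dctn(img, norm='ortho')                       # to the spectral domain
    coeffs *= 1.0 + scale * rng.standard_normal(coeffs.shape)
    return idctn(coeffs, norm='ortho')                     # back to the image domain

rng = np.random.default_rng(0)
img = rng.random((16, 16))                                 # stand-in for a CT patch
aug = dct_augment(img, scale=0.05, rng=rng)
print(np.abs(aug - img).mean())                            # small, texture-level change
```

The same pattern works with a wavelet transform in place of the DCT, which is the other transform the abstract mentions.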

MoAaO3 Oral Session, Oakdale IV-V
Enhancement, Denoising, Deconvolution

Chair: Blanc-Feraud, Laure | Université Nice Sophia Antipolis, Laboratoire I3S, CNRS, INRIA
Co-Chair: Obara, Boguslaw | University of Durham

09:00-09:15, Paper MoAaO3.1
Restoration of Marker Occluded Hematoxylin and Eosin Stained Whole Slide Histology Images Using Generative Adversarial Networks

Venkatesh, Bairavi | Merck & Co |
Shah, Tosha | Merck |
Chen, Antong | Merck & Co., Inc |
Ghafurian, Soheil | Merck & Co |
Keywords: Image enhancement/restoration(noise and artifact reduction), Histopathology imaging (e.g. whole slide imaging), Machine learning
Abstract: It is common for pathologists to annotate specific regions of the tissue, such as tumor, directly on the glass slide with markers. Although this practice was helpful prior to the advent of histology whole slide digitization, it often occludes important details which are increasingly relevant to immuno-oncology due to recent advancements in digital pathology imaging techniques. The current work uses a generative adversarial network with cycle loss to remove these annotations while still maintaining the underlying structure of the tissue by solving an image-to-image translation problem. We train our network on up to 300 whole slide images with marker inks and show that 70% of the corrected image patches are indistinguishable from originally uncontaminated tissue to a human expert. This portion increases to 97% when we replace the human expert with a deep residual network. We demonstrate the fidelity of the method to the original image by calculating the correlation between image gradient magnitudes. We observed a revival of up to 94,000 nuclei per slide in our dataset, the majority of which were located on tissue borders.

09:15-09:30, Paper MoAaO3.2
Metal Artifact Reduction and Intra Cochlear Anatomy Segmentation in CT Images of the Ear with a Multi-Resolution Multi-Task 3D Network

Wang, Jianing | Vanderbilt University |
Noble, Jack | Vanderbilt University |
Dawant, Benoit | Vanderbilt University |
Keywords: Image segmentation, Image synthesis, Machine learning
Abstract: Segmenting the intra-cochlear anatomy structures (ICAs) in post-implantation CT (Post-CT) images of cochlear implant (CI) recipients is challenging due to the strong artifacts produced by the metallic CI electrodes. We propose a multi-resolution multi-task deep network which synthesizes an artifact-free image and segments the ICAs in the Post-CT images simultaneously. The output size of the synthesis branch is 1/64 of that of the segmentation branch. This reduces the memory usage for training, while still generating segmentation labels at a high resolution. In this preliminary study, we use the segmentation results of an automatic method as the ground truth to provide supervision for training our model, and we achieve a median Dice index of 0.792. Our experiments also confirm the usefulness of multi-task learning.

09:30-09:45, Paper MoAaO3.3
Combining Multimodal Information for Metal Artefact Reduction: An Unsupervised Deep Learning Framework

Ranzini, Marta | University College London |
Groothuis, Irme | School of Biomedical & Imaging Sciences, King's College London |
Kläser, Kerstin | Medical Physics and Biomedical Engineering Department, Universit |
Cardoso, Manuel Jorge | University College London |
Henckel, Johann | Royal National Orthopaedic Hospital NHS Trust |
Ourselin, Sebastien | University College London |
Hart, Alister | Royal National Orthopaedic Hospital NHS Trust |
Modat, Marc | King's College London |
Keywords: Image enhancement/restoration(noise and artifact reduction), Computed tomography (CT), Magnetic resonance imaging (MRI)
Abstract: Metal artefact reduction (MAR) techniques aim at removing metal-induced noise from clinical images. In Computed Tomography (CT), supervised deep learning approaches have been shown effective but limited in generalisability, as they mostly rely on synthetic data. In Magnetic Resonance Imaging (MRI) instead, no method has yet been introduced to correct the susceptibility artefact, still present even in MAR-specific acquisitions. In this work, we hypothesise that a multimodal approach to MAR would improve both CT and MRI. Given their different artefact appearance, their complementary information can compensate for the corrupted signal in either modality. We thus propose an unsupervised deep learning method for multimodal MAR. We introduce the use of Locally Normalised Cross Correlation as a loss term to encourage the fusion of multimodal information. Experiments show that our approach favours a smoother correction in the CT, while promoting signal recovery in the MRI.

09:45-10:00, Paper MoAaO3.4
Deconvolution for Improved Multifractal Characterization of Tissues in Ultrasound Imaging

Wendt, Herwig | CNRS, University of Toulouse |
Hourani, Mohamad | University of Toulouse, IRIT/INP-ENSEEIHT |
Basarab, Adrian | Université De Toulouse |
Kouamé, Denis | Université De Toulouse III, IRIT UMR CNRS 5505 |
Keywords: Ultrasound, Computational Imaging, Probabilistic and statistical models & methods
Abstract: Several existing studies have shown the value of estimating the multifractal properties of tissues in ultrasound (US) imaging. However, US images carry information not only about the tissue but also about the US scanner. Deconvolution methods are a common way to restore the tissue reflectivity function, but, to our knowledge, their impact on estimated fractal or multifractal behavior has not been studied yet. The objective of this paper is to investigate this influence through a dedicated simulation pipeline and an in vivo experiment.

10:00-10:15, Paper MoAaO3.5
A Physics-Motivated DNN for X-Ray CT Scatter Correction

Iskender, Berk | University of Illinois at Urbana-Champaign |
Bresler, Yoram | University of Illinois at Urbana-Champaign |
Keywords: Computed tomography (CT), Machine learning, Computational Imaging
Abstract: The scattering of photons by the imaged object in X-ray computed tomography (CT) produces degradations of the reconstructions in the form of streaks, cupping, shading artifacts and decreased contrast. We describe a new physics-motivated deep-learning-based method to estimate scatter and correct for it in the acquired projection measurements. The method incorporates both an initial reconstruction and the scatter-corrupted measurements using a specific deep neural network architecture and a cost function tailored to the problem. Numerical experiments show significant improvement over a recent projection-based deep neural network method.

10:15-10:30, Paper MoAaO3.6
Multi-Cycle-Consistent Adversarial Networks for CT Image Denoising

Liu, Jinglan | University of Notre Dame |
Ding, Yukun | University of Notre Dame |
Xiong, Jinjun | IBM Thomas J. Watson Research Center |
Jia, Qianjun | Guangdong General Hospital |
Huang, Meiping | Department of Catheterization Lab, Guangdong Cardiovascular Inst |
Zhuang, Jian | Department of Cardiac Surgery, Guangdong Cardiovascular Institut |
Xie, Bike | Kneron |
Liu, Chun-Chen | Kneron |
Shi, Yiyu | University of Notre Dame |
Keywords: Machine learning, Image enhancement/restoration(noise and artifact reduction), Computed tomography (CT)
Abstract: CT image denoising can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistency loss without the need for paired training data. Our detailed analysis of CCADN raises a number of interesting questions. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency? Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency. The global cycle-consistency couples all generators together to model the whole denoising process, while the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms the state-of-the-art.

MoAbPo Poster Session, Oakdale Foyer Coral Foyer
Monday Poster AM

10:30-12:00, Subsession MoAbPo-01, Oakdale Foyer Coral Foyer
CT Reconstruction Methods Poster Session, 9 papers

10:30-12:00, Subsession MoAbPo-02, Oakdale Foyer Coral Foyer
Image Quantification for Visualization and Surgical Planning Poster Session, 5 papers

10:30-12:00, Subsession MoAbPo-03, Oakdale Foyer Coral Foyer
Image Registration Poster Session, 10 papers

10:30-12:00, Subsession MoAbPo-04, Oakdale Foyer Coral Foyer
Bone and Skeletal Imaging Poster Session, 6 papers

10:30-12:00, Subsession MoAbPo-05, Oakdale Foyer Coral Foyer
Brain Segmentation and Characterization II Poster Session, 8 papers

10:30-12:00, Subsession MoAbPo-06, Oakdale Foyer Coral Foyer
Lung, Chest, and Airways Image Analysis I Poster Session, 5 papers

10:30-12:00, Subsession MoAbPo-07, Oakdale Foyer Coral Foyer
Heart Imaging and Analysis I Poster Session, 7 papers

10:30-12:00, Subsession MoAbPo-08, Oakdale Foyer Coral Foyer
Image Enhancement, Denoising, Deconvolution Poster Session, 7 papers

10:30-12:00, Subsession MoAbPo-09, Oakdale Foyer Coral Foyer
Tracking and Motion Estimation in Microscopy Poster Session, 4 papers

10:30-12:00, Subsession MoAbPo-10, Oakdale Foyer Coral Foyer
Abstract Posters: Brain Connectivity and Functional Imaging Poster Session, 7 papers

10:30-12:00, Subsession MoAbPo-11, Oakdale Foyer Coral Foyer
Abstract Posters: Clinical Applications and Biomedical Modeling Poster Session, 5 papers

MoAbPo-01 Poster Session, Oakdale Foyer Coral Foyer
CT Reconstruction Methods

Chair: Peyrin, Francoise | Université De Lyon, CNRS UMR 5220, INSERM U1206, INSA Lyon
Co-Chair: Bresler, Yoram | University of Illinois at Urbana-Champaign

10:30-12:00, Paper MoAbPo-01.1
A Completion Network for Reconstruction from Compressed Acquisition

Ducros, Nicolas | INSA Lyon, CREATIS |
Lorente Mur, Antonio | INSA Lyon, CREATIS |
Peyrin, Francoise | Université De Lyon, CNRS UMR 5220, INSERM U1206, INSA Lyon |
Keywords: Computational Imaging, Image reconstruction - analytical & iterative methods, Machine learning
Abstract: We consider here the problem of reconstructing an image from a few linear measurements. This problem has many biomedical applications, such as computerized tomography, magnetic resonance imaging and optical microscopy. While this problem has long been solved by compressed sensing methods, these are now outperformed by deep-learning approaches. However, understanding why a given network architecture works well is still an open question. In this study, we propose to interpret the reconstruction problem as a Bayesian completion problem in which the missing measurements are estimated from those acquired. From this point of view, a network emerges that includes a fully connected layer providing the best linear completion scheme. This network has far fewer parameters to learn than direct networks, and it trains more rapidly than image-domain networks that correct pseudo-inverse solutions. Although this study focuses on computational optics, it may provide insight for inverse problems that have similar formulations.
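The "best linear completion" described here corresponds to the conditional-mean estimator for jointly Gaussian measurements: the missing block y2 is predicted from the acquired block y1 as Sigma21 @ inv(Sigma11) @ y1. A small sketch under that zero-mean Gaussian assumption (dimensions and variable names are our own illustrative choices):

```python
import numpy as np

# Best linear completion of missing measurements from acquired ones:
# for zero-mean jointly Gaussian y = [y1; y2], E[y2 | y1] = Sigma21 @ inv(Sigma11) @ y1.
rng = np.random.default_rng(0)
n, n1 = 6, 3                                  # total / acquired measurement counts
B = rng.standard_normal((n, n))
Sigma = B @ B.T                               # covariance of the full measurement vector
S11, S21 = Sigma[:n1, :n1], Sigma[n1:, :n1]   # acquired-acquired and missing-acquired blocks
W = S21 @ np.linalg.inv(S11)                  # completion matrix (one dense layer's worth
                                              # of parameters, as the abstract suggests)

y = B @ rng.standard_normal(n)                # one full measurement vector (cov. Sigma)
y1 = y[:n1]                                   # the part actually acquired
y2_hat = W @ y1                               # linear estimate of the missing part
print(y2_hat.shape)                           # (3,)
```

In the network view, `W` is not computed from a known covariance but learned from training data, which is what makes the fully connected completion layer interpretable.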

10:30-12:00, Paper MoAbPo-01.2
Gram Filtering and Sinogram Interpolation for Pixel-Basis in Parallel-Beam X-Ray CT Reconstruction

Shu, Ziyu | University of Florida |
Entezari, Alireza | University of Florida |
Keywords: Computed tomography (CT), X-ray imaging, Image reconstruction - analytical & iterative methods
Abstract: Forward and back projection are the key operations in parallel-beam X-ray CT, but their computational burden continues to be an obstacle for applications. We propose a method to improve the performance of related algorithms by calculating the Gram filter exactly and interpolating the sinogram signal optimally. In addition, the detector blur effect can be included in our model efficiently. The improvements in speed and quality for back projection and iterative reconstruction are shown in our experiments on both analytical phantoms and real CT images.

10:30-12:00, Paper MoAbPo-01.3
A New Spatially Adaptive TV Regularization for Digital Breast Tomosynthesis

Sghaier, Maissa | CVN, CentraleSupélec, Inria, Univ. Paris Saclay, France |
Chouzenoux, Emilie | Ligm - Cnrs |
Pesquet, Jean-Christophe | CentraleSupélec, INRIA Saclay, University Paris Saclay |
Muller, Serge | GE Healthcare |
Keywords: X-ray imaging, Breast, Image reconstruction - analytical & iterative methods
Abstract: Digital breast tomosynthesis images provide volumetric morphological information of the breast, helping physicians to detect malign lesions. In this work, we propose a new spatially adaptive total variation (SATV) regularization function that adequately preserves the shape of small objects, such as microcalcifications, while ensuring a high-quality restoration of the background tissues. First, an original formulation of the weighted gradient field is introduced that efficiently incorporates prior knowledge on the location of small objects. We then derive our SATV regularization and integrate it into a novel 3D reconstruction approach for DBT. Experimental results on both phantom and clinical data show the benefit of our method for the recovery of DBT volumes containing small lesions.
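The core object here, a spatially weighted total variation, can be sketched directly: down-weighting the TV penalty where small objects sit lowers the cost of keeping them sharp. A minimal 2D illustration (the weight map and geometry are our own toy choices, not the paper's weighted gradient field):

```python
import numpy as np

def weighted_tv(x, w):
    """Spatially weighted isotropic total variation: sum_i w_i * ||grad x||_i.
    Small weights near small objects (e.g. microcalcifications) relax the
    smoothing penalty there while the background keeps full regularization."""
    gx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal forward differences
    gy = np.diff(x, axis=0, append=x[-1:, :])   # vertical forward differences
    return np.sum(w * np.sqrt(gx ** 2 + gy ** 2))

x = np.zeros((8, 8)); x[4, 4] = 1.0             # one bright "microcalcification"
w_uniform = np.ones_like(x)                     # plain TV
w_adaptive = np.ones_like(x)
w_adaptive[3:6, 3:6] = 0.1                      # down-weight around the detected object
print(weighted_tv(x, w_adaptive) < weighted_tv(x, w_uniform))  # True
```

Inside a reconstruction, the adaptive weights make the optimizer pay less for the sharp edges of small lesions than plain TV would.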

10:30-12:00, Paper MoAbPo-01.4
Adaptive Prior Patch Size Based Sparse-View CT Reconstruction Algorithm

Zhang, Xinzhen | School of Biomedical Engineering, Shanghai Jiao Tong University |
Zhou, Yufu | Shanghai Jiao Tong University |
Zhang, Weikang | Shanghai Jiao Tong University |
Sun, Jianqi | Shanghai Jiao Tong University |
Zhao, Jun | Shanghai Jiao Tong University |
Keywords: Image reconstruction - analytical & iterative methods, Computed tomography (CT), Compressive sensing & sampling
Abstract: Compressed sensing (CS) reconstruction methods employing sparsity regularization and prior constraints are successfully applied in sparse-view computed tomography (CT) reconstruction and yield high-quality images compared with other low-dose imaging methods. In this paper, we propose an adaptive prior patch size (APPS) strategy for sparse-view CT reconstruction. The method adopts sparse representation (SR) with an adaptive patch size, instead of a constant one, to synthesize a prior image of higher quality, because the optimal patch size varies with the distribution range of each local feature. Simulation experiments show that the proposed strategy outperforms the fixed-patch-size method in terms of artifact reduction and edge preservation.
|
|
10:30-12:00, Paper MoAbPo-01.5 | Add to My Program |
Unsupervised Cone-Beam Artifact Removal Using CycleGAN and Spectral Blending for Adaptive Radiotherapy |
|
Park, Sangjoon | Korea Advanced Institute of Science and Technology |
Ye, Jong Chul | Korea Advanced Inst of Science & Tech |
Keywords: Computed tomography (CT), Radiation therapy, planning and treatment, Machine learning
Abstract: Cone-beam computed tomography (CBCT) used in radiotherapy (RT) has the advantage of being acquired daily, but is difficult to use for purposes other than patient setup because of its poor image quality compared to fan-beam computed tomography (CT). Although several methods, including deformable image registration, have been proposed to improve the quality of CBCT, the outcomes have not yet been satisfactory. Recently, deep learning has been shown to produce high-quality results for various image-to-image translation tasks, suggesting that it could be an effective tool for converting CBCT into CT. In the field of RT, however, it may not always be possible to obtain paired datasets consisting of exactly matching CBCT and CT images. This study aimed to develop a novel, unsupervised deep-learning algorithm, which requires only unpaired CBCT and fan-beam CT images, to remove the cone-beam artifact and thereby improve the quality of CBCT. Specifically, two cycle-consistency generative adversarial networks (CycleGAN) were trained in the sagittal and coronal directions, and the generated results along those directions were then combined using a spectral blending technique. To evaluate our method, we applied it to the American Association of Physicists in Medicine dataset. The experimental results show that our method outperforms the existing CycleGAN-based method both qualitatively and quantitatively.
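A minimal sketch of what spectral blending of two directional outputs can look like: combine the two volumes in the Fourier domain with complementary masks that sum to one. The Butterworth-style mask shape and the cutoff value are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def spectral_blend(vol_a, vol_b, cutoff=0.25, order=4):
    """Blend two volumes in the Fourier domain: a low-pass mask keeps the
    low frequencies of vol_a and the complementary mask keeps the high
    frequencies of vol_b (simplified sketch of spectral blending)."""
    Fa = np.fft.fftn(vol_a)
    Fb = np.fft.fftn(vol_b)
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in vol_a.shape], indexing="ij")
    r = np.sqrt(sum(g**2 for g in grids))              # normalized radial frequency
    low = 1.0 / (1.0 + (r / cutoff) ** (2 * order))    # low-pass mask in (0, 1]
    blended = low * Fa + (1.0 - low) * Fb              # masks sum to 1 everywhere
    return np.real(np.fft.ifftn(blended))
```

Because the two masks sum to one, blending a volume with itself returns the volume unchanged, which is a useful sanity check.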
|
|
10:30-12:00, Paper MoAbPo-01.6 | Add to My Program |
Cone-Angle Artifact Removal Using Differentiated Backprojection Domain Deep Learning |
|
Kim, Junyoung | Korea Advanced Inst of Science & Tech |
Han, Yoseob | Los Alamos National Laboratory (LANL) |
Ye, Jong Chul | Korea Advanced Inst of Science & Tech |
Keywords: Computed tomography (CT), Image enhancement/restoration(noise and artifact reduction), Machine learning
Abstract: For circular-trajectory cone-beam CT, the Feldkamp, Davis, and Kress (FDK) algorithm is widely used for reconstruction. However, cone-angle artifacts severely degrade the image quality of this algorithm. There are several model-based iterative reconstruction methods for cone-angle artifact removal, but these algorithms usually require repeated applications of computationally expensive forward and backward projections. In this paper, we propose a novel deep learning approach for cone-angle artifact removal in the differentiated backprojection domain, which performs a data-driven inversion of an ill-posed deconvolution problem related to the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined by a spectral blending technique to minimize spectral leakage. Experimental results show that our method provides superior performance to the existing methods.
|
|
10:30-12:00, Paper MoAbPo-01.7 | Add to My Program |
A List-Mode OSEM-Based Attenuation and Scatter Compensation Method for SPECT
|
Rahman, Md Ashequr | Washington University in St. Louis |
Laforest, Richard | Washington University Medical School |
Jha, Abhinav | Washington University in St. Louis |
Keywords: Nuclear imaging (e.g. PET, SPECT), Image reconstruction - analytical & iterative methods, Brain
Abstract: Reliable attenuation and scatter compensation (ASC) is a prerequisite for quantification tasks and beneficial for visual interpretation tasks in single-photon emission computed tomography (SPECT) imaging. For this purpose, we develop a SPECT reconstruction method that uses the entire SPECT emission data, i.e., data in both the photopeak and scattered windows, acquired in list-mode format, to perform ASC. Further, the method uses the energy attribute of the detected photons while performing the reconstruction. We implemented a GPU version of this method using an ordered-subsets expectation-maximization (OSEM) algorithm for faster convergence and quicker computation. The performance of the method was objectively evaluated using realistic simulation studies on the task of estimating activity uptake in the caudate, putamen, and globus pallidus regions of the brain in a dopamine transporter (DaT)-SPECT study. The method yielded improved performance in terms of bias, variance, and mean square error compared to existing ASC techniques in quantifying activity in all three regions. Overall, the results provide promising evidence of the potential of the proposed technique for ASC in SPECT imaging.
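For readers unfamiliar with OSEM, the classical binned update it generalizes can be sketched on a toy linear model y ≈ A x. This is the textbook ordered-subsets EM iteration, not the authors' list-mode, energy-windowed GPU implementation; the subset scheme and iteration counts are illustrative:

```python
import numpy as np

def osem(y, A, n_iter=10, n_subsets=2):
    """Toy ordered-subsets EM for nonnegative x with y ~ Poisson(A x):
    each subset update is x <- x * (A_s^T (y_s / A_s x)) / (A_s^T 1)."""
    m, n = A.shape
    x = np.ones(n)                                   # positive initial estimate
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            sens = As.sum(axis=0)                    # subset sensitivity A_s^T 1
            ratio = y[idx] / np.maximum(As @ x, 1e-12)
            x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative form keeps the estimate nonnegative at every step, one reason EM-type updates are standard in emission tomography.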
|
|
10:30-12:00, Paper MoAbPo-01.8 | Add to My Program |
Image-Domain Material Decomposition Using an Iterative Neural Network for Dual-Energy CT |
|
Li, Zhipeng | Shanghai Jiao Tong University |
Chun, Il Yong | University of Hawaii at Manoa |
Long, Yong | Shanghai Jiao Tong University |
Keywords: Computed tomography (CT), Computational Imaging
Abstract: Image-domain material decomposition is susceptible to noise and artifacts in dual-energy CT (DECT) attenuation images. To obtain high-quality material images from DECT, data-driven methods are attracting widespread attention. Iterative neural network (INN) approaches have achieved high image reconstruction quality and low generalization error in several inverse imaging problems. BCD-Net is an INN whose architecture is constructed by generalizing a block coordinate descent (BCD) algorithm that solves model-based image reconstruction using learned convolutional regularizers. We propose a new INN architecture for DECT material decomposition by replacing the model-based image reconstruction module of BCD-Net with a model-based image decomposition (MBID) module. Experiments with the extended cardiac-torso (XCAT) phantom and patient data show that the proposed method greatly improves image decomposition quality compared to a conventional MBID method using an edge-preserving hyperbola regularizer and a state-of-the-art learned MBID method that uses different pre-learned sparsifying transforms for different materials.
|
|
10:30-12:00, Paper MoAbPo-01.9 | Add to My Program |
Digital Breast Tomosynthesis Reconstruction with Deep Neural Network for Improved Contrast and In-Depth Resolution |
|
Wu, Dufan | Massachusetts General Hospital and Harvard Medical School |
Kim, Kyungsang | Massachusetts General Hospital and Harvard Medical School |
Li, Quanzheng | Harvard Medical School, Massachusetts General Hospital |
Keywords: Image reconstruction - analytical & iterative methods, Machine learning, Breast
Abstract: Digital breast tomosynthesis (DBT) provides 3D reconstruction, which reduces the superposition and overlapping of breast tissues compared to mammography, leading to increased sensitivity and specificity. However, due to the limited angular sampling, DBT images still suffer from severe artifacts and limited in-depth resolution. In this paper, we propose a deep learning-based DBT reconstruction method to mitigate the limited-angle artifacts and improve in-depth resolution. An unrolled neural network was used with decoupled training for each unroll to reduce training-time computational cost. A novel region-of-interest loss on inserted microcalcifications was further proposed to improve the spatial resolution and contrast of the microcalcifications. The network was trained and tested on 176 realistic breast phantoms; improved in-plane contrast (3.17 versus 0.43, p < 0.01) and in-depth resolution (1.19 mm versus 4.96 mm, p < 0.01) were demonstrated by the proposed method compared to iterative reconstruction.
|
|
MoAbPo-02 Poster Session, Oakdale Foyer / Coral Foyer |
Add to My Program |
Image Quantification for Visualization and Surgical Planning |
|
|
Chair: Gonzalez Ballester, Miguel Angel | ICREA & Universitat Pompeu Fabra |
|
10:30-12:00, Paper MoAbPo-02.1 | Add to My Program |
Panoramic View of Human Jaw under Ambiguity Intraoral Camera Movement |
|
Ghanoum, Mohamad | Mohamad Ghanoum |
Ali, Asem | University of Louisville |
Elshazly, Salwa | University of Louisville |
Alkabbany, Islam | University of Louisville |
Farag, Aly A. | University of Louisville |
Keywords: Image registration, Tooth
Abstract: Intra-oral cameras are recommended tools in every dental clinic; they allow dentists to capture images of difficult-to-reach areas in the mouth. Oral dental applications based on visual data suffer from various challenges, such as environmental effects and unstable camera movement. We propose an approach that addresses these challenges to stitch human teeth images captured by an intra-oral camera. First, because of the low density of texture on the tooth surface, normal maps are estimated to reveal the geometric properties of each image: area, boundary, and shape. Normal maps are rich in features, which can be used in the stitching process. Second, we address the unrestricted camera movement problem: the camera may move along the jaw curve at different angles and distances due to hand shake. To overcome this problem, we test each frame after warping it, and only correct frames are used to generate the panoramic view. The proposed approach achieves performance similar to that of cases in which the camera movement is restricted.
|
|
10:30-12:00, Paper MoAbPo-02.2 | Add to My Program |
Low-Shot Learning of Automatic Dental Plaque Segmentation Based on Local-To-Global Feature Fusion |
|
Li, Shuai | Beihang University |
Pang, Zhennan | Beihang University |
Song, Wenfeng | Beihang University |
Guo, Yuting | Beihang University |
You, Wenzhe | Peking University School and Hospital of Stomatology |
Hao, Aimin | School of Computer Science and Engineering, Beihang University |
Qin, Hong | Stony Brook University
Keywords: Image segmentation, Tooth, Endoscopy
Abstract: Early detection of dental plaque could prevent periodontal diseases and dental caries; however, plaque is difficult to recognize without medical dyeing reagents due to the low contrast between dental plaque and teeth. To combat this problem, this paper introduces a novel low-shot learning method for intelligent dental plaque segmentation directly using oral endoscope images. The key contribution is to conduct low-shot learning at the super-pixel level and integrate the super-pixels' global and local features towards better segmentation results. Our rationale is that the super-pixel-based CNN feature captures the statistical distribution of plaque color, the heat kernel signature (HKS) captures the local-to-global structure relationship in the regions centering around the plaque area, and the circle-LBP feature depicts the local texture pattern in the plaque area. The experimental results confirm that our method outperforms the state-of-the-art methods on small-scale training datasets, and a user study demonstrates that our method is more accurate than conventional manual delineation by experienced dentists.
|
|
10:30-12:00, Paper MoAbPo-02.3 | Add to My Program |
Tooth Segmentation and Labeling from Digital Dental Casts |
|
Sun, Diya | Key Laboratory of Machine Perception (MOE), Department of Machin |
Pei, Yuru | Peking University |
Song, Guangying | Peking University |
Guo, Yuke | Luoyang Institute of Science and Technology |
Ma, Gengyu | USens Inc |
Xu, Tianmin | Peking University |
Zha, Hongbin | Peking University |
Keywords: Image segmentation, Machine learning, Tooth
Abstract: This paper presents an approach to automatic and accurate segmentation and identification of individual teeth from digital dental casts via deep graph convolutional neural networks. Instead of performing the teeth-gingiva and inter-tooth segmentation in two separate phases, the proposed method enables the simultaneous segmentation and identification of the gingiva and teeth. We perform the vertex-wise feature learning via the feature steered graph convolutional neural network (FeaStNet) [1], which dynamically updates the mapping between convolutional filters and local patches from digital dental casts. The proposed framework handles the tightly intertwined segmentation and labeling tasks with a novel constraint on crown shape distribution and concave contours to remove ambiguous labeling of neighboring teeth. We further enforce smooth segmentation using the pairwise relationship in local patches to penalize rough and inaccurate region boundaries and regularize the vertex-wise labeling in the training process. Qualitative and quantitative evaluations on digital dental casts obtained in clinical orthodontics demonstrate that the proposed method achieves efficient and accurate tooth segmentation and produces performance improvements over the state-of-the-art.
|
|
10:30-12:00, Paper MoAbPo-02.4 | Add to My Program |
Towards Uncertainty Quantification for Electrode Bending Prediction in Stereotactic Neurosurgery |
|
Granados, Alejandro | King's College London |
Lucena, Oeslle | King's College London |
Vakharia, Vejay | National Hospital of Neurology and Neurosurgery |
Miserocchi, Anna | National Hospital of Neurology and Neurosurgery |
McEvoy, Andrew W | National Hospital of Neurology and Neurosurgery |
Vos, Sjoerd | University College London |
Rodionov, Roman | National Hospital of Neurology and Neurosurgery |
Duncan, John S | National Hospital for Neurology and Neurosurgery |
Sparks, Rachel | University College of London |
Ourselin, Sebastien | University College London |
Keywords: Surgical guidance/navigation, Quantification and estimation, Brain
Abstract: Implantation accuracy of electrodes during stereotactic neurosurgery is necessary to ensure safety and efficacy. However, electrodes deflect from planned trajectories. Although mechanical models and data-driven approaches have been proposed for trajectory prediction, they do not report the uncertainty of their predictions. We propose to use Monte Carlo (MC) dropout on neural networks to quantify the uncertainty of predicted electrode local displacement. We compute image features of 23 stereoelectroencephalography cases (241 electrodes) and use them as inputs to a neural network that regresses electrode local displacement. We use MC dropout with 200 stochastic passes to quantify the uncertainty of predictions. To validate our approach, we define a baseline model without dropout and compare it to a stochastic model using 10-fold cross-validation. Given a starting planned trajectory, we predicted electrode bending via simulation using the inferred local displacement at the tip. We found that MC dropout performed better than the non-stochastic baseline model and provided confidence intervals along the predicted trajectory of electrodes. We believe this approach facilitates better decision making for electrode bending prediction in surgical planning.
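The MC-dropout recipe itself is simple to sketch: keep dropout active at inference, run many stochastic forward passes, and report the mean as the prediction and the spread as its uncertainty. The tiny two-layer network below is a hypothetical stand-in, not the paper's model:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.2, n_passes=200, rng=None):
    """Toy MC-dropout regression: dropout stays on at test time; the
    mean over passes is the prediction, the std its uncertainty."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_passes):
        mask = rng.random(W1.shape[1]) >= p            # drop hidden units
        h = np.maximum(x @ W1, 0.0) * mask / (1.0 - p) # inverted-dropout scaling
        preds.append(h @ W2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

With the dropout rate set to zero the passes are deterministic and the reported uncertainty collapses to zero, which is a quick way to check the plumbing.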
|
|
10:30-12:00, Paper MoAbPo-02.5 | Add to My Program |
A Deep Learning-Facilitated Radiomics Solution for the Prediction of Lung Lesion Shrinkage in Non-Small Cell Lung Cancer Trials |
|
Chen, Antong | Merck & Co., Inc |
Saouaf, Jennifer | Merck & Co., Inc |
Zhou, Bo | Carnegie Mellon University |
Crawford, Randolph | Merck |
Yuan, Jianda | Merck & Co., Inc |
Ma, Junshui | Merck |
Baumgartner, Richard | Merck & Co., Inc |
Wang, Shubing | Merck & Co., Inc |
Goldmacher, Gregory | Merck & Co., Inc |
Keywords: Computational Imaging, Machine learning, Computed tomography (CT)
Abstract: Herein we propose a deep learning-based approach for the prediction of lung lesion response based on radiomic features extracted from clinical CT scans of patients in non-small cell lung cancer trials. The approach starts with the classification of lung lesions from the set of primary and metastatic lesions at various anatomic locations. Focusing on the lung lesions, we perform automatic segmentation to extract their 3D volumes. Radiomic features are then extracted from the lesion on the pre-treatment scan and the first follow-up scan to predict which lesions will shrink at least 30% in diameter during treatment (either pembrolizumab or combinations of chemotherapy and pembrolizumab), which is defined as a partial response by the Response Evaluation Criteria In Solid Tumors (RECIST) guidelines. A 5-fold cross validation on the training set led to an AUC of 0.84 ± 0.03, and the prediction on the testing dataset reached AUC of 0.73 ± 0.02 for the outcome of 30% diameter shrinkage.
|
|
MoAbPo-03 Poster Session, Oakdale Foyer / Coral Foyer |
Add to My Program |
Image Registration |
|
|
Chair: Malandain, Gregoire | INRIA |
|
10:30-12:00, Paper MoAbPo-03.1 | Add to My Program |
Accelerating the Registration of Image Sequences by Spatio-Temporal Multilevel Strategies |
|
Aggrawal, Hari Om | University of Luebeck |
Modersitzki, Jan | University of Lubeck |
Keywords: Image registration, Integration of multiscale information, Optical coherence tomography
Abstract: Multilevel strategies are an integral part of many image registration algorithms. These strategies are well known for avoiding undesirable local minima, providing an outstanding initial guess, and reducing overall computation time. State-of-the-art multilevel strategies build a hierarchy of discretizations in the spatial dimensions. In this paper, we present a spatio-temporal strategy that introduces a hierarchical discretization in the temporal dimension at each spatial level. This strategy is suitable for motion estimation problems where the motion is assumed to be smooth over time. Our strategy exploits the temporal smoothness among image frames by following a predictor-corrector approach: it predicts the motion by a novel interpolation method and later corrects it by registration. The prediction step provides a good initial guess for the correction step, reducing the overall computational time for registration. A speedup of 2.5 on average over state-of-the-art multilevel methods is achieved on three examined optical coherence tomography datasets.
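The predictor step can be illustrated in miniature: given motion estimates at a coarse temporal level, predict the displacement fields at intermediate frames by interpolating over time, and hand those predictions to the registration as initial guesses. Plain linear interpolation is used here as a stand-in for the paper's novel interpolation method:

```python
import numpy as np

def predict_displacements(coarse_times, coarse_disp, fine_times):
    """Predict displacement fields at a finer temporal level by linear
    interpolation of coarse-level estimates (sketch of a predictor step)."""
    coarse_disp = np.asarray(coarse_disp)              # (T_coarse, ...) fields
    flat = coarse_disp.reshape(len(coarse_times), -1)
    pred = np.empty((len(fine_times), flat.shape[1]))
    for j in range(flat.shape[1]):
        pred[:, j] = np.interp(fine_times, coarse_times, flat[:, j])
    return pred.reshape((len(fine_times),) + coarse_disp.shape[1:])
```

When the true motion is smooth over time, such predictions start the corrector close to the solution, which is where the reported speedup comes from.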
|
|
10:30-12:00, Paper MoAbPo-03.2 | Add to My Program |
Diffeomorphic Registration for Retinotopic Mapping Via Quasiconformal Mapping |
|
Tu, Yanshuai | Arizona State University |
Ta, Duyan | Arizona State University |
Gu, David Xianfeng | State University of New York at Stony Brook |
Lu, Zhonglin | New York Univ |
Wang, Yalin | Arizona State University |
Keywords: Brain, fMRI analysis, Atlases
Abstract: The human visual cortex is organized into several functional regions/areas. Identifying these visual areas of the human brain (e.g., V1, V2, V4) is an important topic in neurophysiology and vision science. Retinotopic mapping via functional magnetic resonance imaging (fMRI) provides a non-invasive way of defining the boundaries of the visual areas. It is well known from neurophysiology studies that retinotopic mapping is diffeomorphic within each local area (i.e., locally smooth, differentiable, and invertible). However, due to the low signal-noise ratio of fMRI, the retinotopic maps from fMRI are often not diffeomorphic, making it difficult to delineate the boundaries of visual areas. The purpose of this work is to generate diffeomorphic retinotopic maps and improve the accuracy of the retinotopic atlas from fMRI measurements through the development of a specifically designed registration procedure. Although existing cortical surface registration methods are very advanced, most of them have not fully utilized the features of retinotopic mapping. By considering those features, we form a mathematical model for registration and solve it with numerical methods. We compared our registration with several popular methods on synthetic data. The results demonstrate that the proposed registration is superior to conventional methods for the registration of retinotopic maps. The application of our method to a real retinotopic mapping dataset also resulted in much smaller registration errors.
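The diffeomorphism check behind quasiconformal approaches can be made concrete: a planar map f = fx + i·fy is locally diffeomorphic (orientation-preserving quasiconformal) exactly where its Beltrami coefficient mu = f_z̄ / f_z satisfies |mu| < 1. A minimal grid-based computation, assuming a regularly sampled map:

```python
import numpy as np

def beltrami_coefficient(fx, fy, spacing=1.0):
    """Beltrami coefficient mu = f_zbar / f_z of a planar map f = fx + i*fy
    on a regular grid (x along axis 1, y along axis 0); |mu| < 1 everywhere
    characterizes a locally diffeomorphic, quasiconformal map."""
    f = fx + 1j * fy
    df_dx = np.gradient(f, spacing, axis=1)
    df_dy = np.gradient(f, spacing, axis=0)
    fz = 0.5 * (df_dx - 1j * df_dy)
    fzbar = 0.5 * (df_dx + 1j * df_dy)
    return fzbar / np.where(np.abs(fz) < 1e-12, 1.0, fz)
```

For the identity map mu vanishes, and for the affine map f(z) = z + 0.25·z̄ it equals 0.25 everywhere, matching the closed-form values.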
|
|
10:30-12:00, Paper MoAbPo-03.3 | Add to My Program |
Automatic Multimodal Registration Via Intraprocedural Cone-Beam CT Segmentation Using MRI Distance Maps |
|
Augenfeld, Zachary | Yale University |
Lin, MingDe | Visage Imaging, Inc |
Chapiro, Julius | Yale University |
Duncan, James | Yale University |
Keywords: Image-guided treatment, Image registration, Multi-modality fusion
Abstract: Accurate multimodal registration is integral when fusing spatial information from two or more medical images. Specifically, image-guided procedures involve acquiring images under less than ideal conditions, so the ability to map segmented regions of interest from a diagnostic image into the intraprocedural imaging domain becomes especially important. However, standard methods of multimodal registration may not be feasible, depending on intraprocedural image quality. In order to deal with such cases, in this paper we propose a series of two convolutional neural networks performing segmentation of the target image, the predicted outputs of which are then utilized as inputs to a feature-matching registration framework. Each network is trained in a supervised fashion, with robust point matching (RPM) being performed interstitially to generate signed distance maps to be included in training the second network. By supplementing the target image segmentation with dense spatial information derived from the source image, the accuracy of both segmentation and registration is improved. This provides the first fully automated framework for registering diagnostic MRI to intraprocedural cone-beam CT (CBCT) images used to guide transcatheter arterial chemoembolization (TACE), a standard interventional therapy used to treat primary liver cancer.
|
|
10:30-12:00, Paper MoAbPo-03.4 | Add to My Program |
A Generalizable Framework for Domain-Specific Nonrigid Registration: Application to Cardiac Ultrasound |
|
Peoples, Jacob | Queen's University |
Ellis, Randy | Queen's University |
Keywords: Image registration, Optimization method, Ultrasound
Abstract: Many applications of nonrigid point set registration could benefit from a domain-specific model of allowed deformations. We observe that registration methods using mixture models optimize a differentiable log-likelihood function and are thus amenable to gradient-based optimization. In theory, this allows optimization of any transformations that are expressed as arbitrarily nested differentiable functions. In practice such optimization problems are readily handled with modern machine learning tools. We demonstrate, in experiments on synthetic data generated from a model of the left cardiac ventricle, that complex nested transformations can be robustly optimized using this approach. As a realistic application, we also use the method to propagate the model through an entire cardiac ultrasound sequence. We conclude that this approach, which works with both points and oriented points, provides an easily generalizable framework in which complex, application-specific transformation models may be constructed and optimized.
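The "differentiable log-likelihood" observation can be shown in a tiny instance: fit a translation aligning one point set to another by gradient ascent on an isotropic Gaussian-mixture likelihood. Real models would nest richer differentiable transforms in place of the translation; the bandwidth, learning rate, and step count below are illustrative:

```python
import numpy as np

def fit_translation(src, dst, sigma=0.5, lr=0.1, n_steps=200):
    """Gradient ascent on the mean GMM log-likelihood of dst under
    mixture components centered at src + t; returns the fitted t."""
    t = np.zeros(src.shape[1])
    for _ in range(n_steps):
        diff = dst[:, None, :] - (src[None, :, :] + t)     # (M, N, D) residuals
        w = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))
        w = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)  # responsibilities
        grad = np.sum(w[..., None] * diff, axis=(0, 1)) / (sigma**2 * len(dst))
        t = t + lr * grad
    return t
```

In a framework with automatic differentiation, the same objective works unchanged for arbitrarily nested transformation models, which is the generalizability the paper argues for.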
|
|
10:30-12:00, Paper MoAbPo-03.5 | Add to My Program |
Non-Rigid 2d-3d Registration Using Convolutional Autoencoders |
|
Li, Peixin | Peking University |
Pei, Yuru | Peking University |
Guo, Yuke | Luoyang Institute of Science and Technology |
Ma, Gengyu | USens Inc |
Xu, Tianmin | Peking University |
Zha, Hongbin | Peking University |
Keywords: Image registration, X-ray imaging, Machine learning
Abstract: In this paper, we propose a novel neural network-based framework for the non-rigid 2D-3D registration of the lateral cephalogram and the volumetric cone-beam CT (CBCT) images. The task is formulated as an embedding problem, where we utilize the statistical volumetric representation and embed the X-ray image to a code vector regarding the non-rigid volumetric deformations. In particular, we build a deep ResNet-based encoder to infer the code vector from the input X-ray image. We design a decoder to generate digitally reconstructed radiographs (DRRs) from the non-rigidly deformed volumetric image determined by the code vector. The parameters of the encoder are optimized by minimizing the difference between synthetic DRRs and input X-ray images in an unsupervised way. Without geometric constraints from multi-view X-ray images, we exploit structural constraints of the multi-scale feature pyramid in similarity analysis. The training process is unsupervised and does not require paired 2D X-ray images and 3D CBCT images. The system allows constructing a volumetric image from a single X-ray image and realizes the 2D-3D registration between the lateral cephalograms and CBCT images.
|
|
10:30-12:00, Paper MoAbPo-03.6 | Add to My Program |
Liver DCE-MRI Registration Based on Sparse Recovery De-Enhanced Curves |
|
Sun, Yuhang | Southern Medical University |
Feng, Qianjin | Southern Medical University |
Keywords: Image registration, Perfusion imaging, Liver
Abstract: Early diagnosis of liver cancer is particularly important in reducing its high mortality, because some symptoms do not appear until the cancer cannot be reversed. Dynamic contrast-enhanced MRI (DCE-MRI) plays an important role in distinguishing benign from malignant tumors by dynamically monitoring the uptake and washout of a (gadolinium-based) contrast agent in different tissues. However, patient motion during the few minutes of data acquisition may result in voxel-wise mis-correspondence between adjacent time frames, so DCE-MRI registration is a necessary pre-processing step for motion correction. The challenge of DCE-MRI registration is the large intensity change caused by contrast agent injection; these changes may lead traditional intensity-based registration algorithms to produce unrealistic deformations of contrast-enhanced regions. Our method overcomes this challenge by removing the intensity enhancement. In this paper, we apply sparse representation to reconstruct the time-intensity curves of the contrast agent; the dictionary used for sparse representation includes only the intensity changes over time caused by the contrast agent, which are then removed by simple subtraction. The de-enhanced images reshaped from the “non-contrast” curves can be easily aligned using traditional registration schemes. The main clinical contribution of our work is to improve the accuracy of predicting the characteristic parameters that distinguish benign from malignant tumors.
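The de-enhancement step can be caricatured with a least-squares projection in place of the paper's sparse recovery: express each time-intensity curve in an enhancement-only dictionary and subtract the reconstructed enhancement. The dictionary contents and the use of plain `lstsq` instead of a sparse solver are assumptions for illustration:

```python
import numpy as np

def de_enhance(curves, enh_dict):
    """Remove contrast enhancement from time-intensity curves (columns of
    `curves`, one time sample per row) by projecting onto an
    enhancement-only dictionary `enh_dict` and subtracting."""
    coef, *_ = np.linalg.lstsq(enh_dict, curves, rcond=None)
    return curves - enh_dict @ coef
```

The residual curves carry only the variation the dictionary cannot explain, i.e. the "non-contrast" signal the registration then operates on.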
|
|
10:30-12:00, Paper MoAbPo-03.7 | Add to My Program |
Software Tool to Read, Represent, Manipulate, and Apply N-Dimensional Spatial Transforms |
|
Esteban, Oscar | Stanford University |
Goncalves, Mathias | Stanford University |
Markiewicz, Christopher J. | Stanford University |
Ghosh, Satrajit S. | McGovern Institute for Brain Research, MIT and Dept. of Otolaryn |
Poldrack, Russell A. | Stanford University |
Keywords: Image registration, Other-method, Computational Imaging
Abstract: Spatial transforms formalize mappings between coordinates of objects in biomedical images. Transforms typically are the outcome of image registration methodologies, which estimate the alignment between two images. Image registration is a prominent task present in nearly all standard image processing and analysis pipelines. The proliferation of software implementations of image registration methodologies has resulted in a spread of data structures and file formats used to preserve and communicate transforms. This segregation of formats precludes the compatibility between tools and endangers the reproducibility of results. We propose a software tool capable of converting between formats and resampling images to apply transforms generated by the most popular neuroimaging packages and libraries (AFNI, FSL, FreeSurfer, ITK, and SPM). The proposed software is subject to continuous integration tests to check the compatibility with each supported tool after every change to the code base (https://github.com/poldracklab/nitransforms). Compatibility between software tools and imaging formats is a necessary bridge to ensure the reproducibility of results and enable the optimization and evaluation of current image processing and analysis workflows.
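Whatever the on-disk format, the core operation such a tool must implement is mapping point coordinates through a homogeneous transform. The generic helper below is a sketch of that operation, not the package's API:

```python
import numpy as np

def apply_affine(affine, points):
    """Map N-D points (rows of `points`) through a homogeneous affine
    matrix of shape (D+1, D+1)."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # append 1s column
    return (affine @ homog.T).T[:, :-1]                   # drop homogeneous coord
```

Format converters then reduce to reading each tool's matrix convention (e.g., RAS vs. LPS axes, voxel vs. world coordinates) into this one canonical representation.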
|
|
10:30-12:00, Paper MoAbPo-03.8 | Add to My Program |
An “Augmentation-Free” Rotation Invariant Classification Scheme on Point-Cloud and Its Application to Neuroimaging
|
Yang, Liu | University of California, Berkeley |
Chakraborty, Rudrasis | University of California, Berkeley |
Keywords: Machine learning, Magnetic resonance imaging (MRI)
Abstract: Recent years have witnessed the emergence and increasing popularity of 3D medical imaging techniques with the development of 3D sensors and technology. However, achieving geometric invariance in the processing of 3D medical images is computationally expensive but nonetheless essential due to the presence of possible errors caused by rigid registration techniques. An alternative way to analyze medical imaging is by understanding the 3D shapes represented in terms of point-clouds. Though 3D point-cloud processing is not a “go-to” choice in the medical imaging community, it is a canonical way to preserve rotation invariance. Unfortunately, due to the presence of discrete topology, one cannot use the standard convolution operator on point-clouds. To the best of our knowledge, the existing ways to do “convolution” cannot preserve rotation invariance without explicit data augmentation. Therefore, we propose a rotation invariant convolution operator by inducing topology from the hypersphere. Experimental validation has been performed on the publicly available OASIS dataset in terms of classification accuracy between subjects with and without dementia, demonstrating the usefulness of our proposed method in terms of (a) model complexity, (b) classification accuracy, and, last but most important, (c) invariance to rotations.
|
|
10:30-12:00, Paper MoAbPo-03.9 | Add to My Program |
Improving Interpretability of 2-D Ultrasound of the Lumbar Spine |
|
Porto, Lucas | University of British Columbia |
Rohling, Robert | UBC |
Keywords: Spine, Image registration, Ultrasound
Abstract: Ultrasound-guided anesthesia uses a safe, portable imaging modality to provide visual feedback during needle injection. Widespread adoption of ultrasound-guided anesthesia has been limited primarily by a lack of access to advanced ultrasound technology and a lack of ultrasound training for anesthesiologists. We sought to address these limitations by introducing a method that aids the interpretability of cross-sectional ultrasound from conventional (2D) machines. We propose a constrained registration of a 3D active shape model, constructed from computed tomography (CT) scans of the lumbar spine, to a specific set of targets automatically extracted from 2D B-mode ultrasound images with machine learning models. The registration results in an overlay of the entire bone cross-section of the lumbar spine onto the ultrasound image. Our proposed registration achieved a mean squared error of 1.4 ± 0.3 mm on a set of 43 ultrasound images, which is smaller than the key anatomical features, suggesting that the overlay is suitable for interpretation.
|
|
10:30-12:00, Paper MoAbPo-03.10 | Add to My Program |
Learning Optimal Shape Representations for Multi-Modal Image Registration |
|
Grossiord, Eloise | Institut De Mathématiques De Toulouse |
Risser, Laurent | CNRS - Institut De Mathematiques De Toulouse |
Ken, Soleakhena | Institut Claudius Regaud, INSERM, UMR825 |
Kanoun, Salim | IUCT Oncopole |
Malgouyres, François | Université De Toulouse |
Keywords: Image registration, Optimization method, Computed tomography (CT)
Abstract: In this work, we present a new strategy for the multi-modal registration of atypical structures with boundaries that are difficult to define in medical imaging (e.g. lymph nodes). Instead of using a standard Mutual Information (MI) similarity metric, we propose to combine MI with the Modality Independent Neighbourhood Descriptors (MIND), which help distinguish the organs of interest from their adjacent structures. Our key contribution is then to learn the MIND parameters which optimally represent specific registered structures. As we register atypical organs, neural-network approaches requiring large databases of annotated training data cannot be used. We instead strongly constrain our learning problem using the MIND formalism, so that the optimal representation of images depends on a limited number of parameters. In our results, pure MI-based registration is compared with MI-MIND registration on 3D synthetic images and CT/MR images, with MI-MIND leading to improved structure overlaps. To our knowledge, this is the first time that MI-MIND is evaluated and shown to be relevant for multi-modal registration.
|
|
MoAbPo-04 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Bone and Skeletal Imaging |
|
|
|
10:30-12:00, Paper MoAbPo-04.1 | Add to My Program |
A Novel Approach to Vertebral Compression Fracture Detection Using Imitation Learning and Patch Based Convolutional Neural Network |
|
Iyer, Sankaran | University of New South Wales |
Sowmya, Arcot | University of New South Wales |
Blair, Alan | The University of New South Wales |
White, Christopher | Prince of Wales Hospital |
Dawes, Laughlin | Prince of Wales Hospital |
Moses, Daniel | Prince of Wales Hospital |
Keywords: Machine learning, Computed tomography (CT), Computer-aided detection and diagnosis (CAD)
Abstract: Compression fractures in vertebrae often go undetected clinically for various reasons. If left untreated, they can lead to severe secondary fractures due to osteoporosis. We present here a novel, fully automated approach to the detection of Vertebral Compression Fractures (VCF). The method involves 3D localisation of thoracic and lumbar spine regions using Deep Reinforcement Learning and Imitation Learning. The localised region is then split into 2D sagittal slices around the coronal centre. Each slice is further divided into patches, on which a Convolutional Neural Network (CNN) is trained to detect compression fractures. Experiments for localisation achieved an average Jaccard Index/Dice Coefficient score of 74/85% for 144 CT chest images and 77/86% for 132 CT abdomen images. VCF detection was performed on another 127 chest images after localisation, resulting in an average fivefold cross-validation accuracy of 80%, sensitivity of 79.87% and specificity of 80.73%.
|
|
10:30-12:00, Paper MoAbPo-04.2 | Add to My Program |
Attention-Based CNN for KL Grade Classification: Data from the Osteoarthritis Initiative |
|
Zhang, Bofei | New York University |
Tan, Jimin | New York University |
Cho, Kyunghyun | New York University |
Chang, Gregory | New York University Langone Health |
Deniz, Cem | New York University Langone Heath |
Keywords: Computer-aided detection and diagnosis (CAD), Bone, X-ray imaging
Abstract: Knee osteoarthritis (OA) is a chronic degenerative disorder of joints and the most common reason for total knee joint replacement. Diagnosis of OA involves subjective judgment of symptoms, medical history, and radiographic readings using the Kellgren-Lawrence grade (KL-grade). Deep learning-based methods such as Convolutional Neural Networks (CNNs) have recently been applied to automatically diagnose radiographic knee OA. In this study, we applied a Residual Neural Network (ResNet) to first detect the knee joint from radiographs and then combined ResNet with a Convolutional Block Attention Module (CBAM) to predict the KL-grade automatically. The proposed model achieved a multi-class average accuracy of 74.81%, mean squared error of 0.36, and quadratic Kappa score of 0.88, which demonstrates a significant improvement over published results. The attention maps were analyzed to provide insights into the decision process of the proposed model.
|
|
10:30-12:00, Paper MoAbPo-04.3 | Add to My Program |
Vertebra-Focused Landmark Detection for Scoliosis Assessment |
|
Yi, Jingru | Rutgers University |
Wu, Pengxiang | Rutgers University |
Huang, Qiaoying | Rutgers University |
Qu, Hui | Rutgers University |
Metaxas, Dimitris | Rutgers University |
Keywords: Spine, Computer-aided detection and diagnosis (CAD), X-ray imaging
Abstract: Adolescent idiopathic scoliosis (AIS) is a lifetime disease that arises in children. Accurate estimation of Cobb angles of the scoliosis is essential for clinicians to make diagnosis and treatment decisions. The Cobb angles are measured according to the vertebrae landmarks. Existing regression-based methods for the vertebra landmark detection typically suffer from large dense mapping parameters and inaccurate landmark localization. The segmentation-based methods tend to predict connected or corrupted vertebra masks. In this paper, we propose a novel vertebra-focused landmark detection method. Our model first localizes the vertebra centers, based on which it then traces the four corner landmarks of the vertebra through the learned corner offset. In this way, our method is able to keep the order of the landmarks. The comparison results demonstrate the merits of our method in both Cobb angle measurement and landmark detection on low-contrast and ambiguous X-ray images. Code is available at: https://github.com/yijingru/Vertebra-Landmark-Detection
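The decoding step described in the abstract — recovering ordered corner landmarks from a detected vertebra center and learned per-corner offsets — can be sketched as follows. This is a minimal illustration; the function name, offset ordering, and values are assumptions, not taken from the paper or its code.

```python
import numpy as np

def decode_corners(center, corner_offsets):
    """Recover the four corner landmarks of one vertebra from its
    predicted center and learned per-corner (dx, dy) offsets.
    `corner_offsets` is a (4, 2) array ordered top-left, top-right,
    bottom-left, bottom-right, so landmark ordering is preserved
    by construction."""
    center = np.asarray(center, dtype=float)
    return center + np.asarray(corner_offsets, dtype=float)

# Toy example: a vertebra centred at (50, 40) with symmetric offsets.
offsets = np.array([[-10, -5], [10, -5], [-10, 5], [10, 5]])
corners = decode_corners((50, 40), offsets)
```

Because each corner is tied to a fixed offset channel, the four landmarks never get permuted, which is the property the abstract highlights.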
|
|
10:30-12:00, Paper MoAbPo-04.4 | Add to My Program |
Segmentation of Bone Vessels in 3D Micro-CT Images Using the Monogenic Signal Phase and Watershed |
|
Xu, Hao | Univ Lyon, CNRS UMR 5220, Inserm U1206, INSA Lyon, Université Cl |
Langer, Max | U. De Lyon, CNRS UMR 5220, Inserm U1044, INSA-Lyon, U. Lyon 1 |
Peyrin, Francoise | Université De Lyon, CNRS UMR 5220, INSERM U1206, INSA Lyon |
Keywords: Computed tomography (CT), Image segmentation, Vessels
Abstract: We propose an algorithm based on marker-controlled watershed and the monogenic signal phase asymmetry for the segmentation of bone and micro-vessels in mouse bone. The images are acquired using synchrotron radiation micro-computed tomography (SR-µCT). The marker image is generated with hysteresis thresholding and morphological filters. The control surface is generated using the phase asymmetry of the monogenic signal in order to detect edge-like structures only, as well as improving detection in low contrast areas, such as bone-vessel interfaces. The quality of segmentation is evaluated by comparing to manually segmented images using the Dice coefficient. The proposed method shows substantial improvement compared to a previously proposed method based on hysteresis thresholding, as well as compared to watershed using the gradient image as control surface. The algorithm was applied to images of healthy and metastatic bone, permitting quantification of both bone and vessel structures.
|
|
10:30-12:00, Paper MoAbPo-04.5 | Add to My Program |
Back Shape Measurement and Three-Dimensional Reconstruction of Spinal Shape Using One Kinect Sensor |
|
Xu, Zhenda | The Hong Kong Polytechnic University |
Zhang, Yong | University of Electronic Science and Technology of China |
Fu, Chunyang | University of Electronic Science and Technology of China |
Liu, Limin | West China Hospital, Sichuan University |
Chen, Cong | The University of Hong Kong |
Xu, Wenchao | The Hong Kong Polytechnic University |
Guo, Song | The Hong Kong Polytechnic University |
Keywords: Computer-aided detection and diagnosis (CAD), Spine, Other-modality
Abstract: Spinal screening relies mainly on direct clinical diagnosis or X-ray examination (which exposes the human body to harmful radiation). In general, a lack of knowledge in this area prevents parents from discovering adolescents’ spinal deformation problems at an early age. Therefore, we propose a low-cost, easy-to-use, radiation-free and high-accuracy method to quickly reconstruct the three-dimensional shape of the spine, which can be used for the evaluation of spinal deformation. First, the depth images collected by a Kinect sensor are transformed into three-dimensional point clouds. Then, the features of anatomic landmark points and the spinous processes (SP) line are classified and extracted. Finally, a correlation model between the SP line and the spine midline is established to reconstruct the spine. The results show that the proposed method can extract anatomic landmark points and evaluate scoliosis accurately (average RMS errors of 5 mm and 3 degrees), which is feasible and promising.
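The first step, turning a Kinect depth image into a 3D point cloud, is standard pinhole back-projection. A minimal sketch, assuming Kinect-like intrinsics (the focal lengths and principal point below are illustrative values, not from the paper):

```python
import numpy as np

# Hypothetical Kinect-like intrinsics (assumed for illustration).
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def depth_to_pointcloud(depth):
    """Back-project a depth image (metres) to an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # u varies along columns
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

depth = np.zeros((424, 512))
depth[212, 256] = 1.0                  # one point on the optical axis
cloud = depth_to_pointcloud(depth)
```

Landmark classification and SP-line extraction would then operate on `cloud`; those stages are model-specific and not sketched here.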
|
|
10:30-12:00, Paper MoAbPo-04.6 | Add to My Program |
Towards Shape-Based Knee Osteoarthritis Classification Using Graph Convolutional Networks |
|
von Tycowicz, Christoph | Zuse Institute Berlin |
Keywords: Shape analysis, Machine learning, Bone
Abstract: We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as semi-supervised node classification problem on a graph embedded in shape space. To account for the high-dimensionality and non-Euclidean structure of shape space we employ a combination of an intrinsic dimension reduction together with a graph convolutional neural network. We demonstrate the performance of our derived classifier in comparisons to an alternative extrinsic approach.
|
|
MoAbPo-05 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Brain Segmentation and Characterization II |
|
|
Chair: Caballero Gaudes, Cesar | Basque Center on Cognition, Brain and Language |
Co-Chair: Guevara, Pamela | Universidad De Concepción |
|
10:30-12:00, Paper MoAbPo-05.1 | Add to My Program |
Topology Highlights Neural Deficits of Post-Stroke Aphasia Patients |
|
Wang, Yuan | University of South Carolina |
Behroozmand, Roozbeh | University of South Carolina |
Johnson, Lorelei Phillip | University of South Carolina |
Fridriksson, Julius | University of South Carolina |
Keywords: Probabilistic and statistical models & methods, Brain, EEG & MEG
Abstract: Statistical inference of topological features decoded by persistent homology, a topological data analysis (TDA) algorithm, has been found to reveal patterns in electroencephalographic (EEG) signals that are not captured by standard temporal and spectral analysis. However, potential challenges in applying topological inference to large-scale EEG data are the ambiguity of performing statistical inference and the computational bottleneck. To address this problem, we advance a unified permutation-based inference framework for testing statistical difference in the topological feature persistence landscape (PL) of multi-trial EEG signals. In this study, we apply the framework to compare the PLs in EEG signals recorded in participants with aphasia vs. a matched control group during altered auditory feedback tasks.
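The permutation-based inference idea can be illustrated on scalar summaries of per-trial features. This sketch tests a difference in group means by shuffling group labels; reducing each persistence landscape to a scalar is an assumption for brevity — the paper's framework tests the landscapes themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(group_a, group_b, n_perm=2000):
    """Two-sample permutation test: the p-value is the fraction of
    label shufflings whose mean difference is at least as extreme
    as the observed one (with add-one smoothing)."""
    observed = abs(group_a.mean() - group_b.mean())
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += abs(perm[:n_a].mean() - perm[n_a:].mean()) >= observed
    return (count + 1) / (n_perm + 1)

# Synthetic per-trial summaries for two groups with a real difference.
a = rng.normal(0.0, 1.0, 40)
b = rng.normal(1.5, 1.0, 40)
p = permutation_test(a, b)
```

Because the null distribution is built from the data itself, no parametric assumption on the feature distribution is needed, which is what makes this approach attractive for topological summaries.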
|
|
10:30-12:00, Paper MoAbPo-05.2 | Add to My Program |
Automatic Brain Organ Segmentation with 3d Fully Convolutional Neural Network for Radiation Therapy Treatment Planning |
|
Duanmu, Hongyi | Stony Brook University |
Kim, Jinkoo | Stony Brook University Hospital |
Kanakaraj, Praitayini | Vanderbilt University |
Wang, Andrew Haitian | Stony Brook University |
Joshua, John | Stony Brook University |
Kong, Jun | Georgia State University |
Wang, Fusheng | Stony Brook University |
Keywords: Image segmentation, Computed tomography (CT), Brain
Abstract: 3D organ contouring is an essential step in radiation therapy treatment planning for organ dose estimation as well as for optimizing plans to reduce organs-at-risk doses. Manual contouring is time-consuming and its inter-clinician variability adversely affects outcome studies. These organs also vary dramatically in size, with up to two orders of magnitude difference in volume. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for the automatic segmentation of brain organs. BrainSegNet takes a multiple-resolution-path approach and uses a weighted loss function to address the major challenge of large variability in organ sizes. We evaluated our approach on a dataset of 46 brain CT image volumes with corresponding expert organ contours as reference. Compared with LiviaNet and V-Net, BrainSegNet has superior performance in segmenting tiny or thin organs, such as the chiasm, optic nerves, and cochlea, and outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time of a volume from an hour to less than two minutes, and holds high potential to improve the efficiency of the radiation therapy workflow.
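One common way to realize a weighted loss for organs of very different sizes is to weight each class inversely to its voxel frequency. The sketch below is one such scheme, offered as an assumption — the paper does not specify its exact weighting formula.

```python
import numpy as np

def size_weighted_ce(probs, labels, eps=1e-7):
    """Cross-entropy with per-class weights inversely proportional to
    class frequency, so tiny organs (e.g. chiasm, optic nerves) are
    not drowned out by large ones. Inverse-frequency weighting is an
    assumption; BrainSegNet's actual scheme may differ."""
    n_classes = probs.shape[-1]
    onehot = np.eye(n_classes)[labels]            # (N, C)
    freq = onehot.mean(axis=0) + eps              # per-class voxel frequency
    w = (1.0 / freq) / (1.0 / freq).sum()         # normalised class weights
    ce = -(onehot * np.log(probs + eps)).sum(axis=1)
    return (ce * w[labels]).mean()

# Toy example: class 0 is twice as frequent as class 1.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([0, 0, 1])
loss = size_weighted_ce(probs, labels)
```

With this weighting, a misclassified voxel of a rare structure contributes more to the gradient than one from a large organ, counteracting the two-orders-of-magnitude volume imbalance the abstract describes.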
|
|
10:30-12:00, Paper MoAbPo-05.3 | Add to My Program |
Wnet: An End-To-End Atlas-Guided and Boundary-Enhanced Network for Medical Image Segmentation |
|
Huang, Huimin | Zhejiang University |
Lin, Lanfen | Zhejiang University |
Tong, Ruofeng | Zhejiang University |
Hu, Hongjie | Sir Run Run Shaw Hospital |
Zhang, Qiaowei | Sir Run Run Shaw Hospital, Zhejiang University |
Iwamoto, Yutaro | Ritsumeikan University |
Han, Xianhua | Ritsumeikan University |
Chen, Yen-Wei | Ritsumeikan University |
Wu, Jian | Zhejiang University |
Keywords: Image segmentation, Atlases, Computed tomography (CT)
Abstract: Medical image segmentation is one of the most important pre-processing steps in computer-aided diagnosis, but it is a challenging task because of complex backgrounds and fuzzy boundaries. To tackle these issues, we propose a double U-shape-based architecture named WNet, which is capable of capturing exact positions as well as sharpening boundaries. We first build an atlas-guided segmentation network (AGSN) to obtain a position-aware segmentation map by incorporating prior knowledge of human anatomy. We further devise a boundary-enhanced refinement network (BERN) to yield a clear boundary by hybridizing a Multi-scale Structure Similarity (MS-SSIM) loss function and making full use of refinement at training and inference in an end-to-end way. Experimental results show that the proposed WNet can accurately capture an organ with sharpened details and hence improves performance on two datasets compared to the previous state of the art.
|
|
10:30-12:00, Paper MoAbPo-05.4 | Add to My Program |
Learning Probabilistic Fusion of Multilabel Lesion Contours |
|
Cohen, Gal | Tel-Aviv University |
Greenspan, Hayit K. | Tel Aviv University |
Goldberger, Jacob | Bar-Ilan University |
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Brain
Abstract: Supervised machine learning algorithms, especially in the medical domain, are affected by considerable ambiguity in expert markings, primarily in proximity to lesion contours. In this study we address the case where the experts' opinion for those ambiguous areas is considered as a distribution over the possible values. We propose a novel method that modifies the experts’ distributional opinion at ambiguous areas by fusing their markings based on their sensitivity and specificity. The algorithm can be applied at the end of any label fusion algorithm that can handle soft values. The algorithm was applied to obtain consensus from soft Multiple Sclerosis (MS) segmentation masks. Soft MS segmentations are constructed from manual binary delineations by including lesion-surrounding voxels in the segmentation mask with a reduced confidence weight. The method was evaluated on the MICCAI 2016 challenge dataset, and outperformed previous methods.
|
|
10:30-12:00, Paper MoAbPo-05.5 | Add to My Program |
Segmentation-Based Method Combined with Dynamic Programming for Brain Midline Delineation |
|
Wang, Shen | Peking University |
Liang, Kongming | PKU |
Pan, Chengwei | Deepwise AI Lab |
Ye, Chuyang | Beijing Institute of Technology |
Li, Xiuli | Deepwise Inc |
Liu, Feng | Deepwise Healthcare |
Yu, Yizhou | Deepwise Healthcare |
Wang, Yizhou | Peking University |
Keywords: Brain, Image segmentation, Computed tomography (CT)
Abstract: The midline-related pathological image features are crucial for evaluating the severity of brain compression caused by stroke or traumatic brain injury (TBI). Automated midline delineation not only improves the assessment and clinical decision making for patients with stroke symptoms or head trauma but also reduces the time to diagnosis. Nevertheless, most previous methods model the midline by localizing anatomical points, which are hard to detect or even missing in severe cases. In this paper, we formulate brain midline delineation as a segmentation task and propose a three-stage framework. The proposed framework first aligns an input CT image into the standard space. Then, the aligned image is processed by a midline detection network (MD-Net) integrated with a CoordConv Layer and a Cascade AtrousConv Module to obtain the probability map. Finally, we formulate the optimal midline selection as a pathfinding problem to address the discontinuity of midline delineation. Experimental results show that our proposed framework can achieve superior performance on one in-house dataset and one public dataset.
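The pathfinding stage can be illustrated with a small dynamic program over the probability map: pick one midline column per row, maximizing total probability while moving at most one column between rows so the delineation stays continuous. This is a generic sketch of such a DP, not the paper's exact formulation.

```python
import numpy as np

def optimal_midline(prob):
    """Select one midline x-position per row by dynamic programming,
    maximising summed probability along a path that shifts at most
    one column between consecutive rows (continuity constraint)."""
    h, w = prob.shape
    score = prob.copy()
    back = np.zeros((h, w), dtype=int)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            best = lo + int(np.argmax(score[y - 1, lo:hi]))
            back[y, x] = best
            score[y, x] += score[y - 1, best]
    path = [int(np.argmax(score[-1]))]
    for y in range(h - 1, 0, -1):
        path.append(int(back[y, path[-1]]))
    return path[::-1]                  # one column index per row

prob = np.zeros((5, 7))
prob[:, 3] = 1.0                       # a vertical midline response
prob[2, 3] = 0.5                       # a weak row the DP must bridge
path = optimal_midline(prob)
```

Even where the network response weakens (row 2), the accumulated score keeps the path on the true midline, which is exactly the discontinuity problem the pathfinding stage targets.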
|
|
10:30-12:00, Paper MoAbPo-05.6 | Add to My Program |
Residual Simplified Reference Tissue Model with Covariance Estimation |
|
Kim, Kyungsang | Massachusetts General Hospital and Harvard Medical School |
Hong, Inki | Siemens |
Son, Young Don | Gachon University |
Kim, Jong-Hoon | Gil Medical Center, Gachon University College of Medicine |
Li, Quanzheng | Harvard Medical School, Massachusetts General Hospital |
Keywords: Nuclear imaging (e.g. PET, SPECT), Brain, Other-method
Abstract: The simplified reference tissue model (SRTM) can robustly estimate binding potential (BP) without a measured arterial blood input function. Although a voxel-wise estimation of BP, a so-called parametric image, is more useful than a region-of-interest (ROI) based estimation of BP, it is challenging to calculate an accurate parametric image due to the low signal-to-noise ratio (SNR) of dynamic PET images. To achieve reliable parametric imaging, temporal images are commonly smoothed prior to kinetic parameter estimation, which degrades resolution significantly. To address this problem, we propose a residual simplified reference tissue model (ResSRTM) using an approximate covariance matrix to robustly compute the parametric image at high resolution. We define the residual dynamic data as the full data excluding each single frame; this has higher SNR and enables accurate estimation of the parametric image. Since dynamic images have correlations across temporal frames, we propose an approximate covariance matrix using neighboring voxels, assuming the noise statistics of neighbors are similar. In phantom simulations and real experiments, we demonstrate that the proposed method outperforms the conventional SRTM method.
|
|
10:30-12:00, Paper MoAbPo-05.7 | Add to My Program |
Scanner Invariant Multiple Sclerosis Lesion Segmentation from MRI |
|
Aslani, Shahab | Istituto Italiano Di Tecnologia (IIT) |
Murino, Vittorio | Istituto Italiano Di Tecnologia |
Dayan, Michael | Istituto Italiano Di Tecnologia |
Tam, Roger | University of British Columbia |
Sona, Diego | Istituto Italiano Di Tecnologia (IIT) |
Hamarneh, Ghassan | Simon Fraser University |
Keywords: Image segmentation, Brain, Magnetic resonance imaging (MRI)
Abstract: This paper presents a simple and effective generalization method for magnetic resonance imaging (MRI) segmentation when data is collected from multiple MRI scanning sites and as a consequence is affected by (site-)domain shifts. We propose to integrate a traditional encoder-decoder network with a regularization network. This added network includes an auxiliary loss term which is responsible for the reduction of the domain shift problem and for the resulting improved generalization. The proposed method was evaluated on multiple sclerosis lesion segmentation from MRI data. We tested the proposed model on an in-house clinical dataset including 117 patients from 56 different scanning sites. In the experiments, our method showed better generalization performance than other baseline networks.
|
|
10:30-12:00, Paper MoAbPo-05.8 | Add to My Program |
Brain Lesion Detection Using a Robust Variational Autoencoder and Transfer Learning |
|
Akrami, Haleh | University of Southern California |
Joshi, Anand | University of Southern California |
Li, Jian | University of Southern California |
Aydore, Sergul | Stevens Institute of Technology |
Leahy, Richard | USC |
Keywords: Machine learning, Brain, Magnetic resonance imaging (MRI)
Abstract: Automated brain lesion detection from multi-spectral MR images can assist clinicians by improving sensitivity as well as specificity in lesion studies. Supervised machine learning methods have been successful in lesion detection. However, these methods usually rely on a large number of manually delineated images for specific imaging protocols and parameters and often do not generalize well to other imaging parameters and demographics. Most recently, unsupervised models such as auto-encoders have become attractive for lesion detection since they do not need access to manually delineated lesions. Despite the success of unsupervised models, using pre-trained models on an unseen dataset is still a challenge, because the new dataset may use different imaging parameters, demographics, and pre-processing techniques. Additionally, using a clinical dataset that has anomalies and outliers can make unsupervised learning challenging, since the outliers can unduly affect the performance of the learned models. These two difficulties make unsupervised lesion detection a particularly challenging task. The method proposed in this work addresses these issues using a two-prong strategy: (1) we use a robust variational autoencoder model that is based on robust statistics, specifically the beta-divergence, which can learn from data with outliers; (2) we use a transfer-learning method for learning models across datasets with different characteristics. Our results on MRI datasets demonstrate that we can improve the accuracy of lesion detection by adopting robust statistical models and transfer learning for a Variational Auto-Encoder model.
|
|
MoAbPo-06 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Lung, Chest, and Airways Image Analysis I |
|
|
Chair: Soltanian-Zadeh, Hamid | University of Tehran |
Co-Chair: Christensen, Gary E. | The University of Iowa |
|
10:30-12:00, Paper MoAbPo-06.1 | Add to My Program |
Classification of Lung Nodules in CT Volumes Using the LUNG-RADS™ Guidelines with Uncertainty Parameterization |
|
Ferreira, Carlos Alexandre | INESC TEC |
Aresta, Guilherme | INESC TEC/FEUP |
Pedrosa, João | INESC TEC |
Rebelo, João | Department of Radiology, São João Hospital |
Negrão, Eduardo | Department of Radiology, São João Hospital |
Cunha, António | Universidade De Trás-Os-Montes E Alto Douro & INESC Tecnologia E |
Ramos, Isabel | Faculty of Medicine, University of Porto |
Campilho, Aurélio | Universidade Do Porto, Instituto De Engenharia Biomédica |
Keywords: Computed tomography (CT), Classification, Lung
Abstract: Lung cancer is currently the most lethal cancer in the world. In order to make screening and follow-up more systematic, guidelines have been proposed. This study therefore aimed to create a diagnostic support approach that provides a patient label based on the LUNG-RADS™ guidelines. The only input required by the system is the nodule centroid, from which the region of interest for the classification system is extracted. With this in mind, two deep learning networks were evaluated: a Wide Residual Network and a DenseNet. Taking into account the annotation uncertainty, we propose to use sample weights introduced in the loss function, allowing nodules with high agreement in the annotation process to have a greater impact on the training error than their counterparts. The best result was achieved with the Wide Residual Network with sample weights, achieving a nodule-wise LUNG-RADS™ labelling accuracy of 0.735±0.003.
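The sample-weighting idea — scaling each nodule's loss by its annotation agreement — can be sketched with a weighted binary cross-entropy. The agreement scores and function below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_bce(p, y, agreement, eps=1e-7):
    """Binary cross-entropy where each nodule's contribution is scaled
    by its annotation agreement (e.g. the fraction of annotators who
    assigned the same label), so high-agreement cases dominate the
    training error."""
    bce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    w = agreement / agreement.sum()        # normalise to a distribution
    return (w * bce).sum()

p = np.array([0.9, 0.6, 0.2])              # predicted probabilities
y = np.array([1.0, 1.0, 0.0])              # labels
agreement = np.array([1.0, 0.25, 0.75])    # hypothetical agreement scores
loss = weighted_bce(p, y, agreement)
```

A nodule all annotators agreed on (weight 1.0) here contributes four times as much as one with an agreement of 0.25, which is the intended effect of the weighting.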
|
|
10:30-12:00, Paper MoAbPo-06.2 | Add to My Program |
Deep Feature Disentanglement Learning for Bone Suppression in Chest Radiographs |
|
Lin, Chunze | Tsinghua University |
Tang, Ruixiang | Tsinghua University |
Lin, Darryl | 12 Sigma Technologies |
Liu, Langechuan | 12Sigma.ai |
Lu, Jiwen | Tsinghua University |
Chen, Yunqiang | 12 Sigma Technologies |
Zhou, Jie | Tsinghua University |
Gao, Dashan | 12 Sigma Technologies |
Keywords: X-ray imaging, Machine learning
Abstract: Suppression of bony structures in chest radiographs is essential for many computer-aided diagnosis tasks. In this paper, we propose a Disentanglement AutoEncoder (DAE) for bone suppression. As the projection of 3D structures of bones and soft tissues overlap in 2D radiographs, their features are interwoven and need to be disentangled for effective bone suppression. Our DAE progressively separates the features of soft-tissues from that of the bony structure during the encoder phase and reconstructs the soft-tissue image based on the disentangled features of soft-tissue. Bone segmentation can be performed concurrently using the separated bony features through a separate multi-task branch. By training the model with multi-task supervision, we explicitly encourage the autoencoder to pay more attention to the locations of bones in order to avoid loss of soft-tissue information. The proposed method is shown to be effective in suppressing bone structures from chest radiographs with very little visual artifacts.
|
|
10:30-12:00, Paper MoAbPo-06.3 | Add to My Program |
Automatic Bounding Box Annotation of Chest X-Ray Data for Localization of Abnormalities |
|
Wu, Joy Tzung-yu | IBM Research - Almaden |
Gur, Yaniv | IBM Almaden Research Center |
Karargyris, Alexandros | IBM |
Bin Syed, Ali | IBM Research |
Boyko, Orest | IBM Research |
Moradi, Mehdi | IBM Research |
Syeda-Mahmood, Tanveer | IBM Almaden Research Center |
Keywords: X-ray imaging, Lung, Machine learning
Abstract: Due to the increasing availability of public chest x-ray datasets over the last few years, automatic detection of findings and their locations in chest x-ray studies has become an important research area for AI application in healthcare. Whereas for finding classification tasks image-level labeling suffices, additional annotation in the form of bounding boxes is required for detection of finding locations. However, the process of marking findings in chest x-ray studies is both time consuming and costly as it needs to be performed by radiologists. To overcome this problem, weakly supervised approaches have been employed to depict finding locations as a byproduct of the classification task, but these approaches have not shown much promise so far. With this in mind, in this paper we propose an automatic approach for labeling chest x-ray images for findings and locations by leveraging radiology reports. Our labeling approach is anatomically standardized to the upper, middle, and lower lung zones for the left and right lung, and is composed of two stages. In the first stage, we use a lung segmentation UNet model and an atlas of normal patients to mark the six lung zones on the image using standardized bounding boxes. In the second stage, the associated radiology report is used to label each lung zone as positive or negative for finding, resulting in a set of six labeled bounding boxes per image. Using this approach we were able to automatically annotate over 13,000 images in a matter of hours, and used this dataset to train an opacity detection model using RetinaNet to obtain results on a par with the state-of-the-art.
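The first-stage zoning — splitting each segmented lung into standardized upper, middle, and lower boxes — reduces to partitioning a mask's bounding box into vertical thirds. A minimal sketch under that assumption (the paper's atlas-based box placement is more involved):

```python
import numpy as np

def lung_zone_boxes(lung_mask):
    """Split a binary lung mask's bounding box into upper/middle/lower
    thirds, returning three (top, bottom, left, right) boxes.
    Run once per lung to obtain the six standardized zones."""
    ys, xs = np.nonzero(lung_mask)
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1
    edges = np.linspace(top, bottom, 4).astype(int)
    return [(edges[i], edges[i + 1], left, right) for i in range(3)]

mask = np.zeros((90, 60), dtype=bool)
mask[15:75, 10:40] = True              # a toy single-lung mask
zones = lung_zone_boxes(mask)
```

Each zone box can then be labeled positive or negative from the report text, giving the six weak bounding-box labels per image that the second stage produces.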
|
|
10:30-12:00, Paper MoAbPo-06.4 | Add to My Program |
Multimodal Fusion of Imaging and Genomics for Lung Cancer Recurrence Prediction |
|
Subramanian, Vaishnavi | University of Illinois at Urbana-Champaign |
Do, Minh | University of Illinois at Urbana-Champaign |
Syeda-Mahmood, Tanveer | IBM Almaden Research Center |
Keywords: Multi-modality fusion, Lung, Computed tomography (CT)
Abstract: Lung cancer has a high rate of recurrence in early-stage patients. Predicting the post-surgical recurrence in lung cancer patients has traditionally been approached using single modality information of genomics or radiology images. We investigate the potential of multimodal fusion for this task. By combining computed tomography (CT) images and genomics, we demonstrate improved prediction of recurrence using linear Cox proportional hazards models with elastic net regularization. We work on a recent non-small cell lung cancer (NSCLC) radiogenomics dataset of 130 patients and observe an increase in concordance-index values of up to 10%. Employing non-linear methods from the neural network literature, such as multi-layer perceptrons and visual-question answering fusion modules, did not improve performance consistently. This indicates the need for larger multimodal datasets and fusion techniques better adapted to this biological setting.
|
|
10:30-12:00, Paper MoAbPo-06.5 | Add to My Program |
AirwayNet-SE: A Simple-Yet-Effective Approach to Improve Airway Segmentation Using Context Scale Fusion |
|
Qin, Yulei | Shanghai Jiao Tong University |
Gu, Yun | Shanghai Jiao Tong University |
Zheng, Hao | Shanghai Jiao Tong University |
Chen, Mingjian | Institute of Image Processing and Pattern Recognition, Shanghai |
Yang, Jie | Shanghai Jiao Tong University |
Zhu, Yuemin | CNRS |
Keywords: Computed tomography (CT), Lung, Image segmentation
Abstract: Accurate segmentation of airways from chest CT scans is crucial for pulmonary disease diagnosis and surgical navigation. However, the intra-class variety of airways and their intrinsic tree-like structure pose challenges to the development of automatic segmentation methods. Previous work that exploits convolutional neural networks (CNNs) does not take context scales into consideration, leading to performance degradation on peripheral bronchioles. We propose the two-step AirwayNet-SE, a Simple-yet-Effective CNN-based approach to improve airway segmentation. The first step is to adopt connectivity modeling to transform the binary segmentation task into a 26-connectivity prediction task, facilitating the model’s comprehension of airway anatomy. The second step is to predict connectivity with a two-stage CNN-based approach. In the first stage, a Deep-yet-Narrow Network (DNN) and a Shallow-yet-Wide Network (SWN) are respectively utilized to learn features with large-scale and small-scale context knowledge. These two features are fused in the second stage to predict each voxel's probability of being airway and its connectivity relationship with its neighbors. We trained our model on 50 CT scans from public datasets and tested on another 20 scans. Compared with state-of-the-art airway segmentation methods, the robustness and superiority of AirwayNet-SE confirm the effectiveness of large-scale and small-scale context fusion. In addition, we released our manual airway annotations of 60 CT scans from public datasets for supervised airway segmentation study.
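The connectivity-modeling step — recasting a binary airway mask as 26-neighbour connectivity targets — can be sketched directly in NumPy. The encoding below (channel k is 1 where a voxel and its k-th 26-neighbour are both airway) is a plausible reading of the transformation; the paper's exact target definition may differ in detail.

```python
import numpy as np

def connectivity_targets(mask):
    """Turn a binary 3D airway mask into a 26-channel target volume:
    channel k is 1 where both the voxel and its k-th 26-neighbour
    are airway, i.e. the connectivity encoding that the binary
    segmentation task is mapped onto."""
    pad = np.pad(mask, 1)
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    d, h, w = mask.shape
    out = np.zeros((26,) + mask.shape, dtype=np.uint8)
    for k, (dz, dy, dx) in enumerate(offsets):
        shifted = pad[1 + dz:1 + dz + d, 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out[k] = mask & shifted
    return out

mask = np.zeros((3, 3, 3), dtype=np.uint8)
mask[1, 1, :] = 1                      # a tiny 3-voxel airway segment
targets = connectivity_targets(mask)
```

The network then predicts these 26 channels instead of a single foreground probability, and the binary mask is recovered from the predicted connectivity at inference time.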
|
|
MoAbPo-07 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Heart Imaging and Analysis I |
|
|
Chair: Grau, Vicente | University of Oxford |
Co-Chair: Frangi, Alejandro | University of Sheffield |
|
10:30-12:00, Paper MoAbPo-07.1 | Add to My Program |
Low-Dose Cardiac-Gated SPECT Via a Spatiotemporal Convolutional Neural Network |
|
Song, Chao | Illinois Institute of Technology |
Yang, Yongyi | Illinois Institute of Technology |
Wernick, Miles | Illinois Institute of Technology |
Pretorius, Hendrik | University of Massachusetts Medical School |
King, Michael A | University of Massachusetts Medical School |
Keywords: Nuclear imaging (e.g. PET, SPECT), Heart, image filtering (e.g. mathematical morphology, wavelets,...)
Abstract: In previous studies, convolutional neural networks (CNNs) have been demonstrated to be effective for suppressing the elevated imaging noise in low-dose single-photon emission computed tomography (SPECT). In this study, we investigate a spatiotemporal CNN model (ST-CNN) to exploit the signal redundancy in both the spatial and temporal domains among the gated frames in a cardiac-gated sequence. In the experiments, we demonstrated the proposed ST-CNN model on a set of 119 clinical acquisitions with the imaging dose reduced by a factor of four. The quantitative results show that ST-CNN can lead to further improvement in the reconstructed myocardium in terms of the overall error level and the spatial resolution of the left ventricular (LV) wall. Compared to a spatial-only CNN, ST-CNN decreased the mean-squared error of the reconstructed myocardium by 21.1% and the full-width at half-maximum of the LV wall by 5.3%.
|
|
10:30-12:00, Paper MoAbPo-07.2 | Add to My Program |
Deep Learning for Time Averaged Wall Shear Stress Prediction in Left Main Coronary Bifurcations |
|
Gharleghi, Ramtin | University of New South Wales |
Samarasinghe, Gihan | University of New South Wales |
Sowmya, Arcot | University of New South Wales |
Beier, Susann | University of New South Wales |
Keywords: Machine learning, Vessels, Computed tomography (CT)
Abstract: Analysing blood flow in coronary arteries has often been suggested as an aid to predicting cardiovascular disease (CVD) risk. Blood-flow-induced hemodynamic indices can serve as predictive measures in this pursuit, and a fast method to calculate them may allow patient-specific treatment considerations for improved clinical outcomes in the future. In vivo measurement of these metrics is not practical, so computational fluid dynamics (CFD) simulations are widely used to investigate blood flow conditions, but they require costly computation time for large-scale studies such as patient-specific considerations in patients screened for CVD. This paper proposes a deep learning approach to estimating the well-established hemodynamic risk indicator time-averaged wall shear stress (TAWSS) from vessel geometry. Using vessel radii, angles between bifurcation (branching) vessels, curvature, and other geometrical features, the model predicts TAWSS with good accuracy, achieving a cross-validated mean absolute error of 0.0407 Pa (standard deviation 0.002 Pa) on a 127-patient CT angiography dataset, while being several orders of magnitude faster than computational simulations. This bypasses costly computational simulations and allows the large-scale population studies required for meaningful CVD risk prediction.
|
|
10:30-12:00, Paper MoAbPo-07.3 | Add to My Program |
MRI-Based Characterization of Left Ventricle Dyssynchrony with Correlation to CRT Outcomes |
|
Yang, Dong | Rutgers University |
Huang, Qiaoying | Rutgers University |
Al’Aref, Subhi | Weill Cornell Medicine |
Min, James | Weill Cornell Medical College |
Axel, Leon | NYU Medical Center |
Metaxas, Dimitris | Rutgers University |
Keywords: Heart, Magnetic resonance imaging (MRI), Modeling - Anatomical, physiological and pathological
Abstract: Cardiac resynchronization therapy (CRT) can improve cardiac function in some patients with heart failure (HF) and dyssynchrony. However, as many as half of patients selected for CRT by conventional criteria (HF and ECG QRS broadening greater than 150 ms, preferably with left bundle branch block (LBBB) pattern) do not benefit from it. LBBB leads to characteristic motion changes seen with echocardiography and magnetic resonance imaging (MRI). Attempts to use echocardiography to quantitatively characterize dyssynchrony have failed to improve prediction of response to CRT. We introduce a novel hybrid model-based and machine learning approach to characterize regional 3D cardiac motion in dyssynchrony from MRI, using deformable models and deep learning. First, 3D left ventricle (LV) models of the moving heart are constructed from multiple planes of cine MRI. Using the conventional 17-segment (AHA) model, we capture the regional 3D motion of each segment of the LV wall. Then, a neural network is used to detect and classify abnormalities of regional cardiovascular motion. Using data from over 100 patients, we show that different types of dyssynchrony can be accurately demonstrated in 3D+t space, along with their correlation to CRT response.
|
|
10:30-12:00, Paper MoAbPo-07.4 | Add to My Program |
Machine Learning and Graph Based Approach to Automatic Right Atrial Segmentation from Magnetic Resonance Imaging |
|
Regehr, Matthew | University of Alberta |
Volk, Andrew James Alexander | University of Alberta |
Noga, Michelle | University of Alberta |
Punithakumar, Kumaradevan | University of Alberta |
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Machine learning
Abstract: Manual delineation of the right atrium throughout the cardiac cycle is tedious and time-consuming, yet promising for the early detection of right heart dysfunction. In this study, we developed a fully automated approach to right atrial segmentation in 4-chamber long-axis magnetic resonance image (MRI) cine sequences by applying a U-Net based neural network followed by a contour reconstruction and refinement algorithm. In contrast to U-Net, the proposed approach performs segmentation using open contours. This allows the tricuspid valve region to be excluded from the atrial segmentation, an essential aspect of the analysis of atrial wall motion. The MR images were retrospectively acquired from 242 cine sequences, which were manually segmented by an expert radiologist to produce the ground-truth data. The neural network was trained over 600 epochs under six different hyperparameter configurations on 202 randomly selected sequences to recognize a dilated region surrounding the right atrial contour. A graph algorithm is then applied to the binary labels predicted by the trained model to accurately reconstruct the corresponding contours. Finally, the contours are refined by combining a nonrigid registration algorithm, which tracks the deformation of the heart, with Gaussian process regression. Evaluation of the proposed method on the remaining 40 MR image sequences, excluding a single outlier sequence, yielded promising Sørensen–Dice coefficients and Hausdorff distances of 95.2% and 4.64 mm, respectively, before refinement, and 94.9% and 4.38 mm afterward.
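The Sørensen–Dice coefficient and Hausdorff distance reported above are standard segmentation-quality metrics; a minimal NumPy sketch of how they are computed, on hypothetical toy masks rather than the paper's data:

```python
import numpy as np

def dice(a, b):
    """Sørensen–Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy example: two overlapping square masks shifted by one pixel
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
print(round(dice(a, b), 3))  # → 0.694
pa = np.argwhere(a).astype(float)
pb = np.argwhere(b).astype(float)
print(hausdorff(pa, pb))     # → sqrt(2), the diagonal one-pixel shift
```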
|
|
10:30-12:00, Paper MoAbPo-07.5 | Add to My Program |
Automatic Extraction and Sign Determination of Respiratory Signal in Real-Time Cardiac Magnetic Resonance Imaging |
|
Chen, Chong | The Ohio State University |
Liu, Yingmin | Ohio State University |
Simonetti, Orlando | The Ohio State University |
Ahmad, Rizwan | Ohio State University |
Keywords: Heart, Magnetic resonance imaging (MRI), Image registration
Abstract: In real-time (RT) cardiac cine imaging, a stack of 2D slices is collected sequentially under free-breathing conditions. A complete heartbeat from each slice is then used for cardiac function quantification. The inter-slice respiratory mismatch can compromise accurate quantification of cardiac function. Methods based on principal components analysis (PCA) have been proposed to extract the respiratory signal from RT cardiac cine, but these methods cannot resolve the inter-slice sign ambiguity of the respiratory signal. In this work, we propose a fully automatic sign correction procedure based on the similarity of neighboring slices and correlation to the center-of-mass curve. The proposed method is evaluated in eleven volunteers, with ten slices per volunteer. The motion in a manually selected region-of-interest (ROI) is used as a reference. The results show that the extracted respiratory signal has a high, positive correlation with the reference in all cases. The qualitative assessment of images also shows that the proposed approach can accurately identify heartbeats, one from each slice, belonging to the same respiratory phase. This approach can improve cardiac function quantification for RT cine without manual intervention.
|
|
10:30-12:00, Paper MoAbPo-07.6 | Add to My Program |
Accelerated Phase Contrast Magnetic Resonance Imaging Via Deep Learning |
|
Nath, Ruponti | University of Louisville |
Callahan, Sean | University of Louisville |
Singam, Narayana | University of Louisville |
Stoddard, Marcus | University of Louisville |
Amini, Amir | University of Louisville |
Keywords: Magnetic resonance imaging (MRI), Heart, Computational Imaging
Abstract: In this paper, we propose a framework for accelerated reconstruction of 2D phase contrast magnetic resonance images from the undersampled k-space domain using deep learning methods. Undersampling in k-space violates the Nyquist sampling criterion and creates artifacts in the image domain. In the proposed method, we treat the reconstruction problem as a de-aliasing problem in the complex spatial domain. To test the proposed method, undersampling of fully sampled k-space data was performed in the phase-encode direction based on a probability density function that ensures the maximum sampling rate in low-frequency regions. For the deep convolutional neural network (CNN), we chose the U-Net architecture. The proposed CNN was trained and tested on 4D flow MRI data from 14 subjects with aortic stenosis. The reconstructed complex two-channel images showed that the U-Net is able to unalias the undersampled flow images, with the resulting magnitude and phase-difference images showing good agreement with the fully sampled magnitude and phase images. We show that the proposed method outperforms a 2D compressed sensing approach based on spatial total variation regularization. Flow waveforms derived from the reconstructed images closely follow flow waveforms derived from the original data. Moreover, the method is computationally fast: each 2D magnitude and phase image is reconstructed within a second using a single GPU.
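Variable-density undersampling of the kind described above can be sketched as keeping a fully sampled low-frequency core and drawing the remaining phase-encode lines from a density that peaks at the k-space center. The acceleration factor, core size, and polynomial decay below are hypothetical parameters, not the authors' exact density:

```python
import numpy as np

def vardens_mask(n_pe, accel=4, core_frac=0.08, decay=2.0, seed=0):
    """Binary phase-encode sampling mask: a fully sampled low-frequency
    core plus random lines drawn from a polynomially decaying density."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n_pe) - n_pe // 2) / (n_pe / 2)  # normalized |k|
    pdf = (1.0 - k) ** decay                              # peaked at k = 0
    core = k <= core_frac                                 # always sampled
    pdf[core] = 0.0                                       # don't redraw core
    n_keep = n_pe // accel - core.sum()                   # budget for extras
    pdf /= pdf.sum()
    picks = rng.choice(n_pe, size=max(int(n_keep), 0), replace=False, p=pdf)
    mask = core.copy()
    mask[picks] = True
    return mask

mask = vardens_mask(256)
print(mask.sum(), "of 256 phase-encode lines kept")  # 4x acceleration
```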
|
|
10:30-12:00, Paper MoAbPo-07.7 | Add to My Program |
A One-Shot Learning Framework for Assessment of Fibrillar Collagen from Second Harmonic Generation Images of an Infarcted Myocardium |
|
Liu, Qun | Louisiana State University |
Mukhopadhyay, Supratik | Louisiana State University |
Rodriguez, Maria Ximena Bastidas | Universidad Nacional De Colombia |
Fu, Xing | Louisiana State University |
Sahu, Sushant | Louisiana State University |
Burk, David | Louisiana State University |
Gartia, Manas Ranjan | University of Illinois Urbana Champaign |
Keywords: Microscopy - Multi-photon, Heart, Machine learning
Abstract: Myocardial infarction (MI) is a scientific term that refers to heart attack. In this study, we combine induction of highly specific second harmonic generation (SHG) signals from non-centrosymmetric macromolecules such as fibrillar collagens together with two-photon excited cellular autofluorescence in infarcted mouse heart to quantitatively probe fibrosis, especially targeted at an early stage after MI. We present robust one-shot machine learning algorithms that enable determination of spatially resolved 2D structural organization of collagen as well as structural morphologies in heart tissues post-MI with spectral specificity and sensitivity. Detection, evaluation, and precise quantification of fibrosis extent at early stage would guide one to develop treatment therapies that may prevent further progression and determine heart transplant needs for patient survival.
|
|
MoAbPo-08 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Image Enhancement, Denoising, Deconvolution |
|
|
|
10:30-12:00, Paper MoAbPo-08.1 | Add to My Program |
Using a Generative Adversarial Network for CT Normalization and Its Impact on Radiomic Features |
|
Wei, Leihao | University of California, Los Angeles |
Lin, Yannan | University of California, Los Angeles |
Hsu, William | University of California, Los Angeles |
Keywords: Computed tomography (CT), Image enhancement/restoration(noise and artifact reduction), Lung
Abstract: Computer-aided diagnosis (CADx) systems assist radiologists with identifying and classifying potentially malignant pulmonary nodules on chest CT scans using morphology and texture-based (radiomic) features. However, radiomic features are sensitive to differences in acquisitions due to variations in dose levels and slice thickness. This study investigates the feasibility of generating a normalized scan from heterogeneous CT scans as input. We obtained projection data from 40 low-dose chest CT scans, simulating acquisitions at 10%, 25% and 50% dose and reconstructing the scans at 1.0 mm and 2.0 mm slice thickness. A 3D generative adversarial network (GAN) was used to simultaneously normalize reduced-dose, thick-slice (2.0 mm) images to normal-dose (100%), thinner-slice (1.0 mm) images. We evaluated the normalized image quality using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and Learned Perceptual Image Patch Similarity (LPIPS). Our GAN improved perceptual similarity by 35% compared to a baseline CNN method. Our analysis also shows that the GAN-based approach led to a significantly smaller error (p-value < 0.05) in nine studied radiomic features. These results indicate that GANs can be used to normalize heterogeneous CT images and reduce the variability in radiomic feature values.
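Of the three image-quality metrics above, PSNR reduces to a few lines of NumPy; a minimal sketch on hypothetical data (SSIM and LPIPS require their respective libraries and are omitted):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)

# toy example: a reference image versus a noisy copy (hypothetical data)
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```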
|
|
10:30-12:00, Paper MoAbPo-08.2 | Add to My Program |
Adversarial Normalization for Multi Domain Image Segmentation |
|
Delisle, Pierre-Luc | École De Technologie Supérieure |
Anctil-Robitaille, Benoit | École De Technologie Supérieure |
Desrosiers, Christian | École De Technologie Supérieure |
Lombaert, Herve | ETS Montreal |
Keywords: Magnetic resonance imaging (MRI), Image segmentation, Image enhancement/restoration(noise and artifact reduction)
Abstract: Image normalization is a critical step in medical imaging. This step is often done on a per-dataset basis, preventing current segmentation algorithms from fully exploiting jointly normalized information across multiple datasets. To solve this problem, we propose an adversarial normalization approach for image segmentation which learns common normalizing functions across multiple datasets while retaining image realism. The adversarial training provides an optimal normalizer that improves the segmentation accuracy and discriminates against unrealistic normalizing functions. Our contribution therefore leverages common imaging information from multiple domains. The optimality of our common normalizer is evaluated by combining brain images from both infants and adults. Results on the challenging iSEG and MRBrainS datasets reveal the potential of our adversarial normalization approach for segmentation, with Dice improvements of up to 59.6% over the baseline.
|
|
10:30-12:00, Paper MoAbPo-08.3 | Add to My Program |
Blind Deconvolution of Fundamental and Harmonic Ultrasound Images |
|
Hourani, Mohamad | University of Toulouse, IRIT/INP-ENSEEIHT |
Basarab, Adrian | Université De Toulouse |
Michailovich, Oleg | University of Waterloo |
Matrone, Giulia | University of Pavia |
Ramalli, Alessandro | University of Florence |
Kouamé, Denis | Université De Toulouse III, IRIT UMR CNRS 5505 |
Tourneret, Jean-Yves | University of Toulouse |
Keywords: Image enhancement/restoration(noise and artifact reduction), Deconvolution, Ultrasound
Abstract: Restoring the tissue reflectivity function (TRF) from ultrasound (US) images is an extensively explored research field. It is well known that human tissues and contrast agents behave non-linearly when interacting with US waves. In this work, we investigate this non-linearity and the benefit of including harmonic US images in the TRF restoration process. We introduce a new US image restoration method that takes advantage of the fundamental and harmonic components of the observed radiofrequency (RF) image. The depth information contained in the fundamental component and the good resolution of the harmonic image are combined to create an image with better properties than the fundamental and harmonic images considered separately. Under the hypothesis of weak scattering, the RF image is modeled as the 2D convolution between the TRF and the system point spread function (PSF). Based on this model, an inverse problem is formulated that jointly estimates the TRF and the PSF. The merit of the proposed blind deconvolution algorithm is shown through an in vivo result and compared to a conventional US restoration method.
|
|
10:30-12:00, Paper MoAbPo-08.4 | Add to My Program |
Bone Structures Extraction and Enhancement in Chest Radiographs Via CNN Trained on Synthetic Data |
|
Gozes, Ophir | Tel Aviv University |
Greenspan, Hayit K. | Tel Aviv University |
Keywords: Image enhancement/restoration(noise and artifact reduction), Machine learning, Image synthesis
Abstract: In this paper, we present a deep learning based image processing technique for extraction of bone structures in chest radiographs using a U-Net FCNN. The U-Net was trained to accomplish the task in a fully supervised setting. To create the training image pairs, we employed simulated X-rays, or Digitally Reconstructed Radiographs (DRR), derived from 664 CT scans belonging to the LIDC-IDRI dataset. Using HU-based segmentation of bone structures in the CT domain, a synthetic 2D "Bone X-ray" DRR is produced and used for training the network. For the reconstruction loss, we utilize two loss functions: L1 loss and perceptual loss. Once the bone structures are extracted, the original image can be enhanced by fusing the original input X-ray and the synthesized "Bone X-ray". We show that our enhancement technique is applicable to real X-ray data, and display our results on the NIH Chest X-Ray-14 dataset.
|
|
10:30-12:00, Paper MoAbPo-08.5 | Add to My Program |
Zero-Shot Medical Image Artifact Reduction |
|
Chen, Yu-Jen | National Tsing Hua University |
Chang, Yen-Jung | National Tsing Hua University |
Wen, Shao-Cheng | National Tsing Hua University |
Shi, Yiyu | University of Notre Dame |
Xu, Xiaowei | Guangdong General Hospital |
Ho, Tsung-Yi | National Tsing Hua University |
Jia, Qianjun | Guangdong General Hospital |
Huang, Meiping | Department of Catheterization Lab, Guangdong Cardiovascular Inst |
Zhuang, Jian | Department of Cardiac Surgery, Guangdong Cardiovascular Institut |
Keywords: Image enhancement/restoration(noise and artifact reduction), Machine learning
Abstract: Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan settings, machine condition, patients' characteristics, and the surrounding environment. However, existing deep learning based artifact reduction methods are restricted by their training sets with specific predetermined artifact types and patterns. As such, they have limited clinical adoption. In this paper, we introduce a "Zero-Shot" medical image Artifact Reduction (ZSAR) framework, which leverages the power of deep learning without using general pre-trained networks or any clean image reference. Specifically, we exploit the low internal visual entropy of an image and train a lightweight, image-specific artifact reduction network to reduce artifacts in an image at test time. We use computed tomography (CT) and magnetic resonance imaging (MRI) as vehicles to show that ZSAR can reduce artifacts better than the state-of-the-art both qualitatively and quantitatively, while requiring a shorter test time. To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without using an a priori training set.
|
|
10:30-12:00, Paper MoAbPo-08.6 | Add to My Program |
Robust Algorithm for Denoising of Photon-Limited Dual-Energy Cone Beam CT Projections |
|
Zavala-Mondragon, Luis Albert | Eindhoven University of Technology |
van der Sommen, Fons | Eindhoven University of Technology |
Ruijters, Daniel | Philips Healthcare |
Steinhauser, Heidrun | Philips Healthcare |
Engel, Klaus Jurgen | Philips Electronics Netherlands |
de With, Peter | Eindhoven University of Technology |
Keywords: Image enhancement/restoration(noise and artifact reduction)
Abstract: Dual-Energy CT offers significant advantages over traditional CT imaging because it provides energy-based awareness of the image content and facilitates material discrimination in the projection domain. The Dual-Energy CT concept has intrinsic redundancy that can be used to improve image quality by jointly exploiting the high- and low-energy projections. In this paper we focus on noise reduction. This work presents a novel noise-reduction algorithm, Dual-Energy Shifted Wavelet Denoising (DESWD), which renders high-quality Dual-Energy CBCT projections out of noisy ones. To do so, we first apply a Generalized Anscombe Transform, enabling us to use denoising methods proposed for Gaussian noise statistics. Second, we use a 3D transformation to denoise all the projections at once. Finally, we exploit the inter-channel redundancy of the projections with a channel-decorrelation step that creates sparsity in the signal for better denoising. Our simulation experiments show that DESWD performs better than a state-of-the-art denoising method (BM4D) in limited photon-count imaging, while BM4D achieves excellent results in less noisy conditions.
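The first step above, the Generalized Anscombe Transform, approximately stabilizes the variance of mixed Poisson-Gaussian noise so that Gaussian denoisers apply. A minimal sketch under the standard parameterization (detector gain and Gaussian noise parameters are assumed inputs, not values from the paper):

```python
import numpy as np

def gat(x, gain=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe Transform for data modeled as
    gain * Poisson + Gaussian(mu, sigma^2); output noise variance is ~1."""
    arg = gain * x + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

# pure-Poisson case (gain=1, sigma=0) reduces to the classic Anscombe
# transform 2*sqrt(x + 3/8); check that the variance is stabilized near 1
rng = np.random.default_rng(0)
counts = rng.poisson(20.0, size=100_000).astype(float)
print(round(gat(counts).var(), 2))
```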
|
|
10:30-12:00, Paper MoAbPo-08.7 | Add to My Program |
Deblurring Cataract Surgery Videos Using a Multi-Scale Deconvolutional Neural Network |
|
Ghamsarian, Negin | Klagenfurt University |
Taschwer, Mario | Klagenfurt University |
Schoeffmann, Klaus | Klagenfurt University |
Keywords: Image enhancement/restoration(noise and artifact reduction), Eye, Machine learning
Abstract: A common quality impairment observed in surgery videos is blur, caused by object motion or a defocused camera. Degraded image quality hampers the progress of machine-learning-based approaches in learning and recognizing semantic information in surgical video frames like instruments, phases, and surgical actions. This problem can be mitigated by automatically deblurring video frames as a preprocessing method for any subsequent video analysis task. In this paper, we propose and evaluate a multi-scale deconvolutional neural network to deblur cataract surgery videos. Experimental results confirm the effectiveness of the proposed approach in terms of the visual quality of frames as well as PSNR improvement.
|
|
MoAbPo-09 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Tracking and Motion Estimation in Microscopy |
|
|
Chair: Munoz-Barrutia, Arrate | Universidad Carlos III De Madrid |
Co-Chair: Kozubek, Michal | Masaryk University |
|
10:30-12:00, Paper MoAbPo-09.1 | Add to My Program |
In Silico Prediction of Cell Traction Forces |
|
Pielawski, Nicolas | Uppsala University |
Hu, Jianjiang | Karolinska Institutet |
Strömblad, Staffan | Karolinska Institutet |
Wählby, Carolina | Centre for Image Analysis and Science for Life Laboratory, Uppsa |
Keywords: Machine learning, Probabilistic and statistical models & methods, Microscopy - Light, Confocal, Fluorescence
Abstract: Traction Force Microscopy (TFM) is a technique used to determine the tensions that a biological cell conveys to the underlying surface. Typically, TFM requires culturing cells on gels with fluorescent beads, followed by bead displacement calculations. We present a new method that predicts these forces from a regular fluorescence image of the cell. Using deep learning, we trained a Bayesian neural network adapted for pixel regression of the forces and show that it generalizes to different cells of the same strain. The predicted forces are computed along with an approximate uncertainty, which indicates whether the prediction is trustworthy. The proposed method could help estimate forces when bead displacements are non-trivial to calculate, and can also free one of the fluorescent channels of the microscope. Code is available at https://github.com/wahlby-lab/InSilicoTFM.
|
|
10:30-12:00, Paper MoAbPo-09.2 | Add to My Program |
Optimizing Particle Detection by Colocalization Analysis in Multi-Channel Fluorescence Microscopy Images |
|
Ritter, Christian | University of Heidelberg, DKFZ Heidelberg |
Newrly, Anne | University of Heidelberg |
Schifferdecker, Sandra | University of Heidelberg |
Roggenbach, Imme | University of Heidelberg |
Müller, Barbara | University of Heidelberg |
Rohr, Karl | Heidelberg University, DKFZ Heidelberg |
Keywords: Microscopy - Light, Confocal, Fluorescence, Single cell & molecule detection, Optimization method
Abstract: Automatic detection of virus particles displayed as small spots in fluorescence microscopy images is an important task for elucidating infection processes. Particles are typically labeled with multiple fluorophores to acquire multi-channel images. We propose a new weakly supervised approach for automatic particle detection in the lower-SNR channel of two-channel fluorescence microscopy data. A main advantage is that labeled data is not required. Instead, colocalization across channels is exploited as a surrogate for ground truth using a novel measure. Our approach has been evaluated using synthetic as well as challenging live cell microscopy images of human immunodeficiency virus type 1 particles. We found that our approach yields results comparable to a state-of-the-art supervised method, and can cope with defective fluorescent labeling as well as chromatic aberration of the microscope.
|
|
10:30-12:00, Paper MoAbPo-09.3 | Add to My Program |
Experimentally-Generated Ground Truth for Detecting Cell Types in an Image-Based Immunotherapy Screen |
|
Boyd, Joseph | MINES Paristech |
Gouveia, Zelia | Institut Curie |
Perez, Franck | Institut Curie |
Walter, Thomas | Institut Curie, Mines ParisTech |
Keywords: Machine learning, Microscopy - Light, Confocal, Fluorescence
Abstract: Chimeric antigen receptor (CAR) T-cell therapy is an immunotherapy whereby T lymphocytes are engineered to selectively attack cancer cells. Image-based screens of CAR-T cells, combining phase contrast and fluorescence microscopy, suffer from the gradual quenching of the fluorescent signal, making the reliable monitoring of cell populations across time-lapse imagery difficult. We propose to leverage the available fluorescent markers as an experimentally generated ground truth, without recourse to manual annotation. With some simple image processing, we are able to segment and assign cell type classes automatically. This ground truth is sufficient to train a neural object detection system from the phase contrast signal alone, potentially eliminating the need for the cumbersome fluorescent markers. This approach will underpin the development of cheap and robust microscope-based protocols to quantify CAR-T activity against tumor cells in vitro.
|
|
10:30-12:00, Paper MoAbPo-09.4 | Add to My Program |
Short Trajectory Segmentation with 1D Unet Framework: Application to Secretory Vesicle Dynamics |
|
Dmitrieva, Mariia | University of Oxford |
Lefebvre, Joël | University of Oxford |
Delas Penas, Kristofer | University of Oxford |
Zenner, Helen | University of Cambridge |
Richens, Jennifer | University of Cambridge |
St. Johnston, Daniel | University of Cambridge |
Rittscher, Jens | University of Oxford |
Keywords: Machine learning, Microscopy - Light, Confocal, Fluorescence, Quantification and estimation
Abstract: The study of protein transport in living cells requires automated techniques to capture and quantify the dynamics of proteins packaged into secretory vesicles. The movement of the vesicles is not consistent along the trajectory, so the quantitative study of their dynamics requires trajectory segmentation. This paper explores quantification of such vesicle dynamics and introduces a novel 1D U-Net based trajectory segmentation. Unlike existing mean squared displacement based methods, our proposed framework is not restricted by the requirement of long trajectories for effective segmentation. Moreover, as our approach provides segmentation within each sliding window, it can effectively capture even short segments. The approach is evaluated on data acquired by spinning disk microscopy imaging of protein trafficking in Drosophila epithelial cells. The extracted trajectories have lengths ranging from 5 (short tracks) to 135 (long tracks) points. The proposed approach achieves 77.7% accuracy for the trajectory segmentation.
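The mean squared displacement baseline the authors contrast with is a short computation over time lags; a minimal NumPy sketch on a hypothetical toy trajectory, not the paper's tracking data:

```python
import numpy as np

def msd(track):
    """Mean squared displacement of a (T, 2) trajectory, one value per lag."""
    track = np.asarray(track, dtype=float)
    T = len(track)
    return np.array([
        np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
        for lag in range(1, T)
    ])

# toy example: uniform drift of one unit per step gives MSD = lag**2
track = np.stack([np.arange(10.0), np.zeros(10)], axis=1)
print(msd(track)[:3])  # → [1. 4. 9.]
```

Diffusive motion instead gives an MSD that grows linearly with the lag, which is why the MSD slope is the usual basis for classifying motion regimes on sufficiently long tracks.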
|
|
MoAbPo-10 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Abstract Posters: Brain Connectivity and Functional Imaging |
|
|
|
10:30-12:00, Paper MoAbPo-10.1 | Add to My Program |
Role of Broca’s Area in Receptive Language Processing |
|
Mostame, Parham | University of Illinois at Urbana-Champaign |
Sadaghiani, Sepideh | University of Illinois at Urbana-Champaign |
Babajani-Feremi, Abbas | The University of Tennessee Health Science Center |
Keywords: Connectivity analysis, EEG & MEG
Abstract: It has been shown that classic expressive language areas are involved in passive sentence-level speech processing. However, this interaction between expressive and receptive language pathways during passive language perception might only exist because of the high degree of cognitive demand, which modulates large integrated brain networks such as memory. To determine whether such interaction exists only due to the task-evoked increase in brain network integration, or due to the nature of language processing itself, we studied modulations of power and functional connectivity (FC) in Broca's area during a passive word recognition task (WRT) using electrocorticographic (ECoG) recordings. Our results revealed no power modulation over Broca's area during the WRT, but we found FC modulation in this area with frequency-specific spatiotemporal profiles. We conclude that Broca's area is involved in passive single-word processing, implying the integration of expressive and receptive language pathways even at the word level of language processing.
|
|
10:30-12:00, Paper MoAbPo-10.3 | Add to My Program |
Localization of the Epileptogenic Zone Using Virtual Resection of Magnetoencephalography (MEG)-Based Brain Networks |
|
Pourmotabbed, Haatef | University of Memphis |
Wheless, James | University of Tennessee Health Science Center |
Babajani-Feremi, Abbas | The University of Tennessee Health Science Center |
Keywords: Connectivity analysis, EEG & MEG, Brain
Abstract: About two-thirds of patients with drug-resistant epilepsy
(DRE) achieve seizure freedom after resection of the
epileptogenic zone (EZ). Functional connectivity (FC)
analysis may be valuable for increasing the success rate. A
spectral graph measure based on virtual resection called
control centrality (CoC) has been used with ictal
electrocorticography to predict postoperative seizure
outcome. Our study investigated whether CoC can be used
with resting-state magnetoencephalography (rs-MEG) to
localize the EZ. The performance of CoC was compared to
that of another spectral measure called eigenvector
centrality (EVC). EVC had greater sensitivity while CoC had
greater specificity in localizing the EZ. This suggests
that these measures are complementary and may be valuable
for pre-surgical evaluations.
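Eigenvector centrality, one of the two spectral measures compared above, assigns each node the corresponding entry of the leading eigenvector of the connectivity matrix. A minimal power-iteration sketch (the toy adjacency matrix is illustrative, not data from the study):

```python
import numpy as np

def eigenvector_centrality(adj, iters=1000, tol=1e-10):
    """Power iteration on a symmetric connectivity/adjacency matrix."""
    n = adj.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        v_new = adj @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    return np.abs(v)

# Toy 4-node graph: node 0 is connected to every other node.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
evc = eigenvector_centrality(A)
print(evc.argmax())  # node 0 has the highest centrality
```

Control centrality, by contrast, scores each node by how removing it (a "virtual resection") changes a synchronizability measure of the remaining network, which is why the two measures can behave in a complementary way.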
|
|
10:30-12:00, Paper MoAbPo-10.4 | Add to My Program |
Alzheimer's Disease Prediction Using Deep Learning Based on Functional and Structural MRI |
|
Hojjati, Seyd Hani | Department of Pediatrics, University of Tennessee Health Science |
Amin-Naji, Mostafa | Babol Noshirvani University of Technology |
Babajani-Feremi, Abbas | The University of Tennessee Health Science Center |
Keywords: Classification, Functional imaging (e.g. fMRI), Magnetic resonance imaging (MRI)
Abstract: Alzheimer's disease (AD) is the most common form of dementia in elderly patients, with a significant impact on patients, families, and the public health system. In this study, we aimed to identify patients with mild cognitive impairment (MCI) who progress to AD (MCI converters, MCI-C), patients with MCI who do not progress to AD (MCI non-converters, MCI-NC), patients with AD, and healthy controls (HC). We proposed a convolutional neural network (CNN) based on various features extracted from resting-state functional MRI and structural MRI data to classify the four groups of subjects, and achieved an accuracy of 69.3%. Our results demonstrate the potential of a CNN that integrates functional and structural MRI for identifying the early stages of AD.
|
|
10:30-12:00, Paper MoAbPo-10.5 | Add to My Program |
Individualized Prediction across Brain Networks |
|
Hassanzadeh, Reihaneh | Georgia State University |
Calhoun, Vince D | Tri-Institutional Center for Translational Research in Neuroimag |
Keywords: Machine learning, Functional imaging (e.g. fMRI), Pattern recognition and classification
Abstract: Resting-state brain functional networks have recently been used extensively to predict different traits such as age and sex. The mainstream approach for such models has been to compute the correlation between time series derived from blood oxygen levels for all pairs of networks, and subsequently to select some of these correlations using a feature reduction technique. As such, a strong feature is a correlation computed for a pair of networks that is strongly linked to the target of interest. This suggests that the relationships between different networks can assist in the prediction of different clinical and demographic end points. In this study, we investigate the more general and critical question of how different pairs of networks can help characterize individual subjects, rather than the end points of interest, using ICA spatial maps. Furthermore, we scrutinize multiple pairs of networks to identify the most informative domains for discriminating subjects from one another, using novel deep learning-based techniques.
|
|
10:30-12:00, Paper MoAbPo-10.6 | Add to My Program |
A Unified Approach to Study the Brain Connectivity Frequency Profile |
|
Faghiri, Ashkan | Georgia Institute of Technology |
DeRamus, Thomas | Tri-Institutional Center for Translational Research in Neuroimag |
Agcaoglu, Oktay | Tri-Institutional Center for Translational Research in Neuroimag |
Calhoun, Vince D | Tri-Institutional Center for Translational Research in Neuroimag |
Keywords: Connectivity analysis, Functional imaging (e.g. fMRI), Quantification and estimation
Abstract: Functional network connectivity (FNC) has been the focus of many recent neuroimaging studies. Approaches to estimating FNC assume that it is either static or dynamic, with these two assumptions considered opposing views. Such a dichotomy does not allow us to see the whole picture, and provides no direct information about the inter-relationship between the two regimes. Here, we propose a novel approach to FNC analysis that estimates connectivity across any desired spectrum in one unifying framework (where the spectrum includes both static and dynamic estimates of connectivity). This approach can also explore the frequency profile of the activity time series that give rise to these connectivity patterns. We applied this approach to a resting-state functional magnetic resonance imaging (rsfMRI) dataset. Our results point to a rich interplay between activity in different frequency bands and the corresponding connectivity patterns across frequency bands.
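One common building block for frequency-resolved connectivity (not necessarily the authors' estimator) is to band-pass the activity time series and then correlate across regions; a sketch on an assumed toy dataset:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_connectivity(ts, fs, band):
    """Band-pass each region's time series, then correlate across regions.

    ts   : (n_regions, n_samples) activity time series
    fs   : sampling rate in Hz
    band : (low, high) edges of the frequency band in Hz
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ts, axis=1)
    return np.corrcoef(filtered)

rng = np.random.default_rng(0)
fs = 2.0                                # ~0.5 s sampling interval
t = np.arange(600) / fs
shared = np.sin(2 * np.pi * 0.05 * t)   # slow component shared by two regions
ts = np.vstack([shared + 0.3 * rng.standard_normal(t.size),
                shared + 0.3 * rng.standard_normal(t.size),
                rng.standard_normal(t.size)])
fnc = band_connectivity(ts, fs, (0.01, 0.1))
print(fnc[0, 1] > fnc[0, 2])            # shared-band pair is more connected
```

Sweeping `band` over a grid of frequency intervals yields a connectivity estimate per band, covering the slow (static-like) to fast (dynamic-like) ends of the spectrum.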
|
|
10:30-12:00, Paper MoAbPo-10.7 | Add to My Program |
Interpretable Multi-View Deep Networks Reveal Brain Cognitive Imaging-Genetic Associations |
|
Hu, Wenxing | Biomedical Engineering Department, Tulane University |
Calhoun, Vince D | Tri-Institutional Center for Translational Research in Neuroimag |
Wang, Yu-Ping | Tulane University |
Keywords: Multi-modality fusion, Functional imaging (e.g. fMRI), Brain
Abstract: Brain functional connectivity (FC) depicts the functional relations between different brain regions, and neuroimaging-based research has emerged to study it. As brain dysfunctions are genetically heritable, imaging-genetic integration may help uncover hidden biological mechanisms. The integration of imaging and genetic data, however, is challenging due to their high dimensionality. Moreover, deep networks are composed of a large number of layers, each containing complex nonlinear operations, which makes their results difficult to explain and to relate to biological mechanisms. In this work, we propose an interpretable convolutional network model to address these challenges. Our model, gradient class activation mapping guided convolutional collaborative learning (gCAM-CCL), integrates imaging-genetic data using interpretable deep networks. To make the model interpretable, gCAM-CCL incorporates the Grad-CAM and guided Grad-CAM approaches into two convolutional neural networks, which are later fused using a collaborative layer. The model can generate activation/contribution maps of the input images/genes using guided backpropagation and gradient-weighted feature-map combination. gCAM-CCL calculates the weight of each feature map using the gradient of the class label w.r.t. that feature map, and uses global average pooling to merge the gradients when combining feature maps. As a result, gCAM-CCL can not only identify discriminative brain regions and genes but also generate class-specific results, which further promotes biological-mechanism analysis. When applied to the Philadelphia Neurodevelopmental Cohort (PNC), our model shows that low-cognitive and high-cognitive subjects exhibit different FCs. High-cognitive subjects tend to have a small number of dominant FC connections that activate gCAM-CCL's attention, while low-cognitive subjects tend to have a large number of activated FCs. Further analysis of the identified brain FCs shows that the lingual gyrus is a significant hub for high-cognitive subjects, and gene enrichment analysis on the identified genes shows that the pathway "Regulation of neurotransmitter levels" is related to high cognitive ability, while low-cognitive subjects may have deficits in the "Midbrain development" and "Growth cone" pathways.
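The Grad-CAM weighting described in the abstract (gradients of the class score w.r.t. each feature map, merged by global average pooling, then a weighted feature-map sum) can be sketched as follows; the array shapes are illustrative:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map.

    feature_maps : (K, H, W) activations of the last conv layer
    gradients    : (K, H, W) d(class score)/d(feature map)
    """
    # Global average pooling of the gradients gives one weight per map.
    weights = gradients.mean(axis=(1, 2))          # (K,)
    cam = np.tensordot(weights, feature_maps, 1)   # weighted sum -> (H, W)
    return np.maximum(cam, 0)                      # ReLU keeps positive evidence

K, H, W = 8, 4, 4
rng = np.random.default_rng(1)
cam = grad_cam(rng.random((K, H, W)), rng.standard_normal((K, H, W)))
print(cam.shape)  # (4, 4)
```

Because the weights depend on the gradient of a *specific* class score, the resulting map is class-specific, which is what enables the per-group (low- vs. high-cognitive) analysis above.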
|
|
10:30-12:00, Paper MoAbPo-10.8 | Add to My Program |
Characterization of Resting Microvascular Dynamics in Skeletal Muscle Using Synchrosqueezing Transform of BOLD MRI and NIRS |
|
Yao, Jingting | Emory University |
Cowdrick, Kyle | Georgia Institute of Technology |
Brothers, Rowan | Emory University |
Anjum, Muhammad Ali Raza | Radiology and Imaging Sciences, Emory University |
Sprick, Justin | Emory University School of Medicine |
Singer, Adam | Emory University Hospital |
Brummer, Marijn | Emory University |
Park, Jeanie | Emory |
Risk, Benjamin | Emory University Rollins School of Public Health |
Buckley, Erin | Georgia Institute of Technology/Emory University |
Reiter, David | Emory University |
Keywords: Magnetic resonance imaging (MRI), Muscle, fMRI analysis
Abstract: Spatial and temporal regulation of microvascular blood flow impacts the delivery of oxygen and nutrients to skeletal muscle. We examine temporal patterns in blood oxygenation level-dependent (BOLD) MRI of the calf muscle at rest in healthy subjects and compare them with simultaneously acquired optical spectroscopy using a custom near-infrared system. Co-localized time series data are characterized using the wavelet synchrosqueezing transform, and preliminary analysis shows inter-subject variability in endothelial function as well as comparable energy distribution between modalities. This approach holds potential for detailed mapping of microvascular impairment.
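Synchrosqueezing sharpens a continuous wavelet transform (CWT) by reassigning its coefficients along the frequency axis; a sketch of just the first step, a complex Morlet CWT, on a toy signal (not the BOLD/NIRS data):

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.

    Returns an (n_freqs, n_samples) complex time-frequency representation,
    the usual starting point before synchrosqueezing reassigns energy.
    """
    n = signal.size
    out = np.empty((freqs.size, n), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 / (2 * np.pi * f)          # scale giving centre frequency f
        t = np.arange(-n // 2, n // 2) / fs
        wavelet = np.exp(1j * w0 * t / scale) * np.exp(-(t / scale) ** 2 / 2)
        wavelet /= np.sqrt(scale)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

fs = 10.0
t = np.arange(0, 60, 1 / fs)
sig = np.sin(2 * np.pi * 0.3 * t)             # 0.3 Hz oscillation
freqs = np.array([0.1, 0.3, 1.0])
tfr = np.abs(morlet_cwt(sig, fs, freqs))
print(freqs[tfr.mean(axis=1).argmax()])       # energy peaks near 0.3 Hz
```

The synchrosqueezing step would then move each coefficient to its instantaneous frequency (estimated from the phase derivative of `out`), concentrating the smeared ridges into sharp curves.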
|
|
MoAbPo-11 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Abstract Posters: Clinical Applications and Biomedical Modeling |
|
|
|
10:30-12:00, Paper MoAbPo-11.1 | Add to My Program |
Identifying Hard-Tissue Conditions from Dental Images Using Convolutional Neural Network |
|
Sun, Qing | University of Pennsylvania |
Huang, Zhi | Purdue University |
Kim, Mansu | University of Pennsylvania |
Hara, Anderson | Indiana University |
Maupome, Gerardo | Indiana University |
Shen, Li | University of Pennsylvania |
Keywords: Tooth, Classification, Computer-aided detection and diagnosis (CAD)
Abstract: Despite the enormous success of deep learning in various biomedical domains, its applications to dental hard tissue conditions are underexplored, in particular for analyzing photographic dental images. To bridge this gap, we propose a deep convolutional neural network framework to identify dental hard-tissue conditions from photographic tooth images, and show its superior performance over a few popular learning models.
|
|
10:30-12:00, Paper MoAbPo-11.2 | Add to My Program |
Evaluation and Correction of Effect of User Input on Accuracy of Smartphone-Based Flat Head Syndrome Measurements |
|
Aalamifar, Fereshteh | PediaMetrix |
Hezaveh, Seyed Hossein | PediaMetrix |
Linguraru, Marius George | Children's National Health System |
Seifabadi, Reza | Johns Hopkins University |
Keywords: Computer-aided detection and diagnosis (CAD), Image acquisition, Quantification and estimation
Abstract: We report the effect of user input on the accuracy of the
quantitative imaging algorithms that enable detection of
infant flat head syndrome from a photo acquired with a
smartphone. These algorithms that evaluate cranial shape
are affected by variations in camera angle and distance
relative to the head when images are acquired by novice
users. Our study demonstrates that cranial shape analysis
can be reliably performed with data from novice users.
|
|
10:30-12:00, Paper MoAbPo-11.3 | Add to My Program |
Two-Year Relapse-Free Survival Prediction of NSCLC Patients on Chest CT Images Using Tumor and Peritumoral Features |
|
Lee, Soomin | Seoul Women's University |
Jung, Julip | Seoul Women's University |
Hong, Helen | Seoul Women's University |
Kim, Bongseog | Veterans Health Service Medical Center |
Keywords: Computed tomography (CT), Lung, Pattern recognition and classification
Abstract: Tumor spread through the lung parenchyma is an important pattern of invasion, and this study evaluates whether CT radiomic features of peritumoral tissue in NSCLC patients can improve the performance of 2-year relapse-free survival prediction. Radiomic features are extracted from tumor and peritumoral regions, significant features are selected and used to classify patients into recurrence and non-recurrence groups, and survival probability is estimated using Kaplan-Meier curves. As a result, peritumoral features that account for invasion patterns around lung tumors on CT images improved the performance of 2-year relapse-free survival prediction.
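The Kaplan-Meier estimate used for the survival probabilities multiplies, at each observed relapse, the fraction of at-risk patients who survive it; a self-contained sketch with hypothetical follow-up data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient
    events : 1 if relapse was observed, 0 if the patient was censored
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, at_risk = 1.0, len(times)
    curve = []
    for t, e in zip(times, events):
        if e:  # a relapse at time t: multiply by the surviving fraction
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1  # this patient leaves the risk set either way
        curve.append((t, surv))
    return curve

# Months to relapse (1) or censoring (0) for six hypothetical patients.
curve = kaplan_meier([6, 13, 21, 30, 31, 37], [1, 1, 0, 1, 1, 0])
print(round(curve[-1][1], 3))
```

Censored patients reduce the risk set without forcing the curve down, which is exactly what distinguishes this estimate from a naive fraction of relapse-free patients.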
|
|
10:30-12:00, Paper MoAbPo-11.4 | Add to My Program |
DGNet: Diagnosis Generation Network from Medical Images |
|
Wu, Fan | Sun Yat-Sen University |
Peng, Linlin | Dalian Dermatosis Hospital, Dalian, P. R. China |
Lian, Zongkai | Sun Yat-Sen University |
Li, Mingxin | Dalian Dermatosis Hospital, Dalian, P. R. China |
Jiang, Shancheng | Sun Yat-Sen University |
Keywords: Histopathology imaging (e.g. whole slide imaging), Computer-aided detection and diagnosis (CAD), Skin
Abstract: Histopathological examination of skin lesions is considered the gold standard for correct diagnosis of skin disease, especially for the manifold types of skin cancer. Limited by scarce histopathological image sets, inconspicuous differences between appearances of histopathological features, and the weak predictive power of existing models, little research has focused on computer-aided diagnosis of skin diseases based on histopathological images. Although the rapid development of deep learning has shown remarkable advantages over traditional methods for medical image retrieval and mining, such models remain unable to interpret their predictions in visually and semantically meaningful ways. Motivated by the above, we put forward an attention-based model that automatically generates diagnostic reports from raw histopathological examination images, while also providing the final diagnostic result and visualizing attention to justify the model's diagnostic process. Our model includes an image model, a language model, and a separate attention module. The image model extracts multi-scale feature maps. The language model reads and explores the discriminative feature maps extracted by the image model to learn a direct mapping from caption words to image pixels. We propose an improved, trainable attention module that is separated from the language model and exposes the caption data to it; in connecting the attention module and the language model we apply a weak-touch method. In our experiments, we conducted model training, validation, and testing using a dataset of 1200 histopathological images covering 11 different skin diseases. These histopathological images and the related diagnostic reports were collected in collaboration with a number of pathologists during the past ten years. As the results show, our approach achieves better data-fitting ability and a faster convergence rate than a soft attention model. Furthermore, the comparison of evaluation scores indicates that our model achieves better language understanding.
|
|
10:30-12:00, Paper MoAbPo-11.6 | Add to My Program |
Modeling Mucociliary Transport in Porcine Airways |
|
Stewart, Carley | University of Iowa |
Hilkin, Brieanna | University of Iowa |
Gansemer, Nick | University of Iowa |
Abou Alaiwa, Mahmoud | University of Iowa |
Keywords: Lung, Shape analysis, Computed tomography (CT)
Abstract: Mucociliary transport (MCT) is an innate host defense mechanism of the airways. Any inhaled pathogens or particulate matter are trapped by the mucus and swept out of the airways by the action of cilia. Failure of this defense is implicated in many airway diseases, including cystic fibrosis (CF). CF is a genetic disease caused by mutation of the cystic fibrosis transmembrane conductance regulator (CFTR) gene, which encodes an anion channel permeable to both chloride and bicarbonate anions. Previously, we showed in a pig model of CF that tracheas are abnormally shaped and reduced in caliber compared to non-CF tracheas. These data were consistent with the tracheomalacia and dynamic collapse of the trachea seen on bronchoscopic evaluation of the airways of children with CF. While both mucus and functioning cilia are essential for effective MCT, it is not clear whether the shape of the airways plays any role. Here, we measured MCT in vivo and built a statistical shape model of both non-CF and CF pig airways. This analysis will allow us to map the spatial distribution of MCT within non-CF and CF airways, correlate the shape features of the airways with measures of MCT, and determine whether the shape of the airways has any impact on effective MCT.
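A statistical shape model of the kind described is commonly built by PCA on aligned landmark coordinates (a point-distribution model); a sketch with random stand-in landmarks, not the pig airway data:

```python
import numpy as np

def shape_model(landmarks, n_modes=2):
    """Point-distribution shape model: mean shape plus principal modes.

    landmarks : (n_subjects, n_points * dims) aligned landmark coordinates
    """
    mean = landmarks.mean(axis=0)
    centered = landmarks - mean
    # SVD of the centered data gives the principal modes of shape variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variance = s**2 / (len(landmarks) - 1)
    return mean, vt[:n_modes], variance[:n_modes]

rng = np.random.default_rng(2)
base = rng.random(20)                         # 10 2-D landmarks, flattened
shapes = base + 0.05 * rng.standard_normal((15, 20))
mean, modes, var = shape_model(shapes)
new_shape = mean + 1.5 * np.sqrt(var[0]) * modes[0]  # walk along mode 1
print(modes.shape)  # (2, 20)
```

Projecting each airway onto the modes gives a low-dimensional shape descriptor that can then be correlated against the MCT measurements.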
|
|
MoPaO1 Oral Session, Oakdale I-II |
Add to My Program |
MRI Reconstruction Methods |
|
|
Chair: Haldar, Justin | University of Southern California |
Co-Chair: Fessler, Jeff | Univ. Michigan |
|
14:30-14:45, Paper MoPaO1.1 | Add to My Program |
Substituting Gadolinium in Brain MRI Using DeepContrast |
|
Sun, Haoran | Columbia University |
Liu, Xueqing | Columbia University |
Feng, Xinyang | Columbia University |
Liu, Chen | Columbia University |
Zhu, Nanyan | Columbia University |
Gjerswold-Selleck, Sabrina Josefina | Columbia University |
Wei, Hong-Jian | Columbia University |
Upadhyayula, Pavan Shankar | Columbia University |
Mela, Angeliki | Columbia University |
Wu, Cheng-Chia | Columbia University |
Canoll, Peter D. | Columbia University |
Laine, Andrew F. | Columbia University |
Vaughan, John Thomas | Columbia University |
Small, Scott | Columbia University Medical Center |
Guo, Jia | Columbia University |
Keywords: Magnetic resonance imaging (MRI), Brain, Machine learning
Abstract: Cerebral blood volume (CBV) is a hemodynamic correlate of oxygen metabolism and reflects brain activity and function. High-resolution CBV maps can be generated using the steady-state gadolinium-enhanced MRI technique. This technique requires an intravenous injection of an exogenous gadolinium-based contrast agent (GBCA), and recent studies suggest that GBCAs can accumulate in the brain after frequent use. We hypothesize that endogenous sources of contrast might exist within the most conventional and commonly acquired structural MRI, potentially obviating the need for exogenous contrast. Here, we test this hypothesis by developing and optimizing a deep learning algorithm, which we call DeepContrast, in mice. We find that DeepContrast performs as well as exogenous GBCA in mapping CBV of normal brain tissue and enhancing glioblastoma. Together, these studies validate our hypothesis that a deep learning approach can potentially replace the need for GBCAs in brain MRI.
|
|
14:45-15:00, Paper MoPaO1.2 | Add to My Program |
Model-Based Deep Learning for Reconstruction of Joint K-Q Under-Sampled High Resolution Diffusion MRI |
|
Mani, Merry | University of Iowa |
Aggarwal, Hemant Kumar | University of Iowa |
Ghosh, Sanjay | University of Iowa |
Jacob, Mathews | University of Iowa |
Keywords: Image reconstruction - analytical & iterative methods, Machine learning, Image acquisition
Abstract: We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high-resolution imaging. The proposed reconstruction jointly recovers all the diffusion-weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. Using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signals that span the entire microstructure parameter space. A neural network was trained in an unsupervised manner using a convolutional autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction that unrolls the iterations, similar to the recently proposed MoDL framework. Specifically, we show that the autoencoder provides a strong denoising prior for recovering the q-space signal. Reconstruction results on a simulated brain dataset demonstrate the high acceleration capability of the proposed method.
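An unrolled model-based reconstruction alternates data-consistency and denoising steps; a much-simplified 1D sketch in the same spirit (the paper's denoiser is a pretrained autoencoder, here replaced by a moving-average smoother, and the forward model is a toy sampling mask):

```python
import numpy as np

def unrolled_recon(A, At, b, denoiser, n_unrolls=10, lam=1.0, step=0.5):
    """Unrolled model-based reconstruction with a plug-in denoising prior.

    Takes gradient steps on ||Ax - b||^2 + lam * ||x - denoiser(x)||^2,
    in the spirit of MoDL-style unrolling (denoiser is any callable).
    """
    x = At(b)
    for _ in range(n_unrolls):
        z = denoiser(x)                        # prior / denoising step
        grad = At(A(x) - b) + lam * (x - z)    # data consistency + prior
        x = x - step * grad
    return x

# Toy problem: random sampling mask as forward model, smoothing denoiser.
rng = np.random.default_rng(3)
mask = rng.random(64) < 0.5
A = lambda x: x * mask
At = A                                         # masking is self-adjoint
truth = np.sin(np.linspace(0, 4 * np.pi, 64))
b = A(truth)
smooth = lambda x: np.convolve(x, np.ones(5) / 5, mode="same")
x = unrolled_recon(A, At, b, smooth, n_unrolls=50)
print(np.mean((x - truth) ** 2) < np.mean((At(b) - truth) ** 2))
```

In the actual method the unrolled iterations are trained end-to-end, and the forward model includes coil sensitivities and the joint k-q sampling.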
|
|
15:00-15:15, Paper MoPaO1.3 | Add to My Program |
Deep Learning Fast MRI Using Channel Attention in Magnitude Domain |
|
Lee, Joonhyung | KAIST |
Kim, Hyunjong | KAIST |
Chung, HyungJin | KAIST |
Ye, Jong Chul | Korea Advanced Inst of Science & Tech |
Keywords: Magnetic resonance imaging (MRI), Machine learning, Compressive sensing & sampling
Abstract: Magnetic resonance imaging (MRI) acquisition is an inherently slow process whose acceleration has been the subject of much investigation. In recent years, the explosive advance of deep learning techniques for computer vision and image reconstruction has led to the investigation of deep neural networks for the reconstruction of MRI with under-sampled k-space. In this work, we propose a new image domain architecture that directly produces a sum-of-squares image from under-sampled multi-coil MRI acquisition. This model, called BarbellNet, is a fully convolutional neural network architecture that utilizes the channel attention mechanism using the residual channel attention block (RCAB). Through extensive experiments with the fastMRI data set, we confirm the efficacy of BarbellNet.
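A residual channel attention block (RCAB) rescales each channel by a weight computed from globally pooled features; a numpy sketch of the squeeze-and-excitation style gating (the weights and the "conv" stand-in are random placeholders, not BarbellNet's trained parameters):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_attention(x, w_down, w_up):
    """Channel attention: squeeze (global pool), excite (bottleneck MLP), scale.

    x : (C, H, W) feature maps; w_down : (C//r, C); w_up : (C, C//r)
    """
    squeeze = x.mean(axis=(1, 2))                             # (C,)
    excite = sigmoid(w_up @ np.maximum(w_down @ squeeze, 0))  # (C,) gates
    return x * excite[:, None, None]                          # rescale channels

def rcab(x, conv, w_down, w_up):
    """Residual channel attention block: x + CA(conv(x))."""
    return x + channel_attention(conv(x), w_down, w_up)

rng = np.random.default_rng(4)
C, H, W, r = 8, 6, 6, 4
x = rng.standard_normal((C, H, W))
w_down = rng.standard_normal((C // r, C))
w_up = rng.standard_normal((C, C // r))
out = rcab(x, lambda f: 0.1 * f, w_down, w_up)  # scaled-identity "conv" stand-in
print(out.shape)  # (8, 6, 6)
```

The residual connection lets the block learn only a channel-reweighted correction, which is what makes deep stacks of such blocks trainable.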
|
|
15:15-15:30, Paper MoPaO1.4 | Add to My Program |
Self-Supervised Physics-Based Deep Learning MRI Reconstruction without Fully-Sampled Data |
|
Yaman, Burhaneddin | University of Minnesota |
Hosseini, Seyed Amir Hossein | University of Minnesota |
Moeller, Steen | University of Minnesota |
Ellermann, Jutta | University of Minnesota |
Ugurbil, Kamil | University of Minnesota |
Akcakaya, Mehmet | University of Minnesota |
Keywords: Machine learning, Magnetic resonance imaging (MRI), Computational Imaging
Abstract: Deep learning (DL) has emerged as a tool for improving accelerated MRI reconstruction. A common strategy among DL methods is the physics-based approach, where a regularized iterative algorithm alternating between data consistency and a regularizer is unrolled for a finite number of iterations. This unrolled network is then trained end-to-end in a supervised manner, using fully-sampled data as ground truth for the network output. However, in a number of scenarios, it is difficult to obtain fully-sampled datasets, due to physiological constraints such as organ motion or physical constraints such as signal decay. In this work, we tackle this issue and propose a self-supervised learning strategy that enables physics-based DL reconstruction without fully-sampled data. Our approach is to divide the acquired sub-sampled points for each scan into two sets, one of which is used to enforce data consistency in the unrolled network and the other to define the loss for training. Results show that the proposed self-supervised learning method successfully reconstructs images without fully-sampled data, performing similarly to the supervised approach that is trained with fully-sampled references. This has implications for physics-based inverse problem approaches for other settings, where fully-sampled data is not available or possible to acquire.
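The core data-splitting idea, dividing the acquired k-space points into a data-consistency set and a loss set, can be sketched as follows (the split fraction is an assumption, not the paper's value):

```python
import numpy as np

def split_mask(sampling_mask, rho=0.4, seed=0):
    """Split acquired k-space locations into a consistency set and a loss set.

    sampling_mask : boolean array, True where k-space was acquired
    rho           : fraction of acquired points held out for the training loss
    """
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(sampling_mask)
    held_out = rng.choice(acquired, size=int(rho * acquired.size), replace=False)
    loss_mask = np.zeros_like(sampling_mask)
    loss_mask[np.unravel_index(held_out, sampling_mask.shape)] = True
    consistency_mask = sampling_mask & ~loss_mask
    return consistency_mask, loss_mask

mask = np.random.default_rng(5).random((32, 32)) < 0.3
dc, loss = split_mask(mask)
print((dc | loss).sum() == mask.sum(), (dc & loss).any())  # True False
```

During training, the unrolled network only ever sees `dc`; the loss compares its k-space prediction against the measurements in `loss`, so no fully-sampled reference is needed.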
|
|
15:30-15:45, Paper MoPaO1.5 | Add to My Program |
Joint Optimization of Sampling Pattern and Priors in Model-Based Deep Learning |
|
Aggarwal, Hemant Kumar | University of Iowa |
Jacob, Mathews | University of Iowa |
Keywords: Image reconstruction - analytical & iterative methods, Machine learning, Magnetic resonance imaging (MRI)
Abstract: Deep learning methods are emerging as powerful alternatives to compressed sensing MRI for recovering images from highly undersampled data. Unlike in compressed sensing, the image redundancies captured by these models are not well understood. This lack of theoretical understanding also makes it challenging to choose the sampling pattern that would yield the best possible recovery. To overcome these challenges, we propose to jointly optimize the sampling pattern and the parameters of the reconstruction block in a model-based deep learning framework. We show that joint optimization with the model-based strategy results in improved performance over direct-inversion CNN schemes, due to better decoupling of the effects of sampling and image properties. The quantitative and qualitative results confirm the benefits of joint optimization with the model-based scheme over the direct inversion strategy.
|
|
15:45-16:00, Paper MoPaO1.6 | Add to My Program |
Adaptive Locally Low Rank and Sparsity Constrained Reconstruction for Accelerated Dynamic MRI |
|
Kafali, Sevgi Gokce | University of California Los Angeles, Los Angeles, CA |
Shih, Shu-Fu | University of California, Los Angeles |
Ruan, Dan | University of California Los Angeles |
Wu, Holden | University of California, Los Angeles |
Keywords: Magnetic resonance imaging (MRI), Image reconstruction - analytical & iterative methods, Image enhancement/restoration(noise and artifact reduction)
Abstract: Globally low rank and sparsity (GLRS) constrained techniques perform well under high acceleration factors, but may blur spatial details. Locally low rank and sparsity (LLRS) constrained techniques preserve the spatial details better, but are sensitive to the size of the local patches. We propose a novel adaptive locally low rank and sparsity (ALLRS) constrained reconstruction for accelerated dynamic MRI that preserves the spatial details in heterogeneous regions, and smooths preferentially in homogeneous regions by adapting the local patch size to the level of spatial details. Results from in vivo dynamic cardiac and liver MRI demonstrate that ALLRS achieves improved sharpness as well as peak signal-to-noise ratio and visual information fidelity index with suppressed under-sampling artifacts for up to 16-fold undersampling.
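A locally low rank constraint is typically enforced by soft-thresholding the singular values of each patch's Casorati matrix; a fixed-patch-size sketch (the proposed ALLRS method would additionally adapt the patch size to the local level of spatial detail):

```python
import numpy as np

def llr_step(frames, patch=8, thresh=0.1):
    """Locally low-rank step: patch-wise singular-value soft-thresholding.

    frames : (T, H, W) dynamic image series; each patch's Casorati matrix
    (time x pixels) is thresholded independently, which is what makes the
    constraint *local*.
    """
    T, H, W = frames.shape
    out = np.empty_like(frames)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            block = frames[:, i:i + patch, j:j + patch]
            casorati = block.reshape(T, -1)
            u, s, vt = np.linalg.svd(casorati, full_matrices=False)
            s = np.maximum(s - thresh * s[0], 0.0)  # shrink small singular values
            out[:, i:i + patch, j:j + patch] = ((u * s) @ vt).reshape(block.shape)
    return out

rng = np.random.default_rng(6)
series = np.ones((10, 16, 16)) + 0.1 * rng.standard_normal((10, 16, 16))
den = llr_step(series)
print(den.shape)  # (10, 16, 16)
```

Small patches preserve spatial detail but risk over-fitting noise, while large patches smooth more aggressively; adapting `patch` per region is the trade-off the abstract targets.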
|
|
MoPaO2 Oral Session, Oakdale III |
Add to My Program |
Segmentation Applications and Methods II |
|
|
Chair: Santos, Andres | Universidad Politécnica De Madrid |
Co-Chair: Srinivasa, Gowri | PES Institute of Technology, Bangalore South Campus |
|
14:30-14:45, Paper MoPaO2.1 | Add to My Program |
A 3D CNN with a Learnable Adaptive Shape Prior for Accurate Segmentation of Bladder Wall Using MR Images |
|
Hammouda, Kamal | Bioengineering Department, University of Louisville |
Khalifa, Fahmi | University of Louisville |
Soliman, Ahmed | University of Louisville |
Abdeltawab, Hisham | Bioengineering Department, University of Louisville |
Ghazal, Mohammed | Abu Dhabi University |
Abou El-Ghar, Mohamed | University of Mansoura |
Haddad, Ahmed | University of Louisville |
Darwish, Hannan | Mansoura University |
Keynton, Robert | Bioengineering Department, University of Louisville |
El-baz, Ayman | University of Louisville |
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Abdomen
Abstract: A 3D deep learning-based convolutional neural network (CNN) is developed for accurate segmentation of the pathological bladder (both wall border and pathology) using T2-weighted magnetic resonance imaging (T2W-MRI). Our system starts with a preprocessing step for data normalization to a unique space and extraction of a region of interest (ROI). The major stage utilizes a 3D CNN for pathological bladder segmentation, which contains a network, CNN1, that aims to segment the bladder wall (BW) together with pathology. However, due to the similar visual appearance of the BW and pathology, CNN1 cannot separate them. Thus, we developed another network (CNN2) with an additional pathway to extract the BW only. The second pathway in CNN2 is fed with a 3D learnable adaptive shape prior model. To remove noisy and scattered predictions, the networks’ soft outputs are refined using a fully connected conditional random field. Our framework achieved accurate segmentation results for the BW and tumor, as documented by the Dice similarity coefficient and Hausdorff distance. Moreover, comparative results against another segmentation approach document the superiority of our framework in providing accurate results for pathological BW segmentation.
|
|
14:45-15:00, Paper MoPaO2.2 | Add to My Program |
Center-Sensitive and Boundary-Aware Tooth Instance Segmentation and Classification from Cone-Beam CT |
|
Wu, Xiyi | Shanghai Jiao Tong University |
Chen, Huai | Shanghai Jiao Tong University |
Huang, Yi-Jie | Shanghai Jiao Tong University |
Guo, Hua Yan | Shanghai East Hospital Affiliated to Tongji University |
Qiu, Tian Tian | Entrusted Dental Clinic |
Wang, Lisheng | Shanghai Jiao Tong University |
Keywords: Tooth, Image segmentation, Computed tomography (CT)
Abstract: Tooth instance segmentation provides important assistance for computer-aided orthodontic treatment. Many previous studies on this problem have had limited success in distinguishing adjacent teeth and obtaining accurate tooth boundaries. To address this challenging task, we present a novel method for tooth instance segmentation and classification from cone-beam CT (CBCT) images. The core of our method is a two-level hierarchical deep neural network. We first embed a center-sensitive mechanism with a global-stage heatmap to ensure accurate tooth centers and guide the localization of tooth instances. Then, in the local stage, DenseASPP-UNet is proposed for the fine segmentation and classification of individual teeth. Further, to improve the accuracy of tooth segmentation boundaries and refine the boundaries of overlapping teeth, a boundary-aware Dice loss and a novel label optimization are also applied in our method. Comparative experiments show that the proposed framework exhibits high segmentation performance and outperforms state-of-the-art methods.
|
|
15:00-15:15, Paper MoPaO2.3 | Add to My Program |
Mask Mining for Improved Liver Lesion Segmentation |
|
Roth, Karsten | HCI/IWR Heidelberg |
Hesser, Juergen | Heidelberg University |
Konopczynski, Tomasz | Heidelberg University |
Keywords: Image segmentation, Machine learning, Liver
Abstract: We propose a novel procedure to improve liver and lesion segmentation from CT scans for U-Net based models. Our method extends standard segmentation pipelines to focus on higher target recall or on the reduction of noisy false-positive predictions, boosting overall segmentation performance. To achieve this, we include segmentation errors in a new learning process appended to the main training setup, allowing the model to find features that explain away previous errors. We evaluate this on semantically distinct architectures: cascaded two- and three-dimensional as well as combined learning setups for multitask segmentation. On liver and lesion segmentation data from the Liver Tumor Segmentation challenge (LiTS), we observe an increase in Dice score of up to 2 points.
|
|
15:15-15:30, Paper MoPaO2.4 | Add to My Program |
J Regularization Improves Imbalanced Multiclass Segmentation |
|
Guerrero Pena, Fidel A. | California Institute of Technology |
Fernandez, Pedro D. Marrero | Universidade Federal De Pernambuco |
Tarr, Paul | California Institute of Technology |
Ing Ren, Tsang | Universidade Federal De Pernambuco |
Meyerowitz, Elliot M. | California Institute of Technology |
Cunha, Alexandre | California Institute of Technology |
Keywords: Image segmentation, Machine learning, Cells & molecules
Abstract: We propose a new loss formulation to further advance the multiclass segmentation of cluttered cells under weakly supervised conditions. By adding a Youden’s J statistic regularization term to the cross-entropy loss, we improve the separation of touching and adjacent cells, obtaining sharp segmentation boundaries with high adequacy. This regularization intrinsically supports class imbalance, thus eliminating the need for explicit weights to balance training. Simulations demonstrate this capability and show how the regularization leads to correct results by helping to advance the optimization when cross entropy stagnates. We build upon our previous work on multiclass segmentation by adding yet another training class representing gaps between adjacent cells. This addition helps the classifier identify narrow gaps as background rather than as touching regions. We present results of our methods for 2D and 3D images, from bright-field images to confocal stacks containing different types of cells, and we show that they accurately segment individual cells after training with a limited number of images, some of which are poorly annotated.
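Youden’s J statistic is sensitivity + specificity - 1; because it averages rates rather than counts, it is naturally insensitive to class imbalance. A soft (differentiable) per-class version of the kind that could regularize cross entropy (a sketch, not the authors' exact loss):

```python
import numpy as np

def soft_youden_j(probs, targets, eps=1e-7):
    """Differentiable Youden's J statistic (sensitivity + specificity - 1).

    probs, targets : arrays in [0, 1] for one class
                     (soft predictions and 0/1 labels)
    A regularizer of the form (1 - J) can be added to cross entropy.
    """
    tp = (probs * targets).sum()
    fn = ((1 - probs) * targets).sum()
    tn = ((1 - probs) * (1 - targets)).sum()
    fp = (probs * (1 - targets)).sum()
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    return sensitivity + specificity - 1

y = np.array([1, 1, 0, 0], dtype=float)
print(round(soft_youden_j(y, y), 3))      # perfect prediction -> J = 1.0
print(round(soft_youden_j(1 - y, y), 3))  # fully wrong -> J = -1.0
```

Because sensitivity and specificity are each normalized by their own class size, a rare "touching" class contributes to J on equal footing with the dominant background class.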
|
|
15:30-15:45, Paper MoPaO2.5 | Add to My Program |
Evaluating Multi-Class Segmentation Errors with Anatomical Priors |
|
Wang, Xiaoqian | Peking University |
Zhang, Qianyi | DeepWise |
Zhou, Zhen | Deepwise Inc |
Liu, Feng | Deepwise Healthcare |
Yu, Yizhou | Deepwise Healthcare |
Wang, Yizhou | Peking University |
Keywords: Modeling - Anatomical, physiological and pathological, Image segmentation, Computer-aided detection and diagnosis (CAD)
Abstract: Acquiring large-scale annotations is challenging in medical image analysis because of the limited number of qualified annotators. Thus, it is essential to achieve high performance using a small amount of labeled data, where the key lies in mining the most informative samples to annotate. In this paper, we propose two effective metrics which leverage anatomical priors to evaluate multi-class segmentation methods without ground truth (GT). Together with our smooth margin loss, these metrics can help to mine the most informative samples for training. In experiments, we first demonstrate that the proposed metrics can clearly distinguish samples with different degrees of error in the task of pulmonary lobe segmentation. We then show that our metrics, synergized with the proposed loss function, reach a Pearson correlation coefficient (PCC) of 0.7447 with mean surface distance (MSD) and -0.5976 with Dice score, which implies the proposed metrics can be used to evaluate segmentation methods. Finally, we utilize our metrics as sample selection criteria in an active learning setting, and show that the model trained with our anatomy-based query achieves performance comparable to models trained with random query and uncertainty-based query using more annotated training data.
|
|
15:45-16:00, Paper MoPaO2.6 | Add to My Program |
Learning a Loss Function for Segmentation: A Feasibility Study |
|
Moltz, Jan Hendrik | Fraunhofer Institute for Digital Medicine MEVIS |
Hänsch, Annika | Fraunhofer Institute for Digital Medicine MEVIS |
Lassen-Schmidt, Bianca | Fraunhofer Institute for Digital Medicine MEVIS |
Haas, Benjamin | Varian Medical Systems Imaging Laboratory GmbH |
Genghi, Angelo | Varian Medical Systems Imaging Laboratory GmbH |
Schreier, Jan | Varian Medical Systems Finland Oy |
Morgas, Tomasz | Varian Medical Systems |
Klein, Jan | Fraunhofer Institute for Digital Medicine MEVIS |
Keywords: Image segmentation, Machine learning, Computed tomography (CT)
Abstract: When training neural networks for segmentation, the Dice loss is typically used. Alternative loss functions could help the networks achieve results with higher user acceptance and lower correction effort, but they cannot be used directly if they are not differentiable. As a solution, we propose to train a regression network to approximate the loss function and combine it with a U-Net to compute the loss during segmentation training. As an example, we introduce the contour Dice coefficient (CDC) that estimates the fraction of contour length that needs correction. Applied to CT bladder segmentation, we show that a weighted combination of Dice and CDC loss improves segmentations compared to using only Dice loss, with regard to both CDC and other metrics.
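As a hypothetical illustration of the paper's weighted-combination idea: the trained CDC regression network is not reproduced here, so `surrogate_cdc` below is a stand-in callable, and `soft_dice` is the standard differentiable Dice coefficient.

```python
import numpy as np

def soft_dice(pred, target, eps=1e-7):
    """Differentiable Dice coefficient for soft predictions in [0, 1]."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def combined_loss(pred, target, surrogate_cdc, w=0.5):
    """Weighted mix of Dice loss and a learned surrogate of the contour Dice
    coefficient (CDC). surrogate_cdc stands in for the regression network that
    approximates the non-differentiable CDC."""
    dice_loss = 1.0 - soft_dice(pred, target)
    return (1.0 - w) * dice_loss + w * surrogate_cdc(pred, target)
```

Because the surrogate is itself a differentiable network, gradients can flow through both terms during segmentation training, which is what makes the otherwise non-differentiable CDC usable as a loss.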
|
|
MoPaO3 Oral Session, Oakdale IV-V |
Add to My Program |
Detection, Tracking and Motion Estimation |
|
|
Chair: Larrabide, Ignacio | Pladema-CONICET, UNICEN |
Co-Chair: Paul-Gilloteaux, Perrine | CNRS |
|
14:30-14:45, Paper MoPaO3.1 | Add to My Program |
FRR-Net: Fast Recurrent Residual Networks for Real-Time Catheter Segmentation and Tracking in Endovascular Aneurysm Repair |
|
Zhou, Yan-Jie | Institute of Automation, Chinese Academy of Sciences |
Xie, Xiao-Liang | Chinese Academy of Sciences |
Hou, Zeng-Guang | Institute of Automation, Chinese Academy of Sciences |
Bian, Gui-Bin | Institute of Automation, Chinese Academy of Sciences |
Liu, Shiqi | The State Key Laboratory of Management and Control for Complex S |
Zhou, Xiao-Hu | Institute of Automation, Chinese Academy of Sciences |
Keywords: Image segmentation, X-ray imaging
Abstract: For endovascular aneurysm repair (EVAR), real-time and accurate segmentation and tracking of interventional instruments can aid in reducing radiation exposure, contrast agent use and procedure time. Nevertheless, this task often comes with the challenge of slender, deformable structures with low contrast in noisy X-ray fluoroscopy. In this paper, a novel efficient network architecture, termed FRR-Net, is proposed for real-time catheter segmentation and tracking. The novelties of FRR-Net lie in the manner in which recurrent convolutional layers ensure better feature representation and in pre-trained lightweight components that improve model processing speed while preserving performance. Quantitative and qualitative evaluation on images from 175 X-ray sequences of 30 patients demonstrates that the proposed approach significantly outperforms simpler baselines as well as the best previously published result for this task, achieving state-of-the-art performance.
|
|
14:45-15:00, Paper MoPaO3.2 | Add to My Program |
3D Optical Flow Estimation Combining 3D Census Signature and Total Variation Regularization |
|
Manandhar, Sandeep | Inria |
Bouthemy, Patrick | Inria |
Welf, Erik | UT Southwestern Medical Center |
Roudot, Philippe | UT Southwestern Medical Center |
Kervrann, Charles | Inria |
Keywords: Quantification and estimation, Single cell & molecule detection, Microscopy - Light, Confocal, Fluorescence
Abstract: We present a 3D variational optical flow method for fluorescence image sequences which preserves discontinuities in the computed flow field. We propose to minimize an energy function composed of a linearized 3D Census signature-based data term and a total variation (TV) regularizer. To demonstrate the efficiency of our method, we have applied it to real sequences depicting collagen networks, where the motion field is expected to be discontinuous. We also compare our results favorably with two other motion estimation methods.
|
|
15:00-15:15, Paper MoPaO3.3 | Add to My Program |
Tracking of Particles in Fluorescence Microscopy Images Using a Spatial Distance Model for Brownian Motion |
|
Spilger, Roman | Heidelberg University |
Hellgoth, Jonas | Heidelberg University, BioQuant, IPMB |
Lee, Ji Young | University of Heidelberg |
Hänselmann, Siegfried | Heidelberg University |
Herten, Dirk-Peter | Heidelberg University |
Bartenschlager, Ralf | University of Heidelberg |
Rohr, Karl | Heidelberg University, DKFZ Heidelberg |
Keywords: Tracking (time series analysis), Microscopy - Light, Confocal, Fluorescence
Abstract: Automatic tracking of particles in fluorescence microscopy images is an important task for quantifying the dynamic behavior of subcellular and virus structures. We present a novel iterative approach for tracking multiple particles in microscopy data based on a spatial distance model derived under Brownian motion. Our approach exploits the information that the most likely object position at the next time point lies at a certain distance from the current position. Information from all particles in a temporal image sequence is combined, and all motion-specific parameters are automatically estimated. Experiments using data from the Particle Tracking Challenge as well as real live-cell microscopy data displaying hepatocyte growth factor receptors and virus structures show that our approach outperforms previous methods.
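The abstract's observation that the most likely next position lies at a certain distance from the current one follows from the step length of 2D Brownian motion being Rayleigh distributed: the density peaks at r = σ, not at r = 0. A minimal sketch of using that density to score candidate detections (function names are illustrative, not from the paper):

```python
import math

def rayleigh_pdf(r, sigma):
    """Step-length density of 2D Brownian motion with scale sigma.

    The density peaks at r = sigma, so a small but nonzero displacement is
    more likely than no displacement at all."""
    return (r / sigma ** 2) * math.exp(-r ** 2 / (2.0 * sigma ** 2))

def most_likely_candidate(pos, candidates, sigma):
    """Associate a particle at pos with the candidate detection whose distance
    is most probable under the Brownian step-length model."""
    return max(candidates, key=lambda c: rayleigh_pdf(math.dist(pos, c), sigma))
```

In this toy version σ is fixed; the paper instead estimates all motion-specific parameters automatically from the combined particle information.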
|
|
15:15-15:30, Paper MoPaO3.4 | Add to My Program |
Nuclei Segmentation Using Mixed Points and Masks Selected from Uncertainty |
|
Qu, Hui | Rutgers University |
Yi, Jingru | Rutgers University |
Huang, Qiaoying | Rutgers University |
Wu, Pengxiang | Rutgers University |
Metaxas, Dimitris | Rutgers University |
Keywords: Image segmentation, Histopathology imaging (e.g. whole slide imaging), Cells & molecules
Abstract: Weakly supervised learning has drawn much attention as a way to mitigate the manual effort of annotating pixel-level labels for segmentation tasks. In nuclei segmentation, point annotation has been successfully used for training. However, points lack shape information, so the segmentation of nuclei with non-uniform color is unsatisfactory. In this paper, we propose a framework for weakly supervised nuclei segmentation using mixed points and masks annotation. To save extra annotation effort, we select typical nuclei for mask annotation from an uncertainty map. Using Bayesian deep learning tools, we first train a model with point annotations to predict the uncertainty. We then utilize the uncertainty map to automatically select representative hard nuclei for mask annotation. The selected nuclear masks are combined with points to train a better segmentation model. Experimental results on two nuclei segmentation datasets prove the effectiveness of our method. The code is publicly available.
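One common way to realize this kind of uncertainty-driven selection is to rank unlabeled samples by predictive entropy; the sketch below assumes that simplification (the paper's Bayesian uncertainty estimate may be computed differently, e.g. from Monte Carlo dropout):

```python
import math

def predictive_entropy(probs, eps=1e-12):
    """Entropy of a categorical predictive distribution; higher = more uncertain."""
    return -sum(p * math.log(p + eps) for p in probs)

def select_hardest(sample_probs, k):
    """Return the indices of the k most uncertain samples, i.e. the
    candidates to send for full mask annotation."""
    ranked = sorted(range(len(sample_probs)),
                    key=lambda i: predictive_entropy(sample_probs[i]),
                    reverse=True)
    return ranked[:k]
```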
|
|
15:30-15:45, Paper MoPaO3.5 | Add to My Program |
Deep Learning Particle Detection for Probabilistic Tracking in Fluorescence Microscopy Images |
|
Ritter, Christian | University of Heidelberg, DKFZ Heidelberg |
Wollmann, Thomas | University of Heidelberg, DKFZ Heidelberg |
Lee, Ji Young | University of Heidelberg |
Bartenschlager, Ralf | University of Heidelberg |
Rohr, Karl | Heidelberg University, DKFZ Heidelberg |
Keywords: Microscopy - Light, Confocal, Fluorescence, Tracking (time series analysis), Cells & molecules
Abstract: Automatic tracking of subcellular structures displayed as small spots in fluorescence microscopy images is important to quantify biological processes. We have developed a novel approach for tracking multiple fluorescent particles based on deep learning and Bayesian sequential estimation. Our approach combines a convolutional neural network for particle detection with probabilistic data association. We identified data association parameters that depend on the detection result, and automatically determine these parameters by hyperparameter optimization. We evaluated our approach based on image sequences of the Particle Tracking Challenge as well as live cell fluorescence microscopy data of hepatitis C virus proteins. It turned out that the new approach generally outperforms existing methods.
|
|
15:45-16:00, Paper MoPaO3.6 | Add to My Program |
Volumetric Landmark Detection with a Multi-Scale Shift Equivariant Neural Network |
|
Ma, Tianyu | Cornell University |
Gupta, Ajay | Weill Cornell Medical College, NewYork-Presbyterian Hospital |
Sabuncu, Mert | Cornell University |
Keywords: Machine learning, Computed tomography (CT), Vessels
Abstract: Deep neural networks yield promising results in a wide range of computer vision applications, including landmark detection. A major challenge for accurate anatomical landmark detection in volumetric images such as clinical CT scans is that large-scale data often constrain the capacity of the employed neural network architecture due to GPU memory limitations, which in turn can limit the precision of the output. We propose a multi-scale, end-to-end deep learning method that achieves fast and memory-efficient landmark detection in 3D images. Our architecture consists of blocks of shift-equivariant networks, each of which performs landmark detection at a different spatial scale. These blocks are connected from coarse to fine scale with differentiable resampling layers, so that all levels can be trained together. We also present a noise injection strategy that increases the robustness of the model and allows us to quantify uncertainty at test time. We evaluate our method for carotid artery bifurcation detection on 263 CT volumes and achieve better than state-of-the-art accuracy with a mean Euclidean distance error of 2.81 mm.
|
|
MoPbPo Poster Session, Oakdale Foyer Coral Foyer |
|
Monday Poster PM |
|
|
|
16:00-17:30, Subsession MoPbPo-01, Oakdale Foyer Coral Foyer | |
Brain Connectivity II Poster Session, 7 papers |
|
16:00-17:30, Subsession MoPbPo-02, Oakdale Foyer Coral Foyer | |
FMRI Analysis I Poster Session, 8 papers |
|
16:00-17:30, Subsession MoPbPo-03, Oakdale Foyer Coral Foyer | |
MRI Reconstruction Methods I Poster Session, 8 papers |
|
16:00-17:30, Subsession MoPbPo-04, Oakdale Foyer Coral Foyer | |
Computer-Aided Detection and Diagnosis I Poster Session, 8 papers |
|
16:00-17:30, Subsession MoPbPo-05, Oakdale Foyer Coral Foyer | |
DL/CNN Methods and Models I Poster Session, 8 papers |
|
16:00-17:30, Subsession MoPbPo-06, Oakdale Foyer Coral Foyer | |
Segmentation – Methods & Applications II Poster Session, 9 papers |
|
16:00-17:30, Subsession MoPbPo-07, Oakdale Foyer Coral Foyer | |
Electron Microscopy Poster Session, 4 papers |
|
16:00-17:30, Subsession MoPbPo-08, Oakdale Foyer Coral Foyer | |
Eye and Retinal Imaging Poster Session, 10 papers |
|
16:00-17:30, Subsession MoPbPo-09, Oakdale Foyer Coral Foyer | |
Histopathology II Poster Session, 7 papers |
|
16:00-17:30, Subsession MoPbPo-10, Oakdale Foyer Coral Foyer | |
Abstract Posters: CT Imaging Poster Session, 6 papers |
|
16:00-17:30, Subsession MoPbPo-11, Oakdale Foyer Coral Foyer | |
Abstract Posters: Software and Databases Poster Session, 7 papers |
|
MoPbPo-01 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Brain Connectivity II |
|
|
Chair: Babadi, Behtash | University of Maryland |
Co-Chair: Plourde, Eric | Universite De Sherbrooke |
|
16:00-17:30, Paper MoPbPo-01.1 | Add to My Program |
A Univariate Persistent Brain Network Feature Based on the Aggregated Cost of Cycles from the Nested Filtration Networks |
|
Farazi, Mohammad | Arizona State University |
Zhan, Liang | University of Pittsburgh |
Lepore, Natasha | USC / Children's Hospital Los Angeles |
Thompson, Paul | University of Southern California |
Wang, Yalin | Arizona State University |
Keywords: Population analysis, Connectivity analysis
Abstract: A threshold-free feature in brain network analysis can help circumvent the curse of arbitrary network thresholding for binary network conversions. Here, persistent homology inspires a new aggregated cost based on the number of cycles, i.e., tracking the first Betti number across a nested filtration of the network graph. Our theoretical analysis shows that the proposed aggregated cost of cycles (ACC) is monotonically increasing, and we therefore define a univariate persistent feature based on the shape of the ACC curve. The proposed statistic has advantages over the First Betti Number Plot (BNP1), which only tracks the total number of cycles at each filtration. We show that our method is sensitive to both the topology of modular networks and the difference in the number of cycles in a network. Our method outperforms its counterparts on a synthetic dataset, while on a real-world one it achieves results comparable with the BNP1. Our proposed framework enriches univariate measures for discovering brain network dissimilarities for better categorization of distinct stages in Alzheimer's Disease (AD).
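For intuition, the first Betti number that the ACC builds on can be computed for any undirected graph as B1 = |E| − |V| + C, where C is the number of connected components; a small illustrative implementation (not the paper's code) using union-find:

```python
def first_betti_number(num_vertices, edges):
    """B1 = |E| - |V| + C for an undirected graph with C connected
    components, i.e. the number of independent cycles."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # edge merges two components: no new cycle
            parent[ru] = rv
            components -= 1
    return len(edges) - num_vertices + components
```

Evaluating B1 at every threshold of a nested filtration gives the cycle counts that BNP1 plots; the ACC additionally aggregates a cost over those cycles.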
|
|
16:00-17:30, Paper MoPbPo-01.2 | Add to My Program |
Multi Tissue Modelling of Diffusion MRI Signal Reveals Volume Fraction Bias |
|
Frigo, Matteo | Athena Team, Inria Sophia-Antipolis Méditerranée |
Fick, Rutger H.J. | INRIA |
Zucchelli, Mauro | University of Verona |
Deslauriers-Gauthier, Samuel | Université Côte d'Azur, Inria, France |
Deriche, Rachid | INRIA Sophia Antipolis-Méditerranée |
Keywords: Diffusion weighted imaging, Modeling - Knowledge, Brain
Abstract: This paper highlights a systematic bias in white matter tissue microstructure modelling via diffusion MRI that is due to the common, yet inaccurate, assumption that all brain tissues have a similar T2 response. We show that the concept of "signal fraction" is more appropriate to describe what has conventionally been referred to as "volume fraction". This dichotomy is described from the theoretical point of view by analysing the mathematical formulation of the diffusion MRI signal. We propose a generalized multi tissue modelling framework that allows computing the actual volume fractions. The Dmipy implementation of this framework is then used to verify the presence of this bias in four classical tissue microstructure models computed on two subjects from the Human Connectome Project database. The proposed paradigm shift exposes the research field of brain tissue microstructure estimation to the necessity of a systematic review of past results that takes into account the difference between signal fractions and volume fractions.
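To make the signal-fraction versus volume-fraction distinction concrete, here is a toy conversion under the simplifying assumption that each compartment's signal is attenuated by exp(−TE/T2); the paper's generalized multi tissue framework is more elaborate, and the function below is purely illustrative:

```python
import math

def volume_fractions(signal_fractions, t2_values, te):
    """Convert T2-weighted signal fractions into volume fractions under the
    toy model S_i ∝ v_i * exp(-TE / T2_i).

    t2_values and te must share the same unit (e.g. ms). A compartment with
    shorter T2 loses more signal, so its signal fraction understates its
    volume fraction."""
    unnormalized = [s / math.exp(-te / t2)
                    for s, t2 in zip(signal_fractions, t2_values)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]
```

When all compartments share the same T2 the two notions coincide, which is exactly the "similar T2 response" assumption the paper identifies as the source of the bias.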
|
|
16:00-17:30, Paper MoPbPo-01.3 | Add to My Program |
Brain Network Connectivity from Matching Cortical Feature Densities |
|
Lee, David | UCLA |
Donald, Kirsten Ann | Division of Developmental Paediatrics, Department of Paediatrics |
Dalal, Taykhoom | University of California, Los Angeles |
Wedderburn, Catherine | London School of Hygiene & Tropical Medicine |
Roos, Annerine | Stellenbosch University |
Ipser, Jonathan Claude | University of Cape Town |
Subramoney, Sivenesi | Department of Paediatrics and Child Health, University of Cape T |
Zar, Heather J. | Department of Paediatrics and Child Health, University of Cape T |
Stein, Dan Joseph | University of Cape Town |
Narr, Katherine | University of California, Los Angeles |
Hellemann, Gerhard | Department of Psychiatry and Biobehavioral Sciences, University |
Woods, Roger | University of California, Los Angeles |
Joshi, Shantanu | Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, |
Keywords: Connectivity analysis, Brain, Magnetic resonance imaging (MRI)
Abstract: We present a new method for constructing structural inference brain networks from functional measures of cortical features. Instead of averaging vertex-wise cortical features, we propose the use of full functions of spatial densities of measures such as thickness, and use two-dimensional pairwise correlations between regions to construct population networks. We show increased within-group correlations for both healthy controls and toddlers with prenatal alcohol exposure compared to the existing mean-based correlation approach. Further, we also show significant differences in brain connectivity between the healthy controls and the exposed group.
|
|
16:00-17:30, Paper MoPbPo-01.4 | Add to My Program |
Reinforcement Tractography: A Hybrid Approach for Robust Segmentation of Complex Fiber Bundles |
|
Cabeen, Ryan | University of Southern California |
Toga, Arthur | University of Southern California |
Keywords: Tractography, Modeling - Anatomical, physiological and pathological, Brain
Abstract: We develop and evaluate a novel hybrid tractography algorithm for improved segmentation of complex fiber bundles from diffusion magnetic resonance imaging datasets. We propose an approach inspired by reinforcement learning that combines the strengths of both probabilistic and deterministic tractography to better resolve pathways dominated by crossing fibers. Given a fiber bundle query, our approach first explores an array of possible pathways probabilistically, and then exploits this information with streamline tractography using globally optimal fiber compartment assignment in a conditional random field. We quantitatively evaluated our approach in comparison with deterministic and probabilistic approaches using a realistic phantom with Tractometer and 88 test-retest scans from the Human Connectome Project. We found that the proposed hybrid method offers improved accuracy with phantom data and more biologically plausible topographic organization and higher reliability with in vivo data. This demonstrates the benefits of combining tractography approaches and indicates opportunities for integrating reinforcement learning strategies into tractography algorithms.
|
|
16:00-17:30, Paper MoPbPo-01.5 | Add to My Program |
Modeling the Topology of Cerebral Microvessels Via Geometric Graph Contraction |
|
Damseh, Rafat | Polytechnique Montreal |
Cheriet, Farida | Ecole Polytechnique of Montreal |
Lesage, Frederic | Polytechnique Montreal |
Keywords: Modeling - Anatomical, physiological and pathological, Microscopy - Multi-photon, Brain
Abstract: Studying the topology of cerebral microvessels has been shown to be essential for understanding the mechanisms underlying neurovascular coupling and brain microphysiology. One can derive topological models of these microvessels after labeling them based on their raw acquisitions from two-photon microscopy (TPM). However, adequate 3D mapping of cerebral microvasculature from TPM remains difficult due to the uneven intensities and shadowing effects. In this paper, we present a novel 2D/3D skeletonization solution to generate topological graph models of microvessels regardless of the quality of their binary maps. Our scheme first constructs a random initial graph encapsulated within the boundary of a binary mask. The vertices of the initial model are then iteratively contracted toward the centerline of microvessels by local connectivity-encoded gravitational forces. At each iteration, the model is decimated through vertices clustering and connectivity surgery processes. Lastly, a refinement algorithm is applied to convert the final decimated model into a curve skeleton. Synthetic angiograms and real TPM datasets are used for evaluation. By comparing against other efficient graphing schemes, we demonstrate that our solution performs better when applied to extract topological information from cerebral microvessel labels.
|
|
16:00-17:30, Paper MoPbPo-01.6 | Add to My Program |
Characterizing Frequency-Selective Network Vulnerability for Alzheimer’s Disease by Identifying Critical Harmonic Patterns |
|
Leinwand, Benjamin | University of North Carolina at Chapel Hill |
Wu, Guorong | University of North Carolina at Chapel Hill |
Pipiras, Vladas | University of North Carolina at Chapel Hill |
Keywords: Connectivity analysis, Probabilistic and statistical models & methods, Brain
Abstract: Alzheimer's disease (AD) is a multi-factor neurodegenerative disease that selectively affects certain regions of the brain while other areas remain unaffected. The underlying mechanisms of this selectivity, however, are still largely elusive. To address this challenge, we propose a novel longitudinal network analysis method employing sparse logistic regression to identify frequency-specific oscillation patterns which contribute to the selective network vulnerability for patients at risk of advancing to the more severe stage of dementia. We fit and apply our statistical method to more than 100 longitudinal brain networks, and validate it on synthetic data. A set of critical connectome pathways are identified that exhibit strong association to the progression of AD.
|
|
16:00-17:30, Paper MoPbPo-01.7 | Add to My Program |
A Computational Diffusion MRI Framework for Biomarker Discovery in a Rodent Model of Post-Traumatic Epileptogenesis |
|
Cabeen, Ryan | Keck School of Medicine of USC |
Immonen, Riikka | University of Eastern Finland |
Harris, Neil G | UCLA |
Grohn, Olli | University of Eastern Finland |
Smith, Gregory | UCLA |
Manninen, Eppu | University of Eastern Finland |
Garner, Rachael | University of Southern California |
Duncan, Dominique | Yale University |
Pitkanen, Asla | University of Eastern Finland |
Toga, Arthur | University of Southern California |
Keywords: Modeling - Anatomical, physiological and pathological, Animal models and imaging, Diffusion weighted imaging
Abstract: Epilepsy is a debilitating neurological disorder that directly impacts millions of people and exerts a tremendous economic burden on society at large. While traumatic brain injury (TBI) is a common cause, there remain many open questions regarding its pathological mechanism. The goal of the Epilepsy Bioinformatics Study for Antiepileptogenic Therapy (EpiBioS4Rx) is to identify epileptogenic biomarkers through a comprehensive project spanning multiple species, modalities, and research institutions; in particular, diffusion magnetic resonance imaging (MRI) is a critical component, as it probes tissue microstructure and structural connectivity. The project includes in vivo imaging of a rodent fluid-percussion model of TBI, and we developed a computational diffusion MRI framework for EpiBioS4Rx which employs advanced techniques for preprocessing, modeling, spatial normalization, region analysis, and tractography to derive imaging metrics at group and individual levels. We describe the system's design, present characteristic results from a longitudinal cohort, and discuss its role in biomarker discovery and further studies.
|
|
MoPbPo-02 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
FMRI Analysis I |
|
|
Chair: Caruyer, Emmanuel | Univ Rennes, Inria, CNRS, IRISA |
|
16:00-17:30, Paper MoPbPo-02.1 | Add to My Program |
Impact of 1D and 2D Visualisation on EEG-fMRI Neurofeedback Training During a Motor Imagery Task |
|
Cury, Claire | Inria Rennes |
Lioi, Giulia | INRIA, Univ Rennes1 |
Perronnet, Lorraine | Sword |
Lécuyer, Anatole | IRISA-INRIA Rennes |
Maurel, Pierre | Université De Rennes 1 |
Barillot, Christian | Irisa (umr Cnrs 6074), Inria, Inserm |
Keywords: EEG & MEG, Functional imaging (e.g. fMRI), Multi-modality fusion
Abstract: Bi-modal EEG-fMRI neurofeedback (NF) is a new technique of great interest. First, it can improve the quality of NF training by combining different real-time information (haemodynamic and electrophysiological) about the participant's brain activity; second, it has the potential to improve understanding of the link and synergy between the two modalities (EEG-fMRI). However, there are different ways to present NF scores to the participant during bi-modal NF sessions. To improve data fusion methodologies, we investigate the impact of a 1D versus a 2D representation when visual feedback is given during a motor imagery task. Results show a better synergy between EEG and fMRI when a 2D display is used. Subjects have better fMRI scores when 1D is used for bi-modal EEG-fMRI NF sessions; on the other hand, they regulate EEG more specifically when the 2D metaphor is used.
|
|
16:00-17:30, Paper MoPbPo-02.2 | Add to My Program |
Architectural Hyperparameters, Atlas Granularity and Functional Connectivity with Diagnostic Value in Autism Spectrum Disorder |
|
Mellema, Cooper | University of Texas Southwestern Medical Center |
Treacher, Alex | University of Texas Southwestern Medical Center |
Nguyen, Kevin | UT Southwestern |
Montillo, Albert | UT Southwestern |
Keywords: Functional imaging (e.g. fMRI), Brain, Machine learning
Abstract: Currently, the diagnosis of Autism Spectrum Disorder (ASD) is dependent upon a subjective, time-consuming evaluation of behavioral tests by an expert clinician. Non-invasive functional MRI (fMRI) characterizes brain connectivity and may be used to inform diagnoses and democratize medicine. However, successful construction of predictive models, such as deep learning models, from fMRI requires addressing key choices about the model's architecture, including the number of layers and number of neurons per layer. Meanwhile, deriving functional connectivity (FC) features from fMRI requires choosing an atlas with an appropriate level of granularity. Once an accurate diagnostic model has been built, it is vital to determine which features are predictive of ASD and whether similar features are learned across atlas granularity levels. Identifying new important features extends our understanding of the biological underpinnings of ASD, while identifying features that corroborate past findings and extend across atlas levels instills model confidence. To identify aptly suited architectural configurations, probability distributions of the configurations of high- versus low-performing models are compared. To determine the effect of atlas granularity, connectivity features are derived from atlases with 3 levels of granularity and important features are ranked with permutation feature importance. Results show the highest-performing models use 2-4 hidden layers and 16-64 neurons per layer, depending on granularity. Connectivity features identified as important across all 3 atlas granularity levels include FC to the supplementary motor gyrus and language association cortex, regions whose abnormal development is associated with deficits in social and sensory processing common in ASD. Importantly, the cerebellum, often not included in functional analyses, is also identified as a region whose abnormal connectivity is highly predictive of ASD. Results of this study identify important regions to include in future studies of ASD, assist in the selection of network architectures, and identify appropriate levels of granularity to facilitate the development of accurate diagnostic models of ASD.
|
|
16:00-17:30, Paper MoPbPo-02.3 | Add to My Program |
Anatomically Informed Bayesian Spatial Priors for fMRI Analysis |
|
Abramian, David | Linköping University |
Sidén, Per | Linköping University |
Knutsson, Hans | Linköping University |
Villani, Mattias | Linköping University |
Eklund, Anders | Linköping University |
Keywords: Probabilistic and statistical models & methods, fMRI analysis
Abstract: Existing Bayesian spatial priors for functional magnetic resonance imaging (fMRI) data correspond to stationary isotropic smoothing filters that may oversmooth at anatomical boundaries. We propose two anatomically informed Bayesian spatial models for fMRI data with local smoothing in each voxel based on a tensor field estimated from a T1-weighted anatomical image. We show that our anatomically informed Bayesian spatial models result in posterior probability maps that follow the anatomical structure.
|
|
16:00-17:30, Paper MoPbPo-02.4 | Add to My Program |
Dynamic Missing-Data Completion Reduces Leakage of Motion Artifact Caused by Temporal Filtering That Remains after Scrubbing |
|
Guler, Seyhmus | Northeastern University |
Erem, Burak | Boston Children's Hospital and Harvard Medical School |
Afacan, Onur | Harvard Medical School |
Cohen, Alexander, L. | Washington Univ. in St. Louis |
Warfield, Simon K. | Harvard Medical School |
Keywords: Functional imaging (e.g. fMRI), Brain, Motion compensation and analysis
Abstract: Functional magnetic resonance imaging (fMRI) is commonly used to better understand brain function. Data become contaminated with motion artifact when a subject moves during an fMRI acquisition. Numerous methods have been suggested to target motion artifacts in fMRI. One of these methods, "scrubbing", removes motion-corrupted volumes but must be performed after temporal filtering since it creates temporal discontinuities. Thus, it does not prevent the spread of corrupted time samples from high-motion volumes to their neighbors during temporal filtering. To mitigate this spread, which we refer to as "leakage", we propose a novel method, Dynamic Missing-data Completion (DMC), that replaces motion-corrupted volumes with synthetic data before temporal filtering. We analyzed the effect of DMC on an exemplary timeseries from a resting-state fMRI (rsfMRI) scan and compared functional connectivity results of six rsfMRI scans from a single subject with different levels of subject motion. Our results suggest that DMC provides added benefit in further reducing the motion contamination that remains after scrubbing. DMC reduced the standard deviation of signal near scrubbed volumes by about 10% compared to scrubbing only, bringing this average closer to that of uncorrupted, motion-free volumes.
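A minimal sketch of the complete-before-filtering idea follows, using plain linear interpolation as a stand-in for DMC's synthetic-data completion (the paper's actual completion method may differ; `complete_then_filter` is an invented name):

```python
import numpy as np

def complete_then_filter(timeseries, bad_mask, filt):
    """Replace motion-corrupted samples before temporal filtering so that
    their values cannot leak into neighboring time points.

    timeseries: 1D array of signal values over time
    bad_mask:   boolean sequence, True at motion-corrupted volumes
    filt:       temporal filter applied after completion
    """
    ts = np.asarray(timeseries, dtype=float).copy()
    good = ~np.asarray(bad_mask, dtype=bool)
    idx = np.arange(ts.size)
    # Fill corrupted samples by linear interpolation over the good ones.
    ts[~good] = np.interp(idx[~good], idx[good], ts[good])
    return filt(ts)
```

After filtering, the completed volumes would still be scrubbed from the analysis; the point of completion is only to keep their corrupted values out of the filter's support for neighboring volumes.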
|
|
16:00-17:30, Paper MoPbPo-02.5 | Add to My Program |
A Temporal Model for Task-Based Functional MR Images |
|
Lin, Claire Yilin | University of Michigan |
Noll, Douglas C. | University of Michigan |
Fessler, Jeff | Univ. Michigan |
Keywords: Functional imaging (e.g. fMRI), Brain, Modeling - Image formation
Abstract: To better identify task-activated brain regions in task-based functional magnetic resonance imaging (tb-fMRI), various space-time models have been used to reconstruct image sequences from k-space data. These models decompose an fMRI timecourse into a static background and a dynamic foreground, aiming to separate task-correlated components from non-task signals. This paper proposes a model based on assumptions about the activation waveform shape and the smoothness of the timecourse, and compares it to two contemporary tb-fMRI decomposition models. We experiment in the image domain using a simulated task with a known region of interest, and a real visual task. The proposed model yields fewer false activations in task activation maps.
|
|
16:00-17:30, Paper MoPbPo-02.6 | Add to My Program |
Learning Latent Structure Over Deep Fusion Model of Mild Cognitive Impairment |
|
Wang, Li | University of Texas at Arlington, Department of Mathematics |
Zhang, Lu | The University of Texas at Arlington |
Zhu, Dajiang | University of Texas at Arlington |
Keywords: Brain, Functional imaging (e.g. fMRI)
Abstract: Many computational models have been developed to understand Alzheimer’s disease (AD) and its precursor, mild cognitive impairment (MCI), using non-invasive neuroimaging techniques, i.e., magnetic resonance imaging (MRI)-based imaging modalities. Most existing methods focus on identification of imaging biomarkers, classification/prediction of different clinical stages, regression of cognitive scores, or their combination as multi-task learning. Given the wide individual variability, however, it is still challenging to consider different learning tasks simultaneously even when they share a similar goal: exploring the intrinsic alteration patterns in AD/MCI patients. Moreover, AD is a progressive neurodegenerative disorder with a long preclinical period. Beyond simple classification, brain changes should be considered within the entire AD/MCI progression process. Here, we introduce a novel deep fusion model for MCI using functional MRI data. We integrate an autoencoder, multi-class classification, and structure learning into a single deep model. During the modeling, different clinical groups including normal controls, early MCI, and late MCI are considered simultaneously. With the learned discriminative representations, we not only achieve satisfactory classification performance but also construct a tree structure of MCI progression.
|
|
16:00-17:30, Paper MoPbPo-02.7 | Add to My Program |
Improved Motion Correction for Functional MRI Using an Omnibus Regression Model |
|
Raval, Vyom | The University of Texas at Dallas |
Nguyen, Kevin | UT Southwestern |
Mellema, Cooper | University of Texas Southwestern Medical Center |
Montillo, Albert | UT Southwestern |
Keywords: Image enhancement/restoration(noise and artifact reduction), fMRI analysis, Functional imaging (e.g. fMRI)
Abstract: Head motion during functional Magnetic Resonance Imaging acquisition can significantly contaminate the neural signal and introduce spurious, distance-dependent changes in signal correlations. This can heavily confound studies of development, aging, and disease. Previous approaches to suppress head motion artifacts have involved sequential regression of nuisance covariates, but this has been shown to reintroduce artifacts. We propose a new motion correction pipeline using an omnibus regression model that avoids this problem by simultaneously capturing multiple artifact sources using the best performing algorithms for each artifact. We quantitatively evaluate its motion artifact suppression performance against sequential regression pipelines using a large heterogeneous dataset (n=151) which includes high-motion subjects and multiple disease phenotypes. The proposed concatenated regression pipeline significantly reduces the association between head motion and functional connectivity while significantly outperforming the traditional sequential regression pipelines in eliminating distance-dependent head motion artifacts.
|
|
16:00-17:30, Paper MoPbPo-02.8 | Add to My Program |
Longitudinal Analysis of Mild Cognitive Impairment Via Sparse Smooth Network and Attention-Based Stacked Bi-Directional Long Short Term Memory |
|
Liu, Dongdong | Shenzhen University |
Xu, Frank Yanwu | Baidu Online Network Technology (Beijing) Co. Ltd |
Elazab, Ahmed | Shenzhen University |
Yang, Peng | Shenzheng University |
Wang, Wei | Shenzhen University |
Wang, Tianfu | Shenzhen University |
Lei, Baiying | Shenzhen University |
Keywords: Functional imaging (e.g. fMRI), Brain, fMRI analysis
Abstract: Alzheimer's disease (AD) is a common irreversible neurodegenerative disease among the elderly. To identify the early stage of AD (i.e., mild cognitive impairment, MCI), many recent studies in the literature use only a single time point and ignore the informative multi-time-point data. Therefore, we propose a novel method that combines a multi-time sparse smooth network with a long short-term memory (LSTM) network to identify early and late MCI from multiple time points of resting-state functional magnetic resonance imaging (rs-fMRI). Specifically, we first construct the sparse smooth brain network from rs-fMRI data at different time points, then an attention-based stacked bidirectional LSTM is used to extract features and analyze them longitudinally. Finally, we classify them using a Softmax classifier. The proposed method is evaluated on the public Alzheimer's Disease Neuroimaging Initiative Phase II (ADNI-2) database and demonstrates impressive performance compared with state-of-the-art methods.
|
|
MoPbPo-03 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
MRI Reconstruction Methods I |
|
|
Chair: Lingala, Sajan Goud | The University of Iowa |
Co-Chair: Zhao, Bo | MGH/HST Athinoula Martinos Center for Biomedical Imaging, Harvard Medical School |
|
16:00-17:30, Paper MoPbPo-03.1 | Add to My Program |
Dynamic MRI Using Deep Manifold Self-Learning |
|
Ahmed, Abdul Haseeb | University of Iowa |
Aggarwal, Hemant Kumar | University of Iowa |
Nagpal, Prashant | University of Iowa |
Jacob, Mathews | University of Iowa |
Keywords: Magnetic resonance imaging (MRI), Image reconstruction - analytical & iterative methods, Machine learning
Abstract: We propose a deep self-learning algorithm to learn the manifold structure of free-breathing and ungated cardiac data and to recover cardiac CINE MRI from highly undersampled measurements. Our method learns the manifold structure in the dynamic data from navigators using an autoencoder network. The trained autoencoder is then used as a prior in the image reconstruction framework. We have tested the proposed method on free-breathing and ungated cardiac CINE data acquired using a navigated golden-angle gradient-echo radial sequence. Results show the ability of our method to better capture the manifold structure, providing reduced spatial and temporal blurring compared to the SToRM reconstruction.
|
|
16:00-17:30, Paper MoPbPo-03.2 | Add to My Program |
Multi-Scale Unrolled Deep Learning Framework for Accelerated Magnetic Resonance Imaging |
|
Nakarmi, Ukash | Stanford University |
Cheng, Joseph | Stanford University |
Rios, Edgar | Stanford University |
Mardani, Morteza | University of Minnesota |
John, Pauly | Stanford University |
Ying, Leslie | The State University of New York at Buffalo |
Vasanawala, Shreyas | Stanford University |
Keywords: Magnetic resonance imaging (MRI), Machine learning, Image reconstruction - analytical & iterative methods
Abstract: Accelerating data acquisition in magnetic resonance imaging (MRI) has been of perennial interest due to its prohibitively slow data acquisition process. Recent trends in accelerating MRI employ data-centric deep learning frameworks because of their fast inference time and ‘one-parameter-fits-all’ principle, unlike traditional model-based acceleration techniques. Unrolled deep learning frameworks, which combine deep priors with model knowledge, are more robust than naive deep learning frameworks. In this paper, we propose a novel multi-scale unrolled deep learning framework that learns deep image priors through a multi-scale CNN and is combined with an unrolled framework to enforce data consistency and model knowledge. Essentially, this framework combines the best of both learning paradigms: model-based and data-centric. The proposed method is verified through several experiments on numerous data sets.
|
|
16:00-17:30, Paper MoPbPo-03.3 | Add to My Program |
R-fMRI Reconstruction from K-T Undersampled Simultaneous-Multislice (SMS) MRI with Controlled Aliasing: Towards Higher Spatial Resolution |
|
Kulkarni, Prachi H. | Indian Institute of Technology Bombay |
Gupta, Kratika | Indian Institute of Techonology Bombay |
Merchant, Shabbir | IIT Bombay |
Awate, Suyash P | Indian Institute of Technology (IIT), Bombay |
Keywords: Image reconstruction - analytical & iterative methods, Probabilistic and statistical models & methods, Functional imaging (e.g. fMRI)
Abstract: Accelerated resting-state functional magnetic resonance imaging (R-fMRI) can provide higher spatial resolution and improved brain connectivity maps. Current methods for fast R-fMRI rely on either fully-sampled parallel imaging or undersampled reconstruction using signal priors, but not both. We propose a novel Bayesian reconstruction framework that combines simultaneous multislice (SMS) imaging, controlled aliasing, and undersampling in k-space and time to reconstruct high-quality signals and connectivity maps. We use a generative dictionary model on R-fMRI time-series, which is robust to signal fluctuations and artifacts, adapts to intersubject variations through optimized similarity transforms on its atoms, and uses spatially regularized sparsity. Results on simulated and clinical R-fMRI show that our method gives more accurate reconstructions and connectivity maps than the state of the art, and can enable higher spatial resolution.
|
|
16:00-17:30, Paper MoPbPo-03.4 | Add to My Program |
Convolutional Framework for Accelerated Magnetic Resonance Imaging |
|
Zhao, Shen | The Ohio State University |
Potter, Lee | The Ohio State University, Dept Electrical & Computer Engineerin |
Lee, Kiryung | The Ohio State University |
Ahmad, Rizwan | Ohio State University |
Keywords: Magnetic resonance imaging (MRI), Image reconstruction - analytical & iterative methods, Inverse methods
Abstract: Magnetic Resonance Imaging (MRI) is a noninvasive imaging technique that provides exquisite soft-tissue contrast without using ionizing radiation. The clinical application of MRI may be limited by long data acquisition times; therefore, MR image reconstruction from highly undersampled k-space data has been an active area of research. Many works exploit rank deficiency in a Hankel data matrix to recover unobserved k-space samples; the resulting problem is non-convex, so the choice of numerical algorithm can significantly affect performance, computation, and memory. We present a simple, scalable approach called Convolutional Framework (CF). We demonstrate the feasibility and versatility of CF using measured data from 2D, 3D, and dynamic applications.
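The rank deficiency that such methods exploit can be seen directly: a k-space signal that is a sum of r complex exponentials yields a Hankel matrix of rank r, which is what makes low-rank completion of missing samples possible. A minimal illustration (the signal frequencies and matrix size below are arbitrary choices for the demo, not the paper's setup):

```python
import numpy as np

def hankel_matrix(x, L):
    """Build the L x (N - L + 1) Hankel matrix of a 1-D signal x."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

# A toy "k-space" line that is a sum of r = 2 complex exponentials.
n = np.arange(32)
x = 1.0 * np.exp(2j * np.pi * 0.11 * n) + 0.5 * np.exp(2j * np.pi * 0.31 * n)

H = hankel_matrix(x, 8)
print(np.linalg.matrix_rank(H, tol=1e-8))  # 2: rank-deficient despite being 8 x 25
```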
|
|
16:00-17:30, Paper MoPbPo-03.5 | Add to My Program |
DC-WCNN: A Deep Cascade of Wavelet Based Convolutional Neural Networks for MR Image Reconstruction |
|
Ramanarayanan, Sriprabha | Healthcare Technology Innovation Center |
Murugesan, Balamurali | Indian Institute of Technology Madras |
Sirukarumbur Shanmugaram, Keerthi Ram | IIT Madras |
Sivaprakasam, Mohanasankar | Indian Institute of Technology Madras |
Keywords: Machine learning, Image reconstruction - analytical & iterative methods, Magnetic resonance imaging (MRI)
Abstract: Several variants of Convolutional Neural Networks (CNNs) have been developed for Magnetic Resonance (MR) image reconstruction. Among them, U-Net has been shown to be the baseline architecture for MR image reconstruction. However, its pooling layers perform sub-sampling, causing information loss which in turn leads to blur and missing fine details in the reconstructed image. We propose a modification of the U-Net architecture to recover fine structures. The proposed network, called WCNN, is a wavelet packet transform based encoder-decoder CNN with residual learning. WCNN uses the discrete wavelet transform in place of pooling layers and the inverse wavelet transform in place of unpooling layers, together with residual connections. We also propose a deep cascaded framework (DC-WCNN), consisting of cascades of WCNN and k-space data fidelity units, to achieve high-quality MR reconstruction. Experimental results show that WCNN and DC-WCNN give promising results in terms of evaluation metrics and better recovery of fine details compared to other methods.
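The key property behind replacing pooling with a wavelet transform is that the DWT halves the resolution without discarding information: keeping the detail coefficients makes the downsampling exactly invertible, unlike max-pooling. A 1-D Haar sketch of this invertibility argument (the paper itself uses wavelet packet transforms inside a CNN; this only illustrates the principle):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal 1-D Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass, half length)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass, half length)
    return a, d

def haar_idwt(a, d):
    """Inverse Haar transform: reconstructs x exactly from (a, d)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 7.0, 1.0])
a, d = haar_dwt(x)
print(np.allclose(haar_idwt(a, d), x))   # True: downsampling with no information loss
print(np.maximum(x[0::2], x[1::2]))      # [4. 7.]: max-pooling drops the 2 and the 1
```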
|
|
16:00-17:30, Paper MoPbPo-03.6 | Add to My Program |
Multi-Echo Recovery with Field Inhomogeneity Compensation Using Structured Low-Rank Matrix Completion |
|
Siemonsma, Stephen | University of Iowa |
Kruger, Stanley | University of Iowa |
Balachandrasekaran, Arvind | University of Iowa |
Mani, Merry | University of Iowa |
Jacob, Mathews | University of Iowa |
Keywords: Magnetic resonance imaging (MRI), Compressive sensing & sampling, Computational Imaging
Abstract: Echo-planar imaging (EPI), which is the main workhorse of functional MRI, suffers from field inhomogeneity-induced geometric distortions. The amount of distortion is proportional to the readout duration, which restricts the maximum achievable spatial resolution. The spatially varying nature of the T2* decay makes it challenging for EPI schemes with a single echo time to obtain good sensitivity to functional activations in different brain regions. Despite the use of parallel MRI and multislice acceleration, the number of different echo times that can be acquired in a reasonable TR is limited. The main focus of this work is to introduce a rosette-based acquisition scheme and a structured low-rank reconstruction algorithm to overcome the above challenges. The proposed scheme exploits the exponential structure of the time series to recover distortion-free images from several echoes simultaneously.
|
|
16:00-17:30, Paper MoPbPo-03.7 | Add to My Program |
Fast Automatic Parameter Selection for MRI Reconstruction |
|
Toma, Tanjin Taher | University of Virginia |
Weller, Daniel | University of Virginia |
Keywords: Magnetic resonance imaging (MRI), Machine learning, Image reconstruction - analytical & iterative methods
Abstract: This paper proposes an automatic parameter selection framework for optimizing the performance of parameter-dependent regularized reconstruction algorithms. The proposed approach exploits a convolutional neural network for direct estimation of the regularization parameters from the acquired imaging data. This method can provide very reliable parameter estimates in a computationally efficient way. The effectiveness of the proposed approach is verified on transform-learning-based magnetic resonance image reconstructions of two different publicly available datasets. This experiment qualitatively and quantitatively measures improvement in image reconstruction quality using the proposed parameter selection strategy versus both existing parameter selection solutions and a fully deep-learning reconstruction with limited training data. Based on the experimental results, the proposed method improves average reconstructed image peak signal-to-noise ratio by a dB or more versus all competing methods in both brain and knee datasets, over a range of subsampling factors and input noise levels.
|
|
16:00-17:30, Paper MoPbPo-03.8 | Add to My Program |
Unsupervised Learning for Compressed Sensing MRI Using CycleGAN |
|
Oh, Gyutaek | KAIST |
Sim, Byeongsu | KAIST |
Ye, Jong Chul | Korea Advanced Inst of Science & Tech |
Keywords: Magnetic resonance imaging (MRI), Compressive sensing & sampling, Machine learning
Abstract: Recently, deep learning based approaches for accelerated MRI have been extensively studied due to their high performance and reduced run-time complexity. The existing deep learning methods for accelerated MRI are mostly supervised, requiring matched subsampled k-space data and fully sampled k-space data. However, it is hard to acquire fully sampled k-space data because of the long scan time of MRI. Therefore, unsupervised methods that do not require matched label data have become a very important research topic. In this paper, we propose an unsupervised method using a novel cycle-consistent generative adversarial network (cycleGAN) with a single deep generator. We show that the proposed cycleGAN architecture can be derived from a dual formulation of optimal transport with a penalized least squares cost. Experimental results show that our method can remove aliasing patterns in downsampled MR images without matched reference data.
|
|
MoPbPo-04 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Computer-Aided Detection and Diagnosis I |
|
|
Chair: Jiang, Xiaoyi | University of Münster |
|
16:00-17:30, Paper MoPbPo-04.1 | Add to My Program |
Diagnosing Colorectal Polyps in the Wild with Capsule Networks |
|
LaLonde, Rodney | University of Central Florida |
Kandel, Pujan | Mayo Clinic Jacksonville |
Spampinato, Concetto | Universita' Di Catania |
Wallace, Michael B. | Mayo Clinic Jacksonville |
Bagci, Ulas | University of Central Florida |
Keywords: Machine learning, Gastrointestinal tract, Endoscopy
Abstract: Colorectal cancer, largely arising from precursor lesions called polyps, remains one of the leading causes of cancer-related death worldwide. Current clinical standards require the resection and histopathological analysis of polyps because the accuracy and sensitivity of optical biopsy methods fall substantially below recommended levels. In this study, we design a novel capsule network architecture (D-Caps) to improve the viability of optical biopsy of colorectal polyps. Our proposed method introduces several technical novelties, including a novel capsule architecture with a capsule-average pooling (CAP) method to improve efficiency in large-scale image classification. We demonstrate improved results over the previous state-of-the-art convolutional neural network (CNN) approach by as much as 43%. This work provides an important benchmark on the new Mayo Polyp dataset, a significantly more challenging and larger dataset than previous polyp studies, with results stratified across all available categories, imaging devices and modalities, and focus modes, to promote future research into AI-driven colorectal cancer screening systems.
|
|
16:00-17:30, Paper MoPbPo-04.2 | Add to My Program |
Tensor-Based Grading: A Novel Patch-Based Grading Approach for the Analysis of Deformation Fields |
|
Hett, Kilian | Vanderbilt University |
Johnson, Hans | The University of Iowa |
Coupe, Pierrick | CNRS UMR 5800, Laboratoire Bordelais De Recherche En Informatiqu |
Paulsen, Jane | The University of Iowa |
Long, Jeffrey D | University of Iowa, Department of Biostatitsics, Iowa City IA, U |
Oguz, Ipek | Vanderbilt University |
Keywords: Computer-aided detection and diagnosis (CAD), Brain, Magnetic resonance imaging (MRI)
Abstract: The improvements in magnetic resonance imaging have led to the development of numerous techniques to better detect structural alterations caused by neurodegenerative diseases. Among these, the patch-based grading framework has been proposed to model local patterns of anatomical changes. This approach is attractive because of its low computational cost and its competitive performance. Other studies have proposed to analyze the deformations of brain structures using tensor-based morphometry, which is a highly interpretable approach. In this work, we propose to combine the advantages of these two approaches by extending the patch-based grading framework with a new tensor-based grading method that enables us to model patterns of local deformation using a log-Euclidean metric. We evaluate our new method in a study of the putamen for the classification of patients with pre-manifest Huntington's disease and healthy controls. Our experiments show a substantial increase in classification accuracy (87.5 ± 0.5 vs. 81.3 ± 0.6) compared to the existing patch-based grading methods, and a good complement to putamen volume, which is a primary imaging-based marker for the study of Huntington's disease.
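The log-Euclidean metric used here maps symmetric positive-definite (SPD) deformation tensors through the matrix logarithm, where ordinary Euclidean operations become valid: d(A, B) = ||log A − log B||_F. A sketch with made-up 2×2 tensors (illustrative values only, not data from the study):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between SPD matrices: ||logm(A) - logm(B)||_F."""
    return np.linalg.norm(logm(A) - logm(B), "fro")

# Two toy SPD "deformation tensors".
A = np.array([[2.0, 0.3], [0.3, 1.5]])
B = np.array([[1.2, 0.1], [0.1, 0.9]])

print(log_euclidean_distance(A, A))        # 0.0: distance to itself vanishes
print(log_euclidean_distance(A, B) > 0.0)  # True
```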
|
|
16:00-17:30, Paper MoPbPo-04.3 | Add to My Program |
Interpreting Medical Image Classifiers by Optimization Based Counterfactual Impact Analysis |
|
Major, David | VRVis Center for Virtual Reality and Visualization |
Lenis, Dimitrios | VRVis Zentrum Für Virtual Reality Und Visualisierung Forschungs |
Wimmer, Maria | VRVis Center for Virtual Reality and Visualization |
Sluiter, Gert | VRVis Zentrum Für Virtual Reality Und Visualisierung Forschungs |
Berg, Astrid | VRVis Center for Virtual Reality and Visualization |
Bühler, Katja | VRVis Center for Virtual Reality and Visualization |
Keywords: Machine learning, Visualization, Breast
Abstract: Clinical applicability of automated decision support systems depends on a robust, well-understood classification interpretation. Artificial neural networks, while achieving class-leading scores, fall short in this regard. Therefore, numerous approaches have been proposed that map a salient region of an image to a diagnostic classification. Utilizing heuristic methodology, like blurring and noise, they tend to produce diffuse, sometimes misleading results, hindering their general adoption. In this work we overcome these issues by presenting a model-agnostic saliency mapping framework tailored to medical imaging. We replace heuristic techniques with a strong neighborhood-conditioned inpainting approach, which avoids anatomically implausible artefacts. We formulate saliency attribution as a map-quality optimization task, enforcing constrained and focused attributions. Experiments on public mammography data show quantitatively and qualitatively more precise localization and clearer results than existing state-of-the-art methods.
|
|
16:00-17:30, Paper MoPbPo-04.4 | Add to My Program |
Visualisation of Medical Image Fusion and Translation for Accurate Diagnosis of High Grade Gliomas |
|
Kumar, Nishant | TU Dresden |
Hoffmann, Nico | Helmholtz-Zentrum Dresden-Rossendorf |
Matthias, Kirsch | Asklepios Kliniken Schildautal Seesen, Abteilung Für Neurochirur |
Gumhold, Stefan | TU Dresden |
Keywords: Visualization, Multi-modality fusion, Image quality assessment
Abstract: Medical image fusion combines two or more modalities into a single view, while medical image translation synthesizes new images and assists in data augmentation. Together, these methods help in faster diagnosis of high grade malignant gliomas. However, they might be untrustworthy, so neurosurgeons demand a robust visualisation tool to verify the reliability of the fusion and translation results before they make pre-operative surgical decisions. In this paper, we propose a novel approach to compute a confidence heat map between a source-target image pair by estimating the information transfer from the source to the target image using the joint probability distribution of the two images. We evaluate several fusion and translation methods using our visualisation procedure and showcase its robustness in enabling neurosurgeons to make finer clinical decisions.
|
|
16:00-17:30, Paper MoPbPo-04.5 | Add to My Program |
Bi-Modal Ultrasound Breast Cancer Diagnosis Via Multi-View Deep Neural Network SVM |
|
Gong, Bangming | Shanghai University |
Shen, Lu | Shanghai University |
Chang, Cai | Fudan University |
Zhou, Shichong | Fudan University |
Zhou, Weijun | First Affiliated Hospital of USTC |
Li, Shuo | Western University |
Shi, Jun | Shanghai University |
Keywords: Computer-aided detection and diagnosis (CAD), Ultrasound, Breast
Abstract: B-mode ultrasound and ultrasound elastography are two routine diagnostic modalities for breast cancer. Unfortunately, few efforts have been made to learn from bi-modal ultrasound jointly. By combining multi-view deep mapping-based feature representation with SVM-based classification, we propose a novel integrated deep learning model, the multi-view deep neural network support vector machine (MDNNSVM), to achieve breast cancer diagnosis on bi-modal ultrasound. In particular, multi-view representation learning extracts and fuses various ultrasound characteristics (including hardness information of soft tissue) to effectively differentiate benign breast lesions from malignant ones. Further, an SVM-based objective function is used to learn a classifier jointly with the DNN to significantly improve diagnostic accuracy. Experimental results on a real-world breast cancer dataset verify the effectiveness of MDNNSVM, which achieves the best classification accuracy (86.36%) and AUC (0.9079).
|
|
16:00-17:30, Paper MoPbPo-04.6 | Add to My Program |
Fully Automatic Computer-Aided Mass Detection and Segmentation Via Pseudo-Color Mammograms and Mask R-CNN |
|
Min, Hang | University of Queensland |
Wilson, Devin | University of Queensland |
Huang, Yinhuang | University of Queensland |
Liu, Siyu | University of Queensland |
Crozier, Stuart | The University of Queensland |
Bradley, Andrew Peter | Queensland University of Technology |
Chandra, Shekhar | University of Queensland |
Keywords: Breast, Computer-aided detection and diagnosis (CAD), X-ray imaging
Abstract: Mammographic mass detection and segmentation are usually performed as serial and separate tasks, with segmentation often only performed on manually confirmed true positive detections in previous studies. We propose a fully-integrated computer-aided detection (CAD) system for simultaneous mammographic mass detection and segmentation without user intervention. The proposed CAD consists of only a pseudo-color image generation stage and a Mask R-CNN-based mass detection-segmentation stage. Grayscale mammograms are transformed into pseudo-color images based on multi-scale morphological sifting, where mass-like patterns are enhanced to improve the performance of Mask R-CNN. Transfer learning with Mask R-CNN is then adopted to simultaneously detect and segment masses on the pseudo-color images. Evaluated on the public dataset INbreast, the method outperforms the state-of-the-art methods by achieving an average true positive rate of 0.90 at 0.9 false positives per image and an average Dice similarity index of 0.88 for mass segmentation.
|
|
16:00-17:30, Paper MoPbPo-04.7 | Add to My Program |
Reading Mammography with Multiple Prior Exams |
|
Song, Chao | Illinois Institute of Technology |
Sainz de Cea, Maria V. | Illinois Institute of Technology |
Richmond, David | IBM Watson Health |
Keywords: Breast, X-ray imaging, Machine learning
Abstract: Change is one of the strongest features for identifying abnormality in screening mammography exams. However, publicly available mammography datasets have so far lacked prior exams, and so the majority of algorithm development has focused on assessing isolated exams, without prior information. Recently, it was shown that a deep learning algorithm can improve its diagnostic accuracy by utilizing a single prior mammography image. In this work, we extend the previous result to address the issue of reading a variable number of prior exams. We compare two approaches: Random Forest, which requires that all inputs have the same size, and LSTM, which can handle a variable-size input. We demonstrate a significant performance improvement when using multiple priors, consistent with the standard workflow of breast imagers. We also found for both models that multiple priors improved performance over using a single prior. Interestingly, LSTM consistently outperformed the Random Forest model, and is more practical because it can naturally process any number of prior images that are available at the time of read. We expect that these results will generalize to other screening programs, such as colorectal cancer, where prior images are readily available.
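The practical advantage of the recurrent model is that a single recurrence folds any number of prior exams into a fixed-size hidden state, whereas a Random Forest needs a fixed-length input. A bare-bones stand-in (a plain tanh RNN with random weights rather than the authors' trained LSTM; feature sizes are arbitrary):

```python
import numpy as np

def rnn_readout(prior_feats, Wx, Wh, Wo):
    """Fold a variable-length list of prior-exam feature vectors into one score."""
    h = np.zeros(Wh.shape[0])
    for x in prior_feats:               # works for 1, 2, ... N priors
        h = np.tanh(Wx @ x + Wh @ h)    # fixed-size hidden state
    return float(Wo @ h)

rng = np.random.default_rng(0)
Wx, Wh, Wo = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=8)

two_priors = [rng.normal(size=4) for _ in range(2)]
five_priors = [rng.normal(size=4) for _ in range(5)]
print(rnn_readout(two_priors, Wx, Wh, Wo))   # same readout handles 2 priors...
print(rnn_readout(five_priors, Wx, Wh, Wo))  # ...or 5, with no padding needed
```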
|
|
16:00-17:30, Paper MoPbPo-04.8 | Add to My Program |
Analysis of the Influence of Diffeomorphic Normalization in the Prediction of Stable vs Progressive MCI Conversion with Convolutional Neural Networks |
|
Ramon-Julvez, Ubaldo | University of Zaragoza |
Monica, Hernandez | University of Zaragoza |
Mayordomo, Elvira | University of Zaragoza |
Keywords: Computer-aided detection and diagnosis (CAD), Image registration, Brain
Abstract: We study the effect of the choice of diffeomorphic normalization on the performance of Spasov's deep-learning system for the problem of progressive MCI vs. stable MCI discrimination. We considered different degrees of normalization (none, affine, and non-rigid) and two diffeomorphic registration methods (ANTS and BL PDE-LDDMM) with different image similarity metrics (SSD, NCC, and lNCC), yielding qualitatively different deformation models and quantitatively different degrees of registration accuracy. BL PDE-LDDMM with NCC achieved the best accuracy, with median values of 89%. Surprisingly, the accuracy with no normalization and with affine normalization was also among the highest, indicating that the deep-learning system is powerful enough to learn accurate models for pMCI vs. sMCI discrimination without normalization. However, the best sensitivity values were obtained by BL PDE-LDDMM with SSD and NCC, with median values of 97% and 94%, while the sensitivity of the remaining methods stayed under 88%.
|
|
MoPbPo-05 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
DL/CNN Methods and Models I |
|
|
Chair: Suzuki, Kenji | Illinois Institute of Technology |
Co-Chair: Chun, Se Young | Ulsan National Institute of Science and Technology (UNIST) |
|
16:00-17:30, Paper MoPbPo-05.1 | Add to My Program |
Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning |
|
Kashyap, Satyananda | IBM Research |
Karargyris, Alexandros | IBM |
Wu, Joy Tzung-yu | IBM Research - Almaden |
Gur, Yaniv | IBM Almaden Research Center |
Sharma, Arjun | IBM |
Wong, Ken C. L. | IBM Research - Almaden Research Center |
Moradi, Mehdi | IBM Research |
Syeda-Mahmood, Tanveer | IBM Almaden Research Center |
Keywords: X-ray imaging, Machine learning, Classification
Abstract: Deep learning has now become the de facto approach to the recognition of anomalies in medical imaging. Its 'black box' way of classifying medical images into anomaly labels poses problems for acceptance, particularly among clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps but cannot guarantee that the network is focusing on the relevant image region fully containing the anomaly. In this paper, we develop an approach to explainable AI in which the anomaly, when present, is assured to overlap the expected location. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of Bi-Directional Long Short-Term Memory Recurrent Neural Networks (Bi-LSTM) and DenseNet-121. Using this expected location to bias the subsequent attention-guided inference network based on ResNet101 results in the isolation of the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
|
|
16:00-17:30, Paper MoPbPo-05.2 | Add to My Program |
Deep Learning Based Segmentation of Body Parts in CT Localizers and Application to Scan Planning |
|
Deshpande, Hrishikesh | Philips Research, Hamburg, Germany |
Bergtholdt, Martin | Philips Research Europe, Hamburg |
Gotman, Shlomo | Philips Healthcare |
Saalbach, Axel | Philips GmbH, Innovative Technologies |
Sénégas, Julien | Philips Research |
Keywords: Computed tomography (CT), Image segmentation, Machine learning
Abstract: In this paper, we propose a deep learning approach for the segmentation of body parts in computed tomography (CT) localizer images. Such images pose difficulties for automatic image analysis on account of their variable field-of-view, diverse patient positioning, and low-dose acquisition, but are important for their most prominent applications in scan planning and dose modulation. Following the success of deep learning technology in image segmentation applications, we investigate the use of a fully convolutional neural network architecture to segment four anatomies: abdomen, chest, pelvis, and brain. The method is further extended to generate plan boxes for individual as well as combined anatomies, and is compared against existing techniques. The performance of the method is evaluated on 771 multi-site localizer images.
|
|
16:00-17:30, Paper MoPbPo-05.3 | Add to My Program |
Supervised Augmentation: Leverage Strong Annotation for Limited Data |
|
Zheng, Han | Tencent |
Shang, Hong | Tencent AI Lab |
Sun, Zhongqian | Tencent AI Lab |
Fu, Xinghui | Tencent AI Lab |
Yao, Jianhua | National Institutes of Health |
Huang, Junzhou | University of Texas at Arlington |
Keywords: Machine learning, Computer-aided detection and diagnosis (CAD)
Abstract: A previously less exploited way to approach the data scarcity challenge in medical imaging classification is to leverage strong annotation, when available data is limited but the annotation resource is plentiful. Strong annotation at a finer level, such as a region of interest, carries more information than simple image-level annotation and should therefore improve the performance of a classifier. In this work, we explored utilizing strong annotation by developing a new data augmentation method, which improves over common data augmentation (random crop and cutout) by significantly enriching augmentation variety and ensuring valid labels given guidance from strong annotation. Experiments on a real-world application of classifying gastroscopic images demonstrated that our method outperformed state-of-the-art methods by a large margin at all settings of data scarcity. Additionally, our method is flexible enough to integrate with other CNN improvement techniques and to handle data with mixed annotation.
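The core constraint described above, augmenting aggressively while keeping the image-level label valid, can be sketched as follows. This is a hypothetical helper, not the authors' implementation; the function name, parameters, and overlap criterion are all assumptions:

```python
import random

def roi_guided_crop(h, w, roi, crop, min_overlap=0.5, seed=0):
    """Sample a crop-window origin such that the crop retains at least
    `min_overlap` of the annotated ROI area, so the image-level label
    stays valid. `roi` is (left, top, right, bottom); image is h x w."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = roi
    roi_area = (x1 - x0) * (y1 - y0)
    for _ in range(1000):
        cx, cy = rng.randint(0, w - crop), rng.randint(0, h - crop)
        ix = max(0, min(x1, cx + crop) - max(x0, cx))  # overlap width
        iy = max(0, min(y1, cy + crop) - max(y0, cy))  # overlap height
        if ix * iy >= min_overlap * roi_area:
            return cx, cy
    return x0, y0  # fall back to a crop anchored at the ROI corner
```

Compared with an unconstrained random crop, rejected windows are exactly those that would have produced an invalid (lesion-free) positive sample.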
|
|
16:00-17:30, Paper MoPbPo-05.4 | Add to My Program |
Exploiting “uncertain” Deep Networks for Data Cleaning in Digital Pathology |
|
Ponzio, Francesco | Politecnico Di Torino |
Giacomo, Deodato | Politecnico Di Torino |
Macii, Enrico | Politecnico Di Torino |
Di Cataldo, Santa | Politecnico Di Torino |
Ficarra, Elisa | Politecnico Di Torino |
Keywords: Histopathology imaging (e.g. whole slide imaging), Gastrointestinal tract, Computer-aided detection and diagnosis (CAD)
Abstract: With the advent of digital pathology, there has been an increasing interest in providing pathologists with machine learning tools, often based on deep learning, to obtain faster and more robust image assessment. Nonetheless, the accuracy of these tools relies on the generation of large training sets of pre-labeled images. This is typically a challenging and cumbersome process, requiring extensive pre-processing to remove spurious samples that may cause training to fail. Unlike their plain counterparts, which tend to provide overconfident decisions and cannot identify samples they have not been specifically trained for, Bayesian Convolutional Neural Networks provide a reliable measure of classification uncertainty. In this study, we exploit this inherent capability to automate the data cleaning phase of histopathological image assessment. Our experiments on a case study of colorectal cancer image classification demonstrate that our approach can boost the accuracy of downstream classification by at least 15%.
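As a rough illustration of how a classification uncertainty score can flag samples for cleaning, the predictive entropy over stochastic forward passes (e.g. Monte Carlo dropout, one common approximation to a Bayesian CNN; the paper's exact formulation may differ) can be computed as:

```python
import numpy as np

def predictive_entropy(probs):
    """probs: (T, C) softmax outputs from T stochastic forward passes
    of the same input. Returns the entropy of the mean prediction, a
    common Bayesian uncertainty score; high-entropy samples would be
    candidates for removal during data cleaning."""
    mean = probs.mean(axis=0)
    return float(-(mean * np.log(mean + 1e-12)).sum())
```

A confidently classified sample (all passes agreeing on one class) scores near zero, while disagreement between passes pushes the score toward log C.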
|
|
16:00-17:30, Paper MoPbPo-05.5 | Add to My Program |
DRU-Net: An Efficient Deep Convolutional Neural Network for Medical Image Segmentation |
|
Jafari, Mina | The University of Nottingham |
Auer, Dorothee | University of Nottingham |
Francis, Susan | The University of Nottingham |
Garibaldi, Jon | The University of Nottingham |
Chen, Xin | University of Nottingham |
Keywords: Image segmentation, Machine learning
Abstract: Residual networks (ResNet) and densely connected networks (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs), mainly for object classification tasks. In this paper, we propose an efficient network architecture that combines the advantages of both. The proposed method is integrated into an encoder-decoder DCNN model for medical image segmentation. Our method adds additional skip connections compared to ResNet but uses significantly fewer model parameters than DenseNet. We evaluate the proposed method on a public dataset (ISIC 2018 grand challenge) for skin lesion segmentation and a local brain MRI dataset. In comparison with ResNet-based, DenseNet-based and attention network (AttnNet) based methods within the same encoder-decoder network structure, our method achieves significantly higher segmentation accuracy with fewer model parameters than DenseNet and AttnNet. The code is available on GitHub: https://github.com/MinaJf/DRU-net.
|
|
16:00-17:30, Paper MoPbPo-05.6 | Add to My Program |
Object Segmentation with Deep Neural Nets Coupled with a Shape Prior, When Learning from a Training Set of Limited Quality and Small Size |
|
Shigwan, Saurabh | Indian Institute of Technology (IIT) Bombay |
Gaikwad, Akshay | Indian Institute of Technology, Bombay |
Awate, Suyash P | Indian Institute of Technology (IIT), Bombay |
Keywords: Image segmentation, Machine learning, Shape analysis
Abstract: Statistical shape priors can be crucial in segmenting objects when the data differentiates poorly between the object and its surroundings. For reliable learning, while some methods need high-quality expert segmentations, other methods need large training sets, both of which can often be difficult to obtain in clinical deployment or scientific studies. We propose to couple deep neural networks with a pointset-based shape prior that can be learned effectively despite training sets having small size and imperfections in expert curation. The prior relies on sparse Riemannian modeling in Kendall shape space. Results on clinical brain magnetic resonance imaging data show that our framework improves over the state of the art in segmenting the thalamus and the caudate.
|
|
16:00-17:30, Paper MoPbPo-05.7 | Add to My Program |
Robust Detection of Adversarial Attacks on Medical Images |
|
Li, Xin | Wayne State University |
Zhu, Dongxiao | Wayne State University |
Keywords: Machine learning, Pattern recognition and classification, X-ray imaging
Abstract: Although deep learning systems trained on medical images have shown state-of-the-art performance in many clinical prediction tasks, recent studies demonstrate that these systems can be fooled by carefully crafted adversarial images. This has raised concerns about the practical deployment of deep learning based medical image classification systems. To tackle this problem, we propose an unsupervised learning approach to detect adversarial attacks on medical images. Our approach is capable of detecting a wide range of adversarial attacks without knowledge of the attacker and without sacrificing classification performance. More importantly, our approach can be easily embedded into any deep learning-based medical imaging system as a module to improve the system's robustness. Experiments on a public chest X-ray dataset demonstrate the strong performance of our approach in defending against adversarial attacks under both white-box and black-box settings.
|
|
16:00-17:30, Paper MoPbPo-05.8 | Add to My Program |
Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis against Adversarial Attacks |
|
Anand, Deepak | Indian Institute of Technology Bombay |
Tank, Darshan | Indian Institute of Technology Bombay |
Tibrewal, Harshvardhan | Indian Institute of Technology Bombay |
Sethi, Amit | Indian Institute of Technology Bombay |
Keywords: Computer-aided detection and diagnosis (CAD), Image segmentation, Magnetic resonance imaging (MRI)
Abstract: Deep neural networks are being increasingly used for disease diagnosis and lesion localization in biomedical images. However, training deep neural networks not only requires large sets of expensive ground truth (image labels or pixel annotations), but the networks are also susceptible to adversarial attacks. Transfer learning alleviates the former problem to some extent by pre-training the lower layers of a neural network on a large labeled dataset from a different domain (e.g., ImageNet). In transfer learning, the final few layers are trained on the target domain (e.g., chest X-rays), while the pre-trained layers are only fine-tuned or even kept frozen. An alternative to transfer learning is self-supervised learning, in which a supervised task is created by transforming the unlabeled images from the target domain itself. The lower layers are pre-trained to invert the transformation in some sense. In this work, we show that self-supervised learning combined with adversarial training offers additional advantages over transfer learning as well as vanilla self-supervised learning. In particular, the process of adversarial training leads both to a reduction in the amount of supervised data required for comparable accuracy and to natural robustness against adversarial attacks. We support our claims using experiments on two modalities and tasks -- classification of chest X-rays and segmentation of MR images -- as well as two types of adversarial attacks -- PGD and FGSM.
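One of the two attacks evaluated, FGSM, perturbs the input along the sign of the loss gradient. A toy sketch with a linear-logistic "network" standing in for the trained classifier (all names and values are illustrative; in the paper the gradient would come from the actual deep network):

```python
import numpy as np

def logistic_loss_grad(w, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x for a
    linear-logistic classifier p = sigmoid(w . x) with label y."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: step the input by eps in the sign of
    the loss gradient, increasing the loss on the true label."""
    return x + eps * np.sign(grad)
```

Applying one step to x = [0.5, 0.5] with w = [1, -1] and label y = 1 lowers the predicted probability of the true class, i.e. increases the loss, which is the effect adversarial training is meant to resist.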
|
|
MoPbPo-06 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Segmentation – Methods & Applications II |
|
|
Chair: Jeong, Won-Ki | Ulsan National Institute of Science and Technology (UNIST) |
|
16:00-17:30, Paper MoPbPo-06.1 | Add to My Program |
Combining Shape Priors with Conditional Adversarial Networks for Improved Scapula Segmentation in MR Images |
|
Boutillon, Arnaud | IMT Atlantique, LaTIM |
Borotikar, Bhushan | University of Western Brittany |
Burdin, Valerie | IMT Atlantique/Institut Mines Telecom - INSERM U1101 |
Conze, Pierre-Henri | IMT Atlantique, LaTIM |
Keywords: Image segmentation, Bone, Magnetic resonance imaging (MRI)
Abstract: This paper proposes an automatic method for scapula bone segmentation from Magnetic Resonance (MR) images using deep learning. The purpose of this work is to incorporate anatomical priors into a conditional adversarial framework, given a limited amount of heterogeneous annotated images. Our approach encourages the segmentation model to follow the global anatomical properties of the underlying anatomy through a learnt non-linear shape representation while the adversarial contribution refines the model by promoting realistic delineations. These contributions are evaluated on a dataset of 15 pediatric shoulder examinations, and compared to state-of-the-art architectures including UNet and recent derivatives. The significant improvements achieved bring new perspectives for the pre-operative management of musculo-skeletal diseases.
|
|
16:00-17:30, Paper MoPbPo-06.2 | Add to My Program |
A Generic Ensemble Based Deep Convolutional Neural Network for Semi-Supervised Medical Image Segmentation |
|
Li, Ruizhe | The University of Nottingham |
Auer, Dorothee | University of Nottingham |
Wagner, Christian | University of Nottingham |
Chen, Xin | University of Nottingham |
Keywords: Image segmentation, Machine learning
Abstract: Deep learning based image segmentation has achieved state-of-the-art performance in many medical applications such as lesion quantification, organ detection, etc. However, most of these methods rely on supervised learning, which requires a large set of high-quality labeled data. Data annotation is generally an extremely time-consuming process. To address this problem, we propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN). An encoder-decoder based DCNN is initially trained using a few annotated training samples. This initially trained model is then copied into sub-models and improved iteratively using random subsets of unlabeled data with pseudo labels generated from models trained in the previous iteration. The number of sub-models is gradually decreased to one in the final iteration. We evaluate the proposed method on a public grand-challenge dataset for skin lesion segmentation. Our method is able to improve significantly beyond fully supervised model learning by incorporating unlabeled data. The code is available on GitHub.
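The iterative pseudo-labelling idea can be illustrated with a toy nearest-centroid "model" standing in for the DCNN sub-models. This is entirely illustrative (1D data, a margin-based confidence rule, and no sub-model ensemble), not the paper's framework:

```python
import numpy as np

def semi_supervised_centroids(x_lab, y_lab, x_unlab, iters=3, margin=1.0):
    """Fit class centroids on labelled 1D data, pseudo-label unlabelled
    points whose distance margin between the two nearest centroids
    exceeds `margin` (a crude confidence test), and refit on the
    enlarged set -- a toy analogue of iterative pseudo-labelling."""
    classes = np.unique(y_lab)
    cents = np.array([x_lab[y_lab == c].mean() for c in classes])
    for _ in range(iters):
        d = np.abs(x_unlab[:, None] - cents[None, :])  # (N, C) distances
        order = np.sort(d, axis=1)
        conf = order[:, 1] - order[:, 0] > margin      # confident points
        pseudo = classes[d.argmin(axis=1)]
        xs = np.concatenate([x_lab, x_unlab[conf]])
        ys = np.concatenate([y_lab, pseudo[conf]])
        cents = np.array([xs[ys == c].mean() for c in classes])
    return cents
```

With two labelled points (0 and 10) and unlabelled points near each, the centroids shift toward the means of the enlarged pseudo-labelled clusters.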
|
|
16:00-17:30, Paper MoPbPo-06.3 | Add to My Program |
Hybrid Cascaded Neural Network for Liver Lesion Segmentation |
|
Dey, Raunak | University of Georgia |
Hong, Yi | University of Georgia |
Keywords: Image segmentation, Liver, Machine learning
Abstract: Automatic liver lesion segmentation is a challenging task with significant impact on assisting medical professionals in designing effective treatment and planning proper care. In this paper, we propose a cascaded system that combines 2D and 3D convolutional neural networks to segment hepatic lesions effectively. Our 2D network operates on a slice-by-slice basis in the axial orientation to segment the liver and large liver lesions, while we use a 3D network to detect small lesions that are often missed in a 2D segmentation design. We apply this algorithm to the LiTS challenge, obtaining a Dice score per subject of 68.1%, the best among all non-pre-trained models and the second best among published methods. We also perform two-fold cross-validation to reveal the over- and under-segmentation issues in the annotations of the LiTS dataset.
|
|
16:00-17:30, Paper MoPbPo-06.4 | Add to My Program |
Robust Automatic Multiple Landmark Detection |
|
Jain, Arjit | Indian Institute of Technology Bombay |
Powers, Alexander | The University of Iowa |
Johnson, Hans | The University of Iowa |
Keywords: Machine learning, Brain, Magnetic resonance imaging (MRI)
Abstract: Reinforcement learning (RL) has proven to be a powerful tool for automatic single landmark detection in 3D medical images. In this work, we extend RL-based single landmark detection to detect multiple landmarks simultaneously in the presence of missing data, in the form of defaced 3D head MR images. Our proposed technique is both time-efficient and robust to missing data. We demonstrate that adding auxiliary landmarks can improve the accuracy and robustness of estimating primary target landmark locations. The multi-agent deep Q-network (DQN) approach described here detects landmarks within 2mm, even in the presence of missing data.
|
|
16:00-17:30, Paper MoPbPo-06.5 | Add to My Program |
Neural Network Segmentation of Cell Ultrastructure Using Incomplete Annotation |
|
Francis, John | University of Southern California |
Wang, Hongzhi | IBM Almaden Research Center |
White, Kate | University of Southern California |
Syeda-Mahmood, Tanveer | IBM Almaden Research Center |
Stevens, Raymond | University of Southern California |
Keywords: Image segmentation, Single cell & molecule detection, Computed tomography (CT)
Abstract: The pancreatic beta cell is an important target in diabetes research. For scalable modeling of beta cell ultrastructure, we investigate automatic segmentation of whole-cell imaging data acquired through soft X-ray tomography. During the course of the study, both complete and partial ultrastructure annotations were produced manually for different subsets of the data. To use existing annotations more effectively, we propose a method that enables the application of partially labeled data to full-label segmentation. For experimental validation, we apply our method to train a convolutional neural network with a set of 12 fully annotated and 12 partially annotated datasets, and show promising improvement over standard training that uses fully annotated data alone.
|
|
16:00-17:30, Paper MoPbPo-06.6 | Add to My Program |
Radiomic Feature Stability Analysis Based on Probabilistic Segmentations |
|
Haarburger, Christoph | RWTH Aachen University |
Schock, Justus | RWTH Aachen University |
Truhn, Daniel | University Hospital Aachen |
Weitz, Philippe | RWTH Aachen University |
Mueller-Franzes, Gustav | RWTH Aachen University |
Weninger, Leon | RWTH Aachen University |
Merhof, Dorit | RWTH Aachen University |
Keywords: Pattern recognition and classification, Liver, Computed tomography (CT)
Abstract: Identifying image features that are robust with respect to segmentation variability and domain shift is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyze radiomics feature stability based on probabilistic segmentations. Based on a public lung cancer dataset, we generate an arbitrary number of plausible segmentations using a Probabilistic U-Net. From these segmentations, we extract a high number of plausible feature vectors for each lung tumor and analyze feature variance with respect to the segmentations. Our results suggest that there are groups of radiomic features that are more (e.g. statistics features) and less (e.g. gray-level size zone matrix features) robust against segmentation variability. Finally, we demonstrate that segmentation variance impacts the performance of a prognostic lung cancer survival model and propose a new and potentially more robust radiomics feature selection workflow.
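A minimal way to score feature robustness across sampled segmentations is the per-feature coefficient of variation; this is illustrative only, and the paper's analysis may use other statistics:

```python
import numpy as np

def feature_stability(feats):
    """feats: (S, F) radiomic feature vectors extracted from S sampled
    segmentations of one tumor. Returns the per-feature coefficient of
    variation (std / |mean|); lower values indicate features more
    robust to segmentation variability."""
    return feats.std(axis=0) / (np.abs(feats.mean(axis=0)) + 1e-12)
```

Features whose score stays low across many tumors would be the candidates a stability-aware feature selection workflow keeps.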
|
|
16:00-17:30, Paper MoPbPo-06.7 | Add to My Program |
Deep Learning and Unsupervised Fuzzy C-Means Based Level-Set Segmentation for Liver Tumor |
|
Zhang, Yue | Southern University of Science and Technology |
Wu, Jiong | Sun Yat-Sen University |
Jiang, Benxiang | Southern University of Science and Technology |
Ji, Dongcen | Southern University of Science and Technology |
Chen, Yifan | The University of Waikato |
Wu, Ed X. | The University of Hong Kong |
Tang, Xiaoying | Southern University of Science and Technology |
Keywords: Image segmentation, Machine learning, Liver
Abstract: In this paper, we propose and validate a novel level-set method integrating an enhanced edge indicator and an automatically derived initial curve for CT based liver tumor segmentation. First, a 2D U-net is used to localize the liver and a 3D fully convolutional network (FCN) is used to refine the liver segmentation as well as to localize the tumor. The refined liver segmentation is used to remove non-liver tissues for subsequent tumor segmentation. Given that the tumor segmentation obtained from the aforementioned 3D FCN is typically imperfect, we adopt a novel level-set method to further improve the tumor segmentation. Specifically, the probabilistic distribution of the liver tumor is estimated using fuzzy c-means clustering and then utilized to enhance the object indication function used in the level-set method. The proposed segmentation pipeline showed outstanding performance in terms of both liver and liver tumor segmentation.
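The fuzzy c-means step can be sketched as follows: soft cluster memberships over intensities give the kind of probability map that could feed the level-set edge indicator. This is a generic two-cluster FCM sketch with an assumed min/max initialization, not the paper's exact pipeline:

```python
import numpy as np

def fcm_memberships(x, iters=20, m=2.0):
    """Two-cluster fuzzy c-means on 1D intensities `x`. Returns the
    (N, 2) soft memberships and the two cluster centers. Memberships
    of the tumor-intensity cluster act as a probability map."""
    c = np.array([x.min(), x.max()], dtype=float)      # init centers
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-9     # (N, 2) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))             # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)              # normalize memberships
        c = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u, c
```

On well-separated intensities the centers converge to the two group means, and each voxel's membership in the brighter cluster is the soft tumor probability.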
|
|
16:00-17:30, Paper MoPbPo-06.8 | Add to My Program |
Lymphoma Segmentation in PET Images Based on Multi-View and Conv3D Fusion Strategy |
|
Hu, Haigen | Zhejiang University of Technology |
Shen, Leizhao | Zhejiang University of Technology |
Zhou, Tongxue | University of Rouen Normandy, LITIS EA 4108, 76183 Rouen, France |
Decazes, Pierre | University of Rouen Normandy, LITIS EA 4108, 76183 Rouen, France |
Vera, Pierre | Centre Henri Becquerel |
Ruan, Su | Universite De Rouen |
Keywords: Nuclear imaging (e.g. PET, SPECT), Whole-body, Image segmentation
Abstract: Due to the poor image information of lymphoma in PET images, it remains a challenge to segment them correctly. In this work, a fusion strategy combining 2D multi-view and 3D networks is proposed to take full advantage of the available information for segmentation. First, we train three 2D network models from three orthogonal views based on 2D ResUnet, and train a 3D network model using volumetric data based on 3D ResUnet. Then the obtained preliminary results (three 2D results and one 3D result) are fused with the original volumetric data using a Conv3D fusion strategy. Finally, a series of experiments is conducted on a lymphoma dataset, and the results show that the proposed multi-view lymphoma co-segmentation scheme is promising and can improve overall performance by combining 2D multi-view and 3D networks.
|
|
16:00-17:30, Paper MoPbPo-06.9 | Add to My Program |
Liver Guided Pancreas Segmentation |
|
Zhang, Yue | Southern University of Science and Technology |
Wu, Jiong | Sun Yat-Sen University |
Wang, Simao | Southern University of Science and Technology |
Liu, Yilong | The University of Hong Kong |
Chen, Yifan | The University of Waikato |
Wu, Ed X. | The University of Hong Kong |
Tang, Xiaoying | Southern University of Science and Technology |
Keywords: Image segmentation, Machine learning, Computed tomography (CT)
Abstract: In this paper, we propose and validate a location-prior-guided automatic pancreas segmentation framework based on a 3D convolutional neural network (CNN). To guide pancreas segmentation, the centroid of the pancreas, used to determine its bounding box, is calculated from the location of the liver, which is first segmented by a 2D CNN. A linear relationship between the centroids of the pancreas and the liver is proposed. After that, a 3D CNN whose input is the bounding box of the pancreas is employed to obtain the final segmentation. A publicly accessible pancreas dataset of 54 subjects is used to quantify the performance of the proposed framework. Experimental results reveal outstanding performance of the proposed method in terms of both computational efficiency and segmentation accuracy compared to non-location-guided segmentation. Specifically, the running time is 15 times faster and the segmentation accuracy in terms of Dice is higher by 4.29% (76.42% versus 80.71%).
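The proposed linear relationship between the liver and pancreas centroids can be sketched as a per-axis least-squares fit; the function names and data layout here are assumptions, and the coefficients are learned from whatever training centroids are passed in:

```python
import numpy as np

def fit_centroid_map(liver_c, panc_c):
    """Fit the per-axis linear map pancreas = a * liver + b from
    training centroids, each of shape (N, 3)."""
    a, b = np.empty(3), np.empty(3)
    for k in range(3):
        a[k], b[k] = np.polyfit(liver_c[:, k], panc_c[:, k], 1)
    return a, b

def predict_centroid(a, b, liver_c):
    """Predict a pancreas centroid (and hence its bounding box
    location) from a new liver centroid."""
    return a * liver_c + b
```

At test time the liver centroid from the 2D CNN is pushed through this map to place the pancreas bounding box before the 3D CNN runs.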
|
|
MoPbPo-07 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Electron Microscopy |
|
|
|
16:00-17:30, Paper MoPbPo-07.1 | Add to My Program |
Adversarial-Prediction Guided Multi-Task Adaptation for Semantic Segmentation of Electron Microscopy Images |
|
Yi, Jiajin | Huaqiao University |
Yuan, Zhimin | Huaqiao University |
Peng, Jialin | Huaqiao University |
Keywords: Image segmentation, Microscopy - Electron, Cells & molecules
Abstract: Semantic segmentation is an essential step in electron microscopy (EM) image analysis. Although supervised models have achieved significant progress, the need for labor-intensive pixel-wise annotation is a major limitation. To complicate matters further, supervised learning models may not generalize well on a novel dataset due to domain shift. In this study, we introduce an adversarial-prediction guided multi-task network to learn the adaptation of a model well trained on a source domain for use on an unlabeled target domain. Since no label is available on the target domain, we learn an encoding representation not only for the supervised segmentation on the source domain but also for unsupervised reconstruction of the target data. To improve the discriminative ability of the model on the unlabeled target domain, we further guide the representation learning by multi-level adversarial learning in the semantic prediction space. Comparisons and an ablation study on a public benchmark demonstrated state-of-the-art performance and the effectiveness of our approach.
|
|
16:00-17:30, Paper MoPbPo-07.2 | Add to My Program |
Synaptic Partner Assignment Using Attentional Voxel Association Networks |
|
Turner, Nicholas | Princeton University |
Lee, Kisuk | Massachusetts Institute of Technology |
Lu, Ran | Princeton University |
Wu, Jinpeng | Princeton University |
Ih, Dodam | Princeton University |
Seung, H. Sebastian | Princeton University |
Keywords: Microscopy - Electron, Brain, Machine learning
Abstract: Connectomics aims to recover a complete set of synaptic connections within a dataset imaged by volume electron microscopy. Many systems have been proposed for locating synapses, and recent research has included a way to identify the synaptic partners that communicate at a synaptic cleft. We reframe the problem of identifying synaptic partners as directly generating the mask of the synaptic partners from a given cleft. We train a convolutional network to perform this task. The network takes the local image context and a binary mask representing a single cleft as input. It is trained to produce two binary output masks: one which labels the voxels of the presynaptic partner within the input image, and another similar labeling for the postsynaptic partner. The cleft mask acts as an attentional gating signal for the network. We find that an implementation of this approach performs well on a dataset of mouse somatosensory cortex, and evaluate it as part of a combined system to predict both clefts and connections.
|
|
16:00-17:30, Paper MoPbPo-07.3 | Add to My Program |
Caesar: Segment-Wise Alignment Method for Solving Discontinuous Deformations |
|
Popovych, Sergiy | Princeton University |
Bae, J. Alexander | Princeton University |
Seung, H. Sebastian | Princeton University |
Keywords: Image registration, Microscopy - Electron, Brain
Abstract: Images obtained from serial section electron microscopy can contain defects that create discontinuous tissue deformation. Fixing such defects during image registration is especially challenging, as classical block matching registration techniques assume smooth motion within each block, and ConvNet based registration techniques must rely on a smoothness assumption during training. We propose Caesar, a divide-and-conquer technique that breaks registered images into segments, such that most of the discontinuity is confined to segment boundaries. Then, we align the segments independently and stitch the results back together. We provide extensive experimental evaluation on brain tissue serial section microscopy data that shows that segment-wise alignment reduces the average misalignment area around defects by 6-10x.
|
|
16:00-17:30, Paper MoPbPo-07.4 | Add to My Program |
EM-Net: Centerline-Aware Mitochondria Segmentation in EM Images Via Hierarchical View-Ensemble Convolutional Network |
|
Yuan, Zhimin | Huaqiao University |
Yi, Jiajin | Huaqiao University |
Luo, Zhengrong | Huaqiao University |
Jia, Zhongdao | Huaqiao University |
Peng, Jialin | Huaqiao University |
Keywords: Image segmentation, Microscopy - Electron, Cells & molecules
Abstract: Although deep encoder-decoder networks have achieved astonishing performance for mitochondria segmentation from electron microscopy (EM) images, they still produce coarse segmentations with discontinuities and false positives. Besides, the need for labor-intensive pixel-wise annotation of large 3D volumes and the huge memory overhead of 3D models are also major limitations. To address these problems, we introduce a multi-task network named EM-Net, which includes an auxiliary centerline detection task to account for the shape information of mitochondria represented by centerlines. Therefore, the centerline detection sub-network is able to enhance the accuracy and robustness of the segmentation task, especially when only a small set of annotated data is available. To achieve a light-weight 3D network, we introduce a novel hierarchical view-ensemble convolution module to reduce the number of parameters and facilitate multi-view information aggregation. Validation on a public benchmark showed state-of-the-art performance by EM-Net. Even with significantly reduced training data, our method still showed quite promising results.
|
|
MoPbPo-08 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Eye and Retinal Imaging |
|
|
Chair: Rohr, Karl | Heidelberg University, DKFZ Heidelberg |
|
16:00-17:30, Paper MoPbPo-08.1 | Add to My Program |
Inception Capsule Network for Retinal Blood Vessel Segmentation and Centerline Extraction |
|
Kromm, Christian | University of Heidelberg, DKFZ Heidelberg |
Rohr, Karl | Heidelberg University, DKFZ Heidelberg |
Keywords: Image segmentation, Machine learning, Retinal imaging
Abstract: Automatic segmentation and centerline extraction of retinal blood vessels from fundus image data is crucial for early detection of retinal diseases. We have developed a novel deep learning method for segmentation and centerline extraction of retinal blood vessels which is based on the Capsule network in combination with the Inception architecture. Compared to state-of-the-art deep convolutional neural networks, our method has much fewer parameters due to its shallow architecture and generalizes well without using data augmentation. We performed a quantitative evaluation using the DRIVE dataset for both vessel segmentation and centerline extraction. Our method achieved state-of-the-art performance for vessel segmentation and outperformed existing methods for centerline extraction.
|
|
16:00-17:30, Paper MoPbPo-08.2 | Add to My Program |
Sparse-GAN: Sparsity-Constrained Generative Adversarial Network for Retinal OCT Image Anomaly Detection |
|
Zhou, Kang | ShanghaiTech University |
Gao, Shenghua | ShanghaiTech University |
Cheng, Jun | Institute of Biomedical Engineering, Chinese Academy of Sciences |
Gu, Zaiwang | Southern University of Science and Technology |
Fu, Huazhu | Inception Institute of Artificial Intelligence |
Tu, Zhi | ShanghaiTech University |
Yang, Jianlong | Cixi Institute of Biomedical Engineering, Chinese Academy of Sci |
Zhao, Yitian | Chinese Academy of Sciences |
Liu, Jiang | Southern University of Science and Technology |
Keywords: Optical coherence tomography, Machine learning, Image reconstruction - analytical & iterative methods
Abstract: With the development of convolutional neural networks, deep learning has shown success in retinal disease detection from optical coherence tomography (OCT) images. However, deep learning often relies on large-scale labelled data for training, which is oftentimes challenging, especially for diseases with low occurrence. Moreover, a deep learning system trained on a dataset with one or a few diseases is unable to detect other unseen diseases, which limits the practical usage of the system in disease screening. To address this limitation, we propose a novel anomaly detection framework termed Sparsity-constrained Generative Adversarial Network (Sparse-GAN) for disease screening where only healthy data are available in the training set. The contributions of Sparse-GAN are two-fold: 1) the proposed Sparse-GAN predicts anomalies in the latent space rather than at the image level; 2) Sparse-GAN is constrained by a novel Sparsity Regularization Net. Furthermore, in light of the role of lesions in disease screening, we propose leveraging an anomaly activation map to show a heatmap of lesions. We evaluate our proposed Sparse-GAN on a publicly available dataset, and the results show that the proposed method outperforms the state-of-the-art methods.
|
|
16:00-17:30, Paper MoPbPo-08.3 | Add to My Program |
Learning to Segment Vessels from Poorly Illuminated Fundus Images |
|
Nasery, Vibha | Regeneron Pharmaceuticals |
Soundararajan, Krishna Bairavi | Carnegie Mellon University |
Galeotti, John | Carnegie Mellon University |
Keywords: Retinal imaging, Machine learning, Image segmentation
Abstract: Segmentation of retinal vessels is important for determining various disease conditions, but deep learning approaches have been limited by the unavailability of large, publicly available, annotated datasets. This paper addresses the problem and analyses the performance of the U-Net architecture on the DRIVE and RIM-ONE datasets. A different approach to data augmentation using vignetting masks is presented to create more annotated fundus data. Unlike most prior efforts that attempt to transform poor images to match the images in a training set, our approach takes better quality images (which have good expert labels) and transforms them to resemble poor quality target images. We apply substantial vignetting masks to the DRIVE dataset and then train a U-Net on the resulting lower quality images (using the corresponding expert label data). We quantitatively show that our approach leads to better generalized networks, and we show qualitative performance improvements on RIM-ONE images (which lack expert labels).
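The vignetting degradation can be sketched as a radial mask that darkens a good-quality image toward its borders, so that its expert labels remain usable for training on poorly illuminated data. This is a plausible sketch of the described augmentation; the falloff profile and `strength` parameter are assumptions:

```python
import numpy as np

def apply_vignette(img, strength=0.8):
    """Darken a 2D image toward its borders with a quadratic radial
    vignetting mask: factor 1 at the centre, 1 - strength at the
    corners. The expert label for `img` is left untouched."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)   # radius from centre
    mask = 1.0 - strength * (r / r.max()) ** 2
    return img * mask
```

Training pairs then consist of the vignetted image and the original (unchanged) vessel annotation.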
|
|
16:00-17:30, Paper MoPbPo-08.4 | Add to My Program |
CTF-Net: Retinal Vessel Segmentation Via Deep Coarse-To-Fine Supervision Network |
|
Wang, Kun | Chongqing University |
Zhang, Xiaohong | Chongqing University |
Huang, Sheng | Chongqing University |
Wang, Qiuli | Chongqing Univeristy |
Chen, Feiyu | Chongqing University |
Keywords: Machine learning, Retinal imaging, Vessels
Abstract: Retinal blood vessel structure plays an important role in the early diagnosis of diabetic retinopathy, a cause of blindness globally. However, precise segmentation of retinal vessels is often extremely challenging due to the low contrast and noise of the capillaries. In this paper, we propose a novel deep coarse-to-fine supervision network (CTF-Net) to solve this problem. The model consists of two U-shaped architectures (coarse and fine segNets). The coarse segNet learns to predict a probability retina map from input patches, while the fine segNet refines the predicted map. To gain more paths for preserving multi-scale, rich deep feature information, we design an end-to-end training network instead of a multi-stage learning framework to segment the retinal vessels from coarse to fine. Furthermore, to improve feature representation and reduce the number of model parameters, we introduce a novel feature augmentation module (FAM-residual block). Experimental results confirm that our method achieves state-of-the-art performance on the popular datasets DRIVE, CHASE_DB1 and STARE.
|
|
16:00-17:30, Paper MoPbPo-08.5 | Add to My Program |
Lesion-Aware Segmentation Network for Atrophy and Detachment of Pathological Myopia on Fundus Images |
|
Guo, Yan | PingAn Technology (Shenzhen) Co., Ltd., Shenzhen, China |
Wang, Rui | PingAn Technology (Shenzhen) Co., Ltd |
Zhou, Xia | Ping an Technology (Shenzhen) Co. Ltd., Shenzhen, China |
Liu, Yang | PingAn Technology (Shenzhen) Co., Ltd |
Wang, Lilong | PingAn Technology |
Lv, Chuanfeng | PingAn Tech |
Lv, Bin | China Academy of Telecommunication Research of Ministry of Indus |
Xie, Guotong | PingAn Tech |
Keywords: Eye, Retinal imaging, Image segmentation
Abstract: Pathological myopia, which can potentially cause loss of vision, is considered one of the common conditions that threaten human visual health throughout the world. Identification of retinal lesions, including atrophy and detachment, is meaningful because it provides ophthalmologists with a quantified reference for accurate diagnosis and treatment. However, segmentation of lesions on fundus photography remains challenging because of the variability of the data and the complexity of lesion shapes. Fundus images may vary from each other distinctly as they are taken by different devices in different surroundings. False positive predictions are also inevitable on negative samples. In this paper, we propose a lesion-aware segmentation network to segment atrophy and detachment on fundus images. Based on the existing architecture of a paired encoder and decoder, we introduce three innovations. First, our proposed network is aware of the existence of lesions through an extra classification branch. Second, a feature fusion module is integrated into the decoder so that the output node sufficiently absorbs features at various scales. Last, the network is trained with an objective function, called edge overlap rate, that boosts the model's sensitivity to lesion edges. The proposed network won the PALM challenge at ISBI 2019 by a large margin, which can be seen as evidence of its effectiveness. Our team, PingAn Smart Health, leads the leaderboards in all metrics in the scope of lesion segmentation. Permission to use the dataset outside the PALM challenge was issued by the sponsor.
|
|
16:00-17:30, Paper MoPbPo-08.6 | Add to My Program |
Retinal Vessel Segmentation by Probing Adaptive to Lighting Variations |
|
Noyel, Guillaume | University of Strathclyde |
Vartin, Christine | Hospices Civils De Lyon |
Boyle, Peter | University of Strathclyde |
Kodjikian, Laurent | Croix-Rousse University Hospital, Hospices Civils De Lyon |
Keywords: Image segmentation, Retinal imaging, Eye
Abstract: We introduce a novel method to extract the vessels in eye fundus images which is adaptive to lighting variations. In the Logarithmic Image Processing framework, a 3-segment probe detects the vessels by probing the topographic surface of an image from below. A map of contrasts between the probe and the image allows the vessels to be detected by thresholding. On a lowly contrasted image, results show that our method extracts the vessels better than another state-of-the-art method. On a highly contrasted image database (DRIVE) with a reference, our method has an accuracy of 0.9454, which is similar to or better than three state-of-the-art methods and below three others. The three best methods have a higher accuracy than a manual segmentation by another expert. Importantly, our method automatically adapts to the lighting conditions of the image acquisition.
|
|
16:00-17:30, Paper MoPbPo-08.7 | Add to My Program |
Dense Correlation Network for Automated Multi-Label Ocular Disease Detection with Paired Color Fundus Photographs |
|
Li, Cheng | Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzh |
Ye, Jin | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
He, Junjun | Shanghai Jiao Tong University |
Wang, Shanshan | Shenzhen Institutes of Advanced Technology |
Qiao, Yu | Guangdong Key Lab of Computer Vision & Virtual Reality, Shenzhen |
Gu, Lixu | Shanghai Jiaotong University |
Keywords: Retinal imaging, Eye, Computer-aided detection and diagnosis (CAD)
Abstract: In ophthalmology, color fundus photography is an economic and effective tool for early-stage ocular disease screening. Since the left and right eyes are highly correlated, we utilize paired color fundus photographs (CFPs) for our task of automated multi-label ocular disease detection. We propose a Dense Correlation Network (DCNet) to exploit the dense spatial correlations between the paired CFPs. Specifically, DCNet is composed of a backbone Convolutional Neural Network (CNN), a Spatial Correlation Module (SCM), and a classifier. The SCM captures the dense correlations between the features extracted from the paired CFPs in a pixel-wise manner and fuses the relevant feature representations. Experiments on a public dataset show that the proposed DCNet achieves better performance than the respective baselines regardless of the backbone CNN architecture.
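A pixel-wise correlation-and-fusion step of the kind the SCM performs might be sketched as follows. This is a toy, non-local-attention-style fusion under our own assumptions; the actual SCM design may differ.

```python
import numpy as np

def spatial_correlation_fuse(f_left, f_right):
    """Toy sketch of dense pixel-wise correlation between paired feature maps.

    f_left, f_right: (C, H, W) feature maps from the two eyes' photographs.
    Every left-map position is correlated with every right-map position, and
    the resulting attention weights pull relevant right-eye features into the
    left-eye representation. Illustrative only; the real module may differ.
    """
    c, h, w = f_left.shape
    L = f_left.reshape(c, h * w)            # (C, HW)
    R = f_right.reshape(c, h * w)
    corr = L.T @ R                          # (HW, HW) dense correlation map
    # softmax over right-map positions -> attention weights per left pixel
    corr -= corr.max(axis=1, keepdims=True)
    attn = np.exp(corr)
    attn /= attn.sum(axis=1, keepdims=True)
    fused = (R @ attn.T).reshape(c, h, w)   # right features aggregated per left pixel
    return f_left + fused                   # residual fusion
```

In a full model this operation would act on CNN feature maps of both eyes before the classifier; here plain NumPy stands in for the deep learning framework.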
|
|
16:00-17:30, Paper MoPbPo-08.8 | Add to My Program |
A Data-Aware Deep Supervised Method for Retinal Vessel Segmentation |
|
Mishra, Suraj | University of Notre Dame |
Chen, Danny Z. | University of Notre Dame |
Hu, X. Sharon | University of Notre Dame |
Keywords: Image segmentation, Retinal imaging, Machine learning
Abstract: Accurate vessel segmentation in retinal images is vital for retinopathy diagnosis and analysis. However, the presence of very thin vessels in low image contrast, along with pathological conditions (e.g., capillary dilation or microaneurysms), renders the segmentation task difficult. In this work, we present a novel approach for retinal vessel segmentation focusing on improving thin vessel segmentation. We develop a deep convolutional neural network (CNN) which exploits the specific characteristics of the input retinal data to use deep supervision for improved segmentation accuracy. In particular, we use the average input retinal vessel width and match it with the layer-wise effective receptive fields (LERF) of the CNN to determine the location of the auxiliary supervision. This helps the network pay more attention to thin vessels, which it would otherwise 'ignore' during training. We verify our method on three public retinal vessel segmentation datasets (DRIVE, CHASE_DB1, and STARE), achieving better sensitivity (10.18% average increase) than state-of-the-art methods while maintaining comparable specificity, accuracy, and AUC.
|
|
16:00-17:30, Paper MoPbPo-08.9 | Add to My Program |
Classification of Ocular Diseases Employing Attention-Based Unilateral and Bilateral Feature Weighting and Fusion |
|
He, Junjun | Shanghai Jiao Tong University |
Li, Cheng | Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzh |
Ye, Jin | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
Wang, Shanshan | Shenzhen Institutes of Advanced Technology |
Qiao, Yu | Guangdong Key Lab of Computer Vision & Virtual Reality, Shenzhen |
Gu, Lixu | Shanghai Jiaotong University |
Keywords: Retinal imaging, Eye, Computer-aided detection and diagnosis (CAD)
Abstract: Early diagnosis of ocular diseases is key to preventing severe vision damage and other healthcare-related issues. Color fundus photography is a commonly utilized screening tool. However, due to the subtle symptoms of early-stage ocular diseases, it is difficult to diagnose accurately from fundus photographs. To this end, we propose an attention-based unilateral and bilateral feature weighting and fusion network (AUBNet) to automatically classify patients into the corresponding disease categories. Specifically, AUBNet is composed of a feature extraction module (FEM), a feature fusion module (FFM), and a classification module (CFM). The FEM extracts two feature vectors from the bilateral fundus photographs of a patient independently. With the FFM, two levels of feature weighting and fusion are performed to prepare the feature representations of the bilateral eyes. Finally, multi-label classifications are conducted by the CFM. Our model achieves competitive results on a real-life large-scale dataset.
|
|
16:00-17:30, Paper MoPbPo-08.10 | Add to My Program |
Automatic Classification of Artery/Vein from Single Wavelength Fundus Images |
|
Raj, Kevin | Indian Institute of Science, Bangalore, India |
M, Aniketh | USC Viterbi, California, USA |
Harish Kumar, J. R. | Indian Institute of Science and Manipal Institute of Technology |
Seelamantula, Chandra Sekhar | Indian Institute of Science, Bangalore |
Keywords: Classification, Retinal imaging, Eye
Abstract: Vessels are regions of prominent interest in retinal fundus images. Classification of vessels into arteries and veins can be used to assess the oxygen saturation level, which is one of the indicators of the risk of stroke, the condition of diabetic retinopathy, and hypertension. In practice, dual-wavelength images are obtained to emphasize arteries and veins separately. In this paper, we propose an automated technique for the classification of arteries and veins from single-wavelength fundus images using convolutional neural networks employing a ResNet-50 backbone and squeeze-and-excite blocks. We formulate the artery-vein identification problem as a three-class classification problem where each pixel is labeled as belonging to an artery, a vein, or the background. The proposed method is trained on publicly available fundus image datasets, namely RITE, LES-AV, and IOSTAR, and cross-validated on the HRF dataset. The standard performance metrics, namely average sensitivity, specificity, accuracy, and area under the curve, are 92.8%, 93.4%, 93.4%, and 97.5%, respectively, which are superior to those of state-of-the-art methods.
|
|
MoPbPo-09 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Histopathology II |
|
|
Chair: Descombes, Xavier | INRIA |
|
16:00-17:30, Paper MoPbPo-09.1 | Add to My Program |
Compact Representation Learning Using Class Specific Convolution Coders - Application to Medical Image Classification |
|
Upadhyay, Uddeshya | Indian Institute of Technology (IIT) Bombay |
Banerjee, Biplab | Indian Institute of Technology - Bombay |
Keywords: Histopathology imaging (e.g. whole slide imaging), Pattern recognition and classification, Machine learning
Abstract: Medical image classification using deep learning techniques relies on highly curated datasets, which are difficult and expensive to obtain in the real world due to the significant expertise required to annotate them. We propose a novel framework called Class Specific Convolutional Coders (CSCC) to tackle the problem of learning a highly discriminative, compact and non-redundant feature space from a relatively small amount of labelled images. We design separate attention-driven convolutional-network-based feature extractors for the categories. These feature learning modules are further combined so as to make the whole image recognition system end-to-end trainable. Results on different medical image classification tasks show the advantages of our contributions, where our proposed method outperforms benchmark supervised deep convolutional networks (CNNs) trained from scratch.
|
|
16:00-17:30, Paper MoPbPo-09.2 | Add to My Program |
An Effective Deep Learning Architecture Combination for Tissue Microarray Spots Classification of H&E Stained Colorectal Images |
|
Nguyen, Huu-Giao | Institute of Pathology, University of Bern |
Blank, Annika | Institute of Pathology, University of Bern |
Lugli, Alessandro | Institute of Pathology, University of Bern |
Zlobec, Inti | Institute of Pathology, University of Bern |
Keywords: Histopathology imaging (e.g. whole slide imaging), Classification, Gastrointestinal tract
Abstract: Tissue microarray (TMA) assessment of histomorphological biomarkers contributes to more accurate prediction of outcome for patients with colorectal cancer (CRC), a common disease. Unfortunately, a typical problem with the use of TMAs is that the material contained in each TMA spot changes as the TMA block is cut repeatedly. A re-classification of the content within each spot would be necessary for accurate biomarker evaluation. The major challenges to this end, however, lie in the high heterogeneity of TMA quality and of tissue characterization in structure, size, appearance and tissue type. In this work, we propose an end-to-end framework using deep learning for TMA spot classification into three classes: tumor, normal epithelium and other tissue types. It includes detection of TMA spots in an image, extraction of overlapping tiles from each TMA spot image, and classification integrated into two effective deep learning architectures: a convolutional neural network (CNN) and a Capsule Network with prior information on the intended tissue type. A set of digitized H&E stained images from 410 CRC patients with clinicopathological information is used for validation of the proposed method. We show experimentally that our approach brings state-of-the-art performance on several relevant CRC H&E tissue classification tasks and that these results are promising for use in clinical practice.
|
|
16:00-17:30, Paper MoPbPo-09.3 | Add to My Program |
Unsupervised Learning of Contextual Information in Multiplex Immunofluorescence Tissue Cytometry |
|
Jiménez-Sánchez, Daniel | Center for Applied Medical Research |
Ariz, Mikel | IDISNA, Ciberonc and Solid Tumors and Biomarkers Program, Center |
Ortiz-de-Solorzano, Carlos | Centre for Applied Medical Research |
Keywords: Histopathology imaging (e.g. whole slide imaging), Single cell & molecule detection, Machine learning
Abstract: New machine learning models designed to capture the histopathology of tissues should not only account for the phenotype and morphology of the cells, but also learn the complex spatial relationships between them. To achieve this, we represent the tissue as an interconnected graph, where previously segmented cells become nodes of the graph. The relationships between cells are then learned and embedded into a low-dimensional vector using a Graph Neural Network. We name this representation-learning-based strategy NARO (NAtural Representation of biological Objects), a fully unsupervised method that learns how to optimally encode cell phenotypes, morphologies, and cell-to-cell interactions from histological tissues labeled using multiplex immunohistochemistry. To validate NARO, we first use synthetically generated tissues to show that NARO’s generated embeddings can be used to cluster cells into meaningful, distinct anatomical regions without prior knowledge of the constituent cell types and interactions. We then test NARO on real multispectral images of human lung adenocarcinoma tissue samples to show that the generated embeddings can indeed be used to automatically infer regions with different histopathological characteristics.
|
|
16:00-17:30, Paper MoPbPo-09.4 | Add to My Program |
Weakly-Supervised Balanced Attention Network for Gastric Pathology Image Localization and Classification |
|
Zhu, Zhonghang | Xiamen University |
Ding, Xin | Zhongshan Hospital Xiamen University |
Wang, Liansheng | Xiamen University |
Zhang, Defu | Xiamen University |
Keywords: Histopathology imaging (e.g. whole slide imaging), Gastrointestinal tract, Classification
Abstract: Classification and localization of gastric cancer in pathological images are critical for early diagnosis and therapy of related diseases. Clinically, scanning a pathological image takes a long time due to its high resolution and blurry boundaries, which motivates automatic localization of cancer regions over the pathological image. In this paper, a weakly supervised model is proposed to classify and localize the gastric cancer region in pathological images using only image-level labels. We propose channel-wise attention (CA) and spatial-wise attention (SA) modules to balance the features (feature balanced module, FBM) and incorporate a dropout attention mechanism (dropout attention module, DAM) into our model to enhance feature significance. Based on the classification model, we extract the optimal feature map to generate the localization bounding box with a cross attention module. Experiments on a large gastric dataset indicate that our method outperforms other algorithms in classification and localization accuracy, which demonstrates the effectiveness of our method.
|
|
16:00-17:30, Paper MoPbPo-09.5 | Add to My Program |
Adversarial-Based Domain Adaptation Networks for Unsupervised Tumour Detection in Histopathology |
|
Figueira, Gonçalo | Queen Mary University of London |
Wang, Yaqi | Hangzhou Dianzi University |
Sun, Lingling | Hangzhou Dianzi University |
Zhou, Huiyu | University of Leicester |
Zhang, Qianni | Queen Mary University of London |
Keywords: Histopathology imaging (e.g. whole slide imaging), Pattern recognition and classification, Machine learning
Abstract: Developing effective deep learning models for histopathology applications is challenging, as the performance depends on large amounts of labelled training data, which is often unavailable. In this work, we address this issue by leveraging previously annotated histopathology images from unrelated source domains to build a model for the unlabelled target domain. Specifically, we propose the adversarial-based domain adaptation networks (ABDA-Net) for performing the tumour detection task in histopathology in a purely unsupervised manner. This methodology successfully promoted the alignment of the source and target feature distributions among independent datasets of three tumour types - Breast, Lung and Colon - to achieve an improvement of at least 17.51% in accuracy and 18.22% in area under the curve (AUC) when compared to a classifier trained on the source data only.
|
|
16:00-17:30, Paper MoPbPo-09.6 | Add to My Program |
Microsatellite Instability Prediction of Uterine Corpus Endometrial Carcinoma Based on H&E Histology Whole-Slide Imaging |
|
Wang, Tongxin | Indiana University Bloomington |
Lu, Weijia | Tencent |
Yang, Fan | Tencent AI Lab |
Liu, Li | Nanfang Hospital, Southern Medical University |
Dong, Zhong-Yi | Nanfang Hospital, Southern Medical University |
Tang, Weimin | Tencent Healthcare |
Chang, Jia | Tencent |
Huan, Wenjing | Tencent Healthcare |
Huang, Kun | Indiana University School of Medicine |
Yao, Jianhua | National Institutes of Health |
Keywords: Histopathology imaging (e.g. whole slide imaging), Machine learning
Abstract: Microsatellite instability (MSI) is an important clinical marker for various types of cancer and is related to patients' prognosis and response to immunotherapy. Currently, identifying microsatellite status relies on genetic tests, which are not widely accessible for every patient. We propose a novel pipeline to predict MSI directly from histology slides, which represent the gold standard for cancer diagnosis and are ubiquitously available for cancer patients. Our method outperformed the existing method on the uterine corpus endometrial carcinoma cohort in The Cancer Genome Atlas (AUC 0.73 vs. 0.56).
|
|
16:00-17:30, Paper MoPbPo-09.7 | Add to My Program |
Region of Interest Identification for Cervical Cancer Images |
|
Gupta, Manish | Microsoft |
Das, Chetna | Microsoft |
Roy, Arnab | SRL Diagnostics |
Gupta, Prashant | Microsoft |
Pillai, G. Radhakrishna | SRL Diagnostics |
Patole, Kamlakar | SRL Diagnostics |
Keywords: Machine learning, Histopathology imaging (e.g. whole slide imaging), Classification
Abstract: Every two minutes one woman dies of cervical cancer globally, due to lack of sufficient screening. Given a whole slide image (WSI) obtained by scanning a microscope glass slide for a Liquid Based Cytology (LBC) based Pap test, our goal is to assist the pathologist in determining the presence of pre-cancerous or cancerous cervical anomalies. Inter-annotator variation, large image sizes, data imbalance, stain variations, and lack of good annotation tools make this problem challenging. Existing related work has focused on sub-problems like cell segmentation and cervical cell classification but does not provide a practically feasible holistic solution. We propose a practical system architecture based on displaying regions of interest on WSIs containing potential anomalies for review by pathologists, to increase productivity. We build multiple deep learning classifiers as part of the proposed architecture. Our experiments with a dataset of ~19000 regions of interest yield an accuracy of ~89% on a balanced dataset in both binary and 6-class classification settings. Our deployed system provides a top-5 accuracy of ~94%.
|
|
MoPbPo-10 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Abstract Posters: CT Imaging |
|
|
|
16:00-17:30, Paper MoPbPo-10.1 | Add to My Program |
Improving Cyclic Consistency in 4D CBCT Motion Field Estimation by Self-Contained Deep Learning-Based Boosting of CBCT Reconstruction |
|
Madesta, Frederic | University Medical Center Hamburg-Eppendorf |
Sentker, Thilo | Dept. of Computational Neuroscience, University Medical Center H |
Gauer, Tobias | Dept. of Radiotherapy and Radiation Oncology, University Medical |
Werner, René | University Medical Center Hamburg-Eppendorf |
Keywords: Computed tomography (CT), Image enhancement/restoration(noise and artifact reduction), Image registration
Abstract: In modern radiotherapy, image guidance is an integral concept for reducing patient-specific uncertainties in the position and shape of the tumor and organs-at-risk during treatment. Recently, the use of time-resolved cone-beam computed tomography (4D CBCT) has been suggested to further improve the accuracy of dose delivery to moving tumors. However, 4D CBCT incorporation is hampered by CBCT image artifacts, resulting in, e.g., unreliable motion field estimation by deformable image registration (DIR). In this work, we utilize a deep learning-based boosting model to enhance 4D CBCT image quality and to improve DIR-based motion field estimation in the image data. In particular, no prior information is used within the whole boosting process. Based on phantom measurements as well as clinical patient data, we show that artifact manifestation is greatly suppressed for state-of-the-art reconstruction algorithms and that subsequent DIR yields motion fields featuring highly increased breathing cycle consistency and accuracy. The boosting method is applicable to arbitrary 4D CBCT reconstruction algorithms, promising improved cyclic consistency and accuracy of 4D CBCT DIR for a wide range of 4D CBCT reconstruction techniques and registration approaches.
|
|
16:00-17:30, Paper MoPbPo-10.2 | Add to My Program |
Center of Rotation Correction for in Vivo Computed Imaging System with Limited Projections |
|
Zhou, Huanyi | Auburn University |
Keywords: Computed tomography (CT), Image reconstruction - analytical & iterative methods, Computational Imaging
Abstract: In an in vivo imaging system, image reconstruction quality strongly depends on the correct center of rotation (COR). A geometric centering offset causes serious reconstruction artifacts, such as ring artifacts, that could affect diagnosis. Well-known COR correction techniques, including image registration, center of mass calculation, or reconstruction quality evaluation, work well under certain conditions. In this paper, we propose a new measurement based on total variation and Otsu thresholding to find the correct COR location in a real-world in vivo imaging system with limited projections.
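One plausible way to combine total variation and Otsu thresholding into a COR score is sketched below. The scoring rule and function names are our assumptions for illustration, not the paper's exact formulation; the intuition is that ring artifacts from a wrong COR inflate the total variation of the binarized reconstruction.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Plain-NumPy Otsu threshold (maximizes between-class variance)."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist).astype(float)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1e-12)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def total_variation(img):
    """Anisotropic total variation of a 2-D image."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def best_cor(reconstruct, candidate_shifts):
    """Score candidate COR offsets; keep the one whose reconstruction has the
    least total variation after Otsu binarization. `reconstruct(shift)` is a
    user-supplied function returning a 2-D reconstruction for that offset."""
    scores = []
    for s in candidate_shifts:
        rec = reconstruct(s)
        binary = (rec > otsu_threshold(rec)).astype(float)
        scores.append(total_variation(binary))
    return candidate_shifts[int(np.argmin(scores))]
```

In practice `reconstruct` would wrap a filtered back-projection with the sinogram shifted by the candidate offset; here it is left abstract.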
|
|
16:00-17:30, Paper MoPbPo-10.4 | Add to My Program |
Deep Learning Method for Intracranial Hemorrhage Detection and Subtype Differentiation |
|
Wu, Yunan | Northwestern University |
Deng, Jie | Northwestern University |
Keywords: Brain, Computer-aided detection and diagnosis (CAD), Computed tomography (CT)
Abstract: Early and accurate diagnosis of Intracranial Hemorrhage (ICH) has great clinical significance for timely treatment. In this study, we proposed a deep learning method for automatic ICH diagnosis. We exploited three windowing levels to enhance different tissue contrasts for feature extraction. Our convolutional neural network (CNN) model employed the EfficientNet-B2 architecture and was re-trained using a published annotated computed tomography (CT) image dataset of ICH. Our model achieves an overall accuracy of 0.973 and a precision of 0.965. The processing time is less than 0.5 seconds per image slice.
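The multi-window preprocessing mentioned here can be sketched as follows. The three (center, width) pairs are typical head-CT settings (brain / subdural / bone) chosen for illustration; the abstract does not state the exact windows the authors used.

```python
import numpy as np

def window(hu, center, width):
    """Map Hounsfield units to [0, 1] within one window, clipping outside."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def three_window_stack(hu):
    """Stack three windowed views of a CT slice as input channels.

    The (center, width) pairs below are common head-CT settings
    (brain / subdural / bone), used here only as illustrative defaults.
    """
    settings = [(40, 80), (80, 200), (600, 2800)]
    return np.stack([window(hu, c, w) for c, w in settings], axis=-1)
```

Stacking the windows as channels lets a CNN see several tissue contrasts at once, which is the role windowing plays in the pipeline described above.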
|
|
16:00-17:30, Paper MoPbPo-10.5 | Add to My Program |
Improving the Precision of CT-Derived Lung Function Surrogates |
|
Flakus, Mattison | University of Wisconsin-Madison |
Wallat, Eric | University of Wisconsin-Madison |
Wuschner, Antonia | University of Wisconsin Madison |
Shao, Wei | Stanford University, CA, USA |
Reinhardt, Joseph M. | The University of Iowa |
Christensen, Gary E. | The University of Iowa |
Bayouth, John | University of Wisconsin-Madison |
Keywords: Computed tomography (CT), Lung, Elasticity measures
Abstract: Purpose: To reduce variance in lung function surrogates derived from four-dimensional computed tomography (4DCT). Methods: Fifty-two lung cancer patients received consecutive 4DCTs prior to radiation therapy (pre-RT). Additionally, 5 mechanically ventilated swine received 4DCTs pre-RT. The local expansion ratio (LER) was computed as a surrogate for ventilation using multiple breathing phases (LERN) and compared to a single pair of inhale and exhale phases (LER2). Tidal volume differences between scans were <100 cc. Results: For human subjects, LER measurements were more similar (within 6%) when using LERN (66% to 72% of lung voxels, p = 0.004) but not in swine (84% to 85%, p = 0.60) compared to LER2. 15.4% of human lung voxels were out of phase, corrected by LERN. Reducing tidal volume differences to <100 cc in consecutive scans increased similarity (75% to 84%) in swine; no correlation with the magnitude of the tidal volume difference was observed. Conclusions: Accounting for out-of-phase ventilation and tidal volume differences improved the precision of LER. Additional sources of uncertainty will be presented.
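Local volume change of the kind LER measures is commonly derived from the Jacobian determinant of the displacement field produced by deformable registration between breathing phases; below is a minimal sketch under that assumption (the abstract itself does not spell out the formula).

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a 3-D displacement field.

    disp: (3, Z, Y, X) displacement (same units as `spacing`) mapping one
    breathing phase onto another. A value of 1 means no local volume change;
    >1 means local expansion; <1 means contraction. Interpreting this as an
    LER-like ventilation surrogate is a common convention, not necessarily
    the authors' exact computation.
    """
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # dU_i / dx_j
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # Jacobian of the transform x + U(x): identity plus displacement gradient
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)
```

A multi-phase variant (LERN in the abstract's notation) would aggregate such maps over registrations between several phases rather than a single inhale/exhale pair.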
|
|
16:00-17:30, Paper MoPbPo-10.6 | Add to My Program |
Using Multiple Phases of 4DCT to Predict Ventilation Change |
|
Wallat, Eric | University of Wisconsin-Madison |
Flakus, Mattison | University of Wisconsin-Madison |
Wuschner, Antonia | University of Wisconsin Madison |
Shao, Wei | Stanford University, CA, USA |
Reinhardt, Joseph M. | The University of Iowa |
Christensen, Gary E. | The University of Iowa |
Bayouth, John | University of Wisconsin-Madison |
Keywords: Modeling - Anatomical, physiological and pathological, Lung, Computed tomography (CT)
Abstract: Purpose: To compare two models that predict lung ventilation change following radiation therapy (RT). Methods: Ventilation maps were created for 42 subjects from an IRB-approved trial by calculating the local expansion ratio (LER) for each voxel from 4DCT scans. LER2 uses only the end-inhale and end-exhale phases of the breathing cycle and LERN uses multiple (N) phases. Polynomial regression models were trained using the pre-RT ventilation maps and dose distributions for each subject. Results: For voxels that received 20 Gy or more, there was a significant increase in the accuracy of the post-RT predicted ventilation maps from 68% to 75% (p=0.03) when using LERN ventilation maps compared to the model created using LER2. Conclusion: The use of LERN in a polynomial regression model was significantly more accurate than a model created using LER2 to predict post-RT ventilation maps.
|
|
16:00-17:30, Paper MoPbPo-10.7 | Add to My Program |
A Benchmark for Deep Learning Reconstruction Methods for Low-Dose Computed Tomography |
|
Schmidt, Maximilian | University of Bremen |
Leuschner, Johannes | University of Bremen |
Otero Baguer, Daniel | University of Bremen |
Maass, Peter | Center for Industrial Mathematics, University of Bremen |
Keywords: Machine learning, Image reconstruction - analytical & iterative methods, Computed tomography (CT)
Abstract: Over the last years, deep learning methods have significantly pushed the state-of-the-art results in applications like imaging, speech recognition and time series forecasting. This development is also starting to reach the field of computed tomography (CT). One of the main goals lies in the reduction of the potentially harmful radiation dose a patient is exposed to during the scan. Depending on the reduction strategy, such low-dose measurements can be more noisy or starkly under-sampled. Hence, achieving high-quality reconstructions with classical methods can be challenging. Recently, a number of deep learning approaches were introduced for this task. Up to now, most of them have only been tested on datasets with a handful of patients and different setups, which makes them hard to compare. We introduce a comprehensive low-photon-count CT dataset, called LoDoPaB-CT, with over 40000 two-dimensional scan slices from more than 800 patients. We conduct an extensive study based on this dataset. Popular deep learning approaches from various categories, like post-processing, learned iterative schemes and fully learned inversion, are included and compared against classical methods. The study covers the image quality of the reconstructions as well as the influence of the number of training samples. The latter is of interest for biomedical applications in general, since in many of them extensive datasets are currently not available. A novel variation of the Deep Image Prior (DIP) is investigated as well. The standard DIP is an iterative method that does not use any training data. The reconstruction process can take a long time compared to other methods. We propose a shared network architecture and ways to include training samples to simultaneously increase reconstruction quality and reduce the number of iterations.
Our general results show that deep learning methods combining physical modeling and learning from data are able to significantly outperform classical approaches, even for a small number of training samples. This finding supports the current research of efficiently applying deep learning methods to three- or even four-dimensional CT data. This would allow for a new generation of CT machines. We encourage other researchers from the biomedical imaging community to develop and test their CT reconstruction methods on the LoDoPaB-CT dataset.
|
|
MoPbPo-11 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Abstract Posters: Software and Databases |
|
|
|
16:00-17:30, Paper MoPbPo-11.2 | Add to My Program |
A Clinical Workflow Simulator for Intelligent Chest X-Ray Worklist Prioritization |
|
Baltruschat, Ivo Matteo | University Medical Center Hamburg-Eppendorf |
Steinmeister, Leonhard | University Medical Center Hamburg-Eppendorf |
Ittrich, Harald | University Medical Center Hamburg-Eppendorf |
Adam, Gerhard | University Medical Center Hamburg-Eppendorf |
Nickisch, Hannes | Philips Research, Hamburg, Germany |
Saalbach, Axel | Philips GmbH, Innovative Technologies |
Grass, Michael | Philips Research, Hamburg |
Knopp, Tobias | University Medical Center Hamburg-Eppendorf |
Keywords: X-ray imaging, Lung, Computer-aided detection and diagnosis (CAD)
Abstract: A growing radiologic workload and a worldwide shortage of medical experts often lead to delayed or even unreported examinations, which poses a risk to patient safety in the case of unrecognized findings in chest radiographs (CXR). The aim of this work was to evaluate whether deep learning algorithms for intelligent worklist prioritization can optimize the radiology workflow by reducing the report turnaround times (RTAT) for critical findings, compared to reporting according to the first-in-first-out principle (FIFO). Furthermore, we investigated the problem of false-negative predictions in the context of worklist prioritization.
We developed a simulation framework by analyzing the current workflow at a university hospital. The framework can be used to simulate clinical workdays. To assess the potential benefit of intelligent worklist prioritization, three different workflow simulations were run and their RTAT compared: FIFO (non-prioritized), and Prio1 and Prio2 (prioritized based on urgency, without and with MAXwait, respectively). For Prio2, the highest urgency is assigned after a maximum waiting time. Examination triage was performed by "ChestXCheck", a convolutional neural network classifying eight different pathological findings ranked in descending order of urgency: pneumothorax, pleural effusion, infiltrate, congestion, atelectasis, cardiomegaly, mass and foreign object. For statistical analysis of the RTAT changes, we used Welch's t-test.
The average RTAT for all critical findings was significantly reduced by both Prio simulations compared to the FIFO simulation (e.g. pneumothorax: 32.1 min vs. 69.7 min; p < 0.0001), while the average RTAT for normal examinations increased at the same time (90.0 min vs. 69.5 min; p < 0.0001). Both effects were slightly smaller for Prio2 than for Prio1, whereas the maximum RTAT at Prio1 was substantially higher for all classes (e.g. pneumothorax: 895 min vs. 694 min) due to individual examinations rated false negative.
Our simulations demonstrate that intelligent worklist prioritization by deep learning algorithms significantly reduces the average RTAT for critical findings in chest X-rays, while the MAXwait rule additionally limits the maximum RTAT.
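The urgency-based worklist with a MAXwait escalation rule can be illustrated with a small priority queue. This is a sketch of the general idea only, not the paper's simulator; the 120-minute threshold and the rank numbers are assumed values for the example:

```python
import heapq

MAXWAIT = 120  # minutes; after this, an exam is escalated (assumed value)

def next_exam(queue, now, use_maxwait):
    """Pop the next exam: lowest urgency rank first (1 = most urgent).
    With MAXwait, exams that have waited too long jump to rank 0."""
    if use_maxwait:
        escalated = [(0 if now - arrival >= MAXWAIT else rank, arrival, name)
                     for rank, arrival, name in queue]
        heapq.heapify(escalated)
        queue[:] = escalated
    return heapq.heappop(queue)

# (rank, arrival_minute, exam): rank 1 = pneumothorax ... 8 = foreign object,
# rank 9 = no critical finding
worklist = [(9, 0, "normal A"), (1, 30, "pneumothorax"), (9, 10, "normal B")]
heapq.heapify(worklist)
first = next_exam(worklist, now=200, use_maxwait=False)
print(first)  # the pneumothorax exam is read first despite arriving last
```

Without MAXwait (Prio1), a false-negative exam stuck at a low rank can wait indefinitely; the escalation rule (Prio2) bounds that worst-case waiting time, matching the lower maximum RTAT reported above.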
|
|
16:00-17:30, Paper MoPbPo-11.3 | Add to My Program |
Current-Based Forward Solver for Electrical Impedance Tomography |
|
Ma, Erfang | Xi'an Jiaotong-Liverpool University |
Keywords: Electrical impedance tomography, Inverse methods
Abstract: We present a new forward solver for the shunt model of 3D electrical impedance tomography (EIT). The new solver is based on a direct discretization of the conditions on the current density within the EIT region. Given a mesh over the region, the solver first finds the amount of current flowing through each face of every element in the mesh, then the distribution of current density, and finally the potential distribution. Simulation results show that the new solver gives results similar to those of the traditional finite element method.
|
|
16:00-17:30, Paper MoPbPo-11.4 | Add to My Program |
OpenHI - Histopathological Image Platform for Digital Pathology |
|
Puttapirat, Pargorn | Xi'an Jiaotong University |
Zhang, Haichuan | Xi'an Jiaotong University |
Li, Chen | Xi'an Jiaotong University |
Keywords: Histopathology imaging (e.g. whole slide imaging), Visualization, Image archiving
Abstract: In the past decade, the popularization of whole-slide images (WSIs) has enabled modern technologies such as artificial intelligence for automated tasks in pathology, such as detection, counting, and morphological analyses. Key developments in digital pathology include the introduction of DICOM supplement 145 for WSI formatting, several challenges and open-source software packages that support WSIs, and the recent US FDA approval of a whole-slide imaging system for clinical settings. Previous software solutions for manipulating WSIs support only reading, annotating, or processing on a single machine. OpenHI (Open Histopathological Image) is an open-source web-based platform written in Python that supports viewing, storing, and analyzing WSIs. We believe that the platform will be a crucial part of the transformation from conventional to digital pathology, similar to the adoption of digital systems in radiology.
The proposed platform is built on established Python libraries, a MySQL database and the Flask web micro-framework, providing a web-based platform for viewing, annotating, storing, and analyzing WSIs with multi-user support. To view multi-scale images such as WSIs, OpenSeadragon is used so that users can interact with WSIs via computers and mobile devices. While users can zoom and pan freely, a resolving-power-based virtual magnification proposed with OpenHI is also used in the platform to simulate optical magnification. Pathologists can use this function for magnification-dependent histologic grading systems. An efficient library and file format designed for multi-scale images (gitlab.com/BioAI/libMI) are used for reading and writing the files. The graphical user interface of OpenHI is shown in Figure 1.
With a server-client scheme similar to existing picture archiving and communication systems (PACS), OpenHI can be deployed on standard server configurations or cloud infrastructure. OpenHI supports simultaneous access and automated analysis of WSIs for pathologists and computer scientists. Since OpenHI is open source, fully documented and available at gitlab.com/BioAI/OpenHI, the software can be extended and additional functionality can be added.
|
|
16:00-17:30, Paper MoPbPo-11.5 | Add to My Program |
OMEGA - Open-Source MATLAB Emission Tomography Software |
|
Wettenhovi, Ville-Veikko | University of Eastern Finland |
Vauhkonen, Marko | University of Kuopio Kuopio |
Kolehmainen, Ville | University of Kuopio |
Keywords: Nuclear imaging (e.g. PET, SPECT), Computational Imaging, Image reconstruction - analytical & iterative methods
Abstract: OMEGA is an open-source image reconstruction software package for positron emission tomography (PET) data in MATLAB and GNU Octave. OMEGA has been designed to enable easy reconstruction of fully 3D PET data, whether in the traditional sinogram format or in raw list-mode format. Measurement data can come from any machine, though built-in support is available for simulated GATE data and for Siemens Inveon list-mode data. This support allows for the easy import of the input data into user-specified sinograms and for efficient and easy reconstruction. Several corrections for the measurement data are included in OMEGA: corrections for randoms, attenuation, normalization and scatter. Randoms can be extracted from delayed coincidence events in both GATE and Inveon data, and normalization coefficients can be computed from any normalization measurement data. Scatter data can be obtained from GATE simulations and later used for scatter correction. An open preclinical PET dataset measured with the Inveon PET scanner is available and supported by OMEGA.
Image reconstruction is implemented using the open parallel-computing standards OpenMP and OpenCL. Both are available in a matrix-free fashion, allowing for efficient image reconstruction with little memory usage even for high-resolution scanners. OpenCL additionally supports multi-device reconstruction, allowing the use of heterogeneous computing (CPU + GPU) or multi-GPU reconstruction. A total of 10 maximum-likelihood algorithms, including OSEM and MLEM, are available, along with seven different maximum a posteriori algorithms with 10 priors. Forward and backward projections can be computed with two different ray-tracing algorithms. Furthermore, dedicated code is available for separate forward and back projections and for custom gradient-based priors, both with OpenCL support. All source code is publicly available on GitHub (https://github.com/villekf/OMEGA).
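The MLEM algorithm named in this abstract has a compact multiplicative update. The following is an illustrative numpy sketch on a tiny dense system matrix, not OMEGA code (which is matrix-free and OpenMP/OpenCL-based):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood expectation maximization for emission data:
    x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        proj[proj == 0] = 1e-12       # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy example: 3 detector bins, 2 image pixels
A = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
x_true = np.array([2.0, 4.0])
y = A @ x_true                        # noiseless measurement
x_hat = mlem(A, y)
print(np.round(x_hat, 2))
```

The update preserves non-negativity, which is why MLEM and its ordered-subsets variant OSEM are standard choices for PET.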
|
|
16:00-17:30, Paper MoPbPo-11.6 | Add to My Program |
User-Friendly Building of Reconstruction Algorithms for Solving Inverse Problems |
|
Donati, Laurène | EPFL, Biomedical Imaging Group |
Soubies, Emmanuel | CNRS |
Pham, Thanh-an | Ecole Polytechnique Fédérale De Lausanne (EPFL) |
Unser, Michael | EPFL |
Keywords: Inverse methods, Optimization method, Modeling - Image formation
Abstract: Imaging scientists nowadays commonly rely on the deployment of sophisticated algorithms to recover an object x of interest from measurements y. These quantities are linked through a matrix H that models the imaging system. The classical approach to addressing this inverse problem and recovering an estimated solution x* consists in solving x* = argmin_x D(Hx, y) + λR(x), where D(Hx, y) is a data-fidelity metric and R(x) enforces the regularization of the solution. To this end, we recently developed GlobalBioIm, an open-source Matlab library that standardizes the resolution of a wide range of imaging problems. This toolbox gives access to cutting-edge reconstruction algorithms and can be extended to new modalities and methods by combining elementary modules. The versatility and efficiency of GlobalBioIm have been highlighted in a series of recent high-impact works. Driven by these encouraging applications, we have devoted our efforts to improving the usability of GlobalBioIm for those with limited expertise in inverse problems and optimization theory. The outcome is a new user-friendly Matlab interface that allows non-experts to intuitively build tailored reconstruction algorithms with minimal effort. Our hope is that this new tool will encourage the use of more robust variational reconstruction frameworks in a wider range of imaging applications.
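The variational formulation x* = argmin_x D(Hx, y) + λR(x) can be made concrete with the simplest choices, D = ½||Hx − y||² and Tikhonov regularization R = ½||x||², solved by plain gradient descent. This is a generic numpy sketch of the formulation, not GlobalBioIm (which is a Matlab library with far richer operator and solver modules):

```python
import numpy as np

def reconstruct(H, y, lam, step=0.1, n_iter=500):
    """Gradient descent on F(x) = 0.5*||Hx - y||^2 + 0.5*lam*||x||^2,
    i.e. a least-squares data term with Tikhonov regularization."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y) + lam * x   # gradient of F at x
        x -= step * grad
    return x

# Toy forward model H and measurements y
H = np.array([[1.0, 0.5],
              [0.2, 1.0]])
y = np.array([1.0, 2.0])
lam = 0.1
x_star = reconstruct(H, y, lam)

# Closed-form minimizer of the same quadratic problem, for comparison
x_exact = np.linalg.solve(H.T @ H + lam * np.eye(2), H.T @ y)
print(np.allclose(x_star, x_exact, atol=1e-4))
```

Swapping in a non-smooth R(x), such as total variation, is what requires the proximal-splitting machinery that toolboxes like GlobalBioIm provide.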
|
|
16:00-17:30, Paper MoPbPo-11.7 | Add to My Program |
DeepImageJ: Bridging Deep Learning to ImageJ |
|
Gomez-de-Mariscal, Estibaliz | Universidad Carlos III De Madrid |
Garcia-Lopez-de-Haro, Carlos | Universidad Carlos III De Madrid |
Donati, Laurène | EPFL, Biomedical Imaging Group |
Unser, Michael | EPFL |
Munoz-Barrutia, Arrate | Universidad Carlos III De Madrid |
Sage, Daniel | Ecole Polytechnique Federale De Lausanne (EPFL) |
Keywords: Microscopy - Light, Confocal, Fluorescence, Molecular and cellular screening, Image segmentation
Abstract: DeepImageJ is a user-friendly plugin that enables the generic use in FIJI/ImageJ of pre-trained deep learning (DL) models provided by their developers. The plugin acts as a software layer between TensorFlow and FIJI/ImageJ, runs on a standard CPU-based computer, and can be used without any DL expertise. Beyond its direct use, we expect DeepImageJ to contribute to the spread and assessment of DL models in life-sciences applications and bioimage informatics.
|
|
16:00-17:30, Paper MoPbPo-11.8 | Add to My Program |
A Deep Learning Framework to Expedite Infrared Spectroscopy for Digital Histopathology |
|
Falahkheirkhah, Kianoush | University of Illinois at Urbana Champaign |
Yeh, Kevin | University of Illinois at Urbana-Champaign |
Pfister, Luke | University of Illinois |
Bhargava, Rohit | University of Illinois at Urbana-Champaign |
Keywords: Machine learning, Image segmentation, Infrared imaging
Abstract: Histopathology, based on examining the morphology of epithelial cells, is the gold standard in clinical diagnosis and research for detecting carcinomas, but it is a time-consuming, error-prone, and non-quantitative process. An alternative approach, Fourier transform infrared (FTIR) spectroscopic imaging, offers label-free visualization of tissues by providing spatially localized chemical information coupled with computational algorithms to reveal contrast between different cell types and diseases, thereby skipping the manual and laborious process of traditional histopathology. While FTIR imaging provides reliable analytical information over a wide spectral profile, data acquisition time is a major challenge in the translation to clinical research: there is an ever-present trade-off between the amount of data recorded and the acquisition time. Since not all spectral elements are needed for classification, discrete frequency infrared (DFIR) imaging has been introduced to expedite data recording by measuring only the required spectral elements. We report a deep learning-based framework to further accelerate the whole process of data acquisition and analysis by also subsampling in the spatial domain. First, we introduce a convolutional neural network (CNN) that leverages both spatial and spectral information for segmenting infrared data, which we term the IRSEG network. We show that this framework increases accuracy while utilizing approximately half the number of unique bands commonly required by previous pixel-wise classification algorithms used in the DFIR community. Finally, we present a data reconstruction approach using a generative adversarial network (GAN) to reconstruct the whole spatial and spectral domain while using only a small fraction of the total possible data, with minimal information loss. We name this IR GAN-based data reconstruction IRGAN.
Together, this study paves the way for the translation of IR imaging to the clinic for label-free histological analysis by making the overall process approximately 20 times faster, from hours to minutes.
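The DFIR idea of measuring only the informative spectral bands can be illustrated with a simple band-ranking criterion. This sketch uses a Fisher-like score of the listing's own devising, not the IRSEG selection method:

```python
import numpy as np

def select_bands(X, labels, k):
    """Rank spectral bands by a simple Fisher-like score
    (between-class variance of band means over within-class variance)
    and keep the top k. Illustrative only."""
    classes = np.unique(labels)
    means = np.array([X[labels == c].mean(axis=0) for c in classes])
    within = np.mean([X[labels == c].var(axis=0) for c in classes], axis=0)
    score = means.var(axis=0) / (within + 1e-12)
    return np.argsort(score)[::-1][:k]

# Toy spectra: 2 classes, 100 pixels, 16 bands; only bands 3 and 7
# carry class information, the rest are pure noise
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 16))
X[labels == 1, 3] += 3.0
X[labels == 1, 7] += 3.0
print(sorted(select_bands(X, labels, 2)))
```

Measuring only the selected bands cuts acquisition time roughly in proportion to the fraction of bands dropped, which is the trade-off the abstract describes.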
|
|
|