Last updated on May 5, 2020. This conference program is tentative and subject to change.
Technical Program for Sunday April 5, 2020

SuAbO1 Oral Session, Oakdale I-II
Registration and Synthesis

Chair: Kybic, Jan | Czech Technical University in Prague
Co-Chair: Warfield, Simon K. | Harvard Medical School
11:00-11:15, Paper SuAbO1.1
Coupling Principled Refinement with Bi-Directional Deep Estimation for Robust Deformable 3D Medical Image Registration

Zhang, Yuxi | Dalian University of Technology
Liu, Risheng | Dalian University of Technology
Li, Zi | Dalian University of Technology
Liu, Zhu | Dalian University of Technology
Fan, Xin | Dalian University of Technology
Luo, Zhongxuan | Dalian University of Technology
Keywords: Image registration, Machine learning, Optimization method
Abstract: Deformable 3D medical image registration is challenging due to the complicated transformations between image pairs. Traditional approaches estimate deformation fields by optimizing a task-guided energy embedded with physical priors, achieving high accuracy but suffering expensive computational loads from the iterative optimization. Recently, deep networks, which encode the information underlying data examples, render fast predictions but are severely dependent on training data and have limited flexibility. In this study, we develop a paradigm integrating the principled prior into a bi-directional deep estimation process. Inheriting the merits of both domain knowledge and deep representation, our approach achieves a more efficient and stable estimation of deformation fields than the state-of-the-art, especially when the testing pairs exhibit great variation from the training data.

11:15-11:30, Paper SuAbO1.2
Enhanced Image Registration with a Network Paradigm and Incorporation of a Deformation Representation Model

Sang, Yudi | University of California, Los Angeles
Ruan, Dan | University of California, Los Angeles
Keywords: Machine learning, Image registration
Abstract: In conventional registration methods, regularization functionals and balancing hyper-parameters need to be designed and tuned. Even so, heterogeneous tissue properties and balancing requirements remain challenging. In this study, we propose a registration network with a novel deformation representation model to achieve spatially variant conditioning on the deformation vector field (DVF). In the form of a convolutional auto-encoder, the proposed representation model is trained with a rich set of DVFs as a feasibility descriptor. The auto-encoding discrepancy is then combined with image fidelity in training the overall registration network in an unsupervised learning paradigm. The trained network generates DVF estimates from paired images with a single forward inference run. Experiments with synthetic images and 3D cardiac MRIs demonstrate that the method accomplishes registration with physically and physiologically more feasible DVFs, sub-pixel registration errors, and millisecond execution times, and that incorporating the representation model significantly improved the registration network's performance.

11:30-11:45, Paper SuAbO1.3
Validating Uncertainty in Medical Image Translation

Reinhold, Jacob | Johns Hopkins University
He, Yufan | Johns Hopkins University
Han, Shizhong | 12 Sigma Technologies
Chen, Yunqiang | 12 Sigma Technologies
Gao, Dashan | 12 Sigma Technologies
Lee, Junghoon | Johns Hopkins University
Prince, Jerry | Johns Hopkins University
Carass, Aaron | Johns Hopkins University
Keywords: Image synthesis, Machine learning, Magnetic resonance imaging (MRI)
Abstract: Medical images are increasingly used as input to deep neural networks to produce quantitative values that aid researchers and clinicians. However, standard deep neural networks do not provide a reliable measure of uncertainty in those quantitative values. Recent work has shown that using dropout during training and testing can provide estimates of uncertainty. In this work, we investigate using dropout to estimate epistemic and aleatoric uncertainty in a CT-to-MR image translation task. We show that both types of uncertainty are captured, as defined, providing confidence in the output uncertainty estimates.
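The dropout-based uncertainty estimation this abstract builds on (often called Monte Carlo dropout) can be sketched in a few lines: keep dropout active at test time, run several stochastic forward passes, and read the spread of the predictions as epistemic uncertainty. The toy one-hidden-layer regressor, layer sizes, and pass count below are illustrative assumptions, not the authors' CT-to-MR network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer regressor with fixed random weights (an illustrative
# stand-in for a trained image-translation network).
W1 = rng.normal(size=(16, 1)); b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16)); b2 = np.zeros(1)

def forward(x, drop_p=0.5, train_mode=True):
    """One stochastic forward pass: dropout stays ON at test time."""
    h = np.maximum(W1 @ x + b1, 0.0)          # ReLU hidden layer
    if train_mode:
        mask = rng.random(h.shape) >= drop_p  # Bernoulli dropout mask
        h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return (W2 @ h + b2)[0]

def mc_dropout_predict(x, T=200):
    """Mean prediction and epistemic uncertainty from T stochastic passes."""
    samples = np.array([forward(x) for _ in range(T)])
    return samples.mean(), samples.std()

mean, epistemic_std = mc_dropout_predict(np.array([0.7]))
print(mean, epistemic_std)
```

In practice the same T-pass loop is run over the trained translation network; a separately predicted variance output would supply the aleatoric term the abstract mentions.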

11:45-12:00, Paper SuAbO1.4
Volumetric Registration-Based Cleft Volume Estimation of Alveolar Cleft Grafting Procedures

Zhang, Yungeng | Peking University
Pei, Yuru | Peking University
Chen, Si | Peking University
Guo, Yuke | Luoyang Institute of Science and Technology
Ma, Gengyu | USens Inc
Xu, Tianmin | Peking University
Zha, Hongbin | Peking University
Keywords: Image registration, Computed tomography (CT), Bone
Abstract: This paper presents a method for automatically estimating the bony alveolar cleft volume of cleft lip and palate (CLP) patients from cone-beam computed tomography (CBCT) images via a fully convolutional neural network. The core of the method is the partial nonrigid registration between the CLP CBCT image, with its incomplete maxilla, and a template with a complete maxilla. We build our model on the 3D U-Net and parameterize the nonlinear mapping from the one-channel intensity CBCT image to six-channel inverse deformation vector fields (DVFs). We enforce the partial maxillary registration using an adaptive irregular mask around the cleft in the registration process. Given the inverse DVFs, the deformed template combined with volumetric Boolean operators is used to compute the cleft volume. To avoid a rough and inaccurate reconstructed cleft surface, we introduce an additional cleft shape constraint to fine-tune the parameters of the registration networks. The proposed method is applied to clinically obtained CBCT images of CLP patients. Qualitative and quantitative experiments demonstrate the effectiveness and efficiency of our method in volume completion and bony cleft volume estimation compared with the state-of-the-art.

12:15-12:30, Paper SuAbO1.6
Joint Registration and Change Detection in Longitudinal Brain MRI

Dufresne, Eléonore | ICube UMR 7357, Université de Strasbourg, CNRS, Strasbourg, France
Fortun, Denis | CNRS, Université de Strasbourg
Kumar, Babloo | Indian Institute of Technology (BHU) Varanasi
Kremer, Stephane | University of Strasbourg
Noblet, Vincent | ICube, University of Strasbourg, CNRS
Keywords: Magnetic resonance imaging (MRI), Image registration, Image segmentation
Abstract: Automatic change detection in longitudinal brain MRI classically consists of a sequential pipeline in which registration is estimated as a pre-processing step before detecting pathological changes such as lesion appearance. Deformable registration can advantageously be considered over rigid or affine transforms in the presence of geometrical distortions or brain atrophy to reduce false positive detections. However, this may come at the cost of underestimating the changes of interest, due to over-compensation of the differences between baseline and follow-up studies. In this article, we propose to overcome this limitation with a framework in which deformable registration and lesion changes are estimated jointly. We compare this joint framework with its sequential counterpart based on either affine or deformable registration, and demonstrate the benefits for detecting multiple sclerosis lesion evolution on both synthetic and real data.

12:30-12:45, Paper SuAbO1.7
Age-Conditioned Synthesis of Pediatric Computed Tomography with Auxiliary Classifier Generative Adversarial Networks

Kan, Chi Nok Enoch | Marquette University
Maheen Aboobacker, Najib Akram | Marquette University
Ye, Dong Hye | Marquette University
Keywords: Computed tomography (CT), Image synthesis, Machine learning
Abstract: Deep learning is a popular and powerful tool in computed tomography (CT) image processing tasks such as organ segmentation, but its requirement for large training datasets remains a challenge. Although children exhibit large anatomical variability during growth, training datasets of pediatric CT scans are especially hard to obtain because of the radiation risk to children. In this paper, we propose a method to conditionally synthesize realistic pediatric CT images using a new auxiliary classifier generative adversarial network (ACGAN) architecture that takes age information into account. The proposed network generates age-conditioned high-resolution CT images to enrich pediatric training datasets.

SuAbO2 Oral Session, Oakdale III
Brain Segmentation and Characterization

Chair: Chung, Moo K. | University of Wisconsin-Madison
11:00-11:15, Paper SuAbO2.1
Building an Ex Vivo Atlas of the Earliest Brain Regions Affected by Alzheimer's Disease Pathology

Ravikumar, Sadhana | Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania
Wisse, Laura | Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania
Ittyerah, Ranjit | Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania
Lim, Sydney | University of Pennsylvania
Lavery, Madigan | University of Pennsylvania
Xie, Long | Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania
Robinson, John | Center for Neurodegenerative Disease Research (CNDR), University of Pennsylvania
Schuck, Theresa | Center for Neurodegenerative Disease Research (CNDR), University of Pennsylvania
Grossman, Murray | Department of Neurology, University of Pennsylvania
Lee, Edward B | University of Pennsylvania
Tisdall, M. Dylan | University of Pennsylvania
Prabhakaran, Karthik | University of Pennsylvania
Detre, John A. | University of Pennsylvania
Das, Sandhitsu | Department of Neurology, University of Pennsylvania
Mizsei, Gabor | University of Pennsylvania
Artacho Pérula, Emilio | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Iñiguez de Onzoño Martin, María Mercedes | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Arroyo Jiménez, María del Mar | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Muñoz López, Mónica | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Molina Romero, Francisco Javier | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Marcos Rabal, María Pilar | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Irwin, David J | University of Pennsylvania
Trojanowski, John | Center for Neurodegenerative Disease Research (CNDR), University of Pennsylvania
Wolk, David | Department of Neurology, University of Pennsylvania
Insausti, Ricardo | Human Neuroanatomy Laboratory, University of Castilla-La Mancha
Yushkevich, Paul | University of Pennsylvania
Keywords: Image registration, Atlases, Brain
Abstract: The earliest neuropathological changes in Alzheimer’s Disease (AD) emerge in the medial temporal lobe (MTL). For MRI biomarkers to detect changes linked specifically to AD pathology (as opposed to aging or other pathological factors), macroscopic patterns of structural change in the MTL must be linked to the underlying neuropathology. To provide such a linkage, we are conducting an autopsy imaging study combining ex vivo MRI and serial histopathology. Information from multiple subjects can be studied by creating a “population average” atlas of the MTL. We present a groupwise registration approach for constructing the atlas that successfully captures the complex structure of the MTL and the anatomical variability across subjects. This atlas allows us to generate maps of cortical thickness measurements and to identify regions in the MTL where structural changes correlate most strongly with AD progression. Using this atlas, we find a significant correlation between atrophy and AD pathology in the MTL sub-regions associated with the earliest stages of AD pathology as described by Braak and Braak.

11:15-11:30, Paper SuAbO2.2
Simultaneous Classification and Segmentation of Intracranial Hemorrhage Using a Fully Convolutional Neural Network

Guo, Danfeng | CuraCloud Corporation
Wei, Haihua | Shenzhen Second People's Hospital
Zhao, Pengfei | CuraCloud Corporation
Pan, Yue | CuraCloud Corporation
Yang, Hao-Yu | CuraCloud Corporation
Wang, Xin | CuraCloud Corporation
Bai, Junjie | CuraCloud Corporation
Cao, Kulin | CuraCloud Corporation
Song, Qi | CuraCloud Corporation, USA
Xia, Jun | Shenzhen Second People's Hospital
Gao, Feng | CuraCloud Corporation
Yin, Youbing | CuraCloud Corporation
Keywords: Computer-aided detection and diagnosis (CAD), Brain, Computed tomography (CT)
Abstract: Intracranial hemorrhage (ICH) is a critical disease that requires immediate diagnosis and treatment. Accurate detection, subtype classification and volume quantification of ICH are critical aspects in ICH diagnosis. Previous studies have applied deep learning techniques for ICH analysis but usually tackle the aforementioned tasks in a separate manner without taking advantage of information sharing between tasks. In this paper, we propose a multi-task fully convolutional network, ICHNet, for simultaneous detection, classification and segmentation of ICH. The proposed framework utilizes the inter-slice contextual information and has the flexibility in handling various label settings and task combinations. We evaluate the performance of our proposed architecture using a total of 1176 head CT scans and show that it improves the performance of both classification and segmentation tasks compared with single-task and baseline models.

11:30-11:45, Paper SuAbO2.3
Deep Mouse: An End-To-End Auto-Context Refinement Framework for Brain Ventricle & Body Segmentation in Embryonic Mice Ultrasound Volumes

Xu, Tongda | New York University
Qiu, Ziming | New York University
Das, William | Hunter College High School
Wang, Chuiyu | Beihang University
Langerman, Jack | NYU / Independent / Bell Labs
Nair, Nitin | New York University
Aristizabal, Orlando | Riverside Research Institute
Mamou, Jonathan | Riverside Research
Turnbull, Daniel H. | New York University School of Medicine
Ketterling, Jeffrey A. | Riverside Research Institute
Wang, Yao | Polytechnic Institute of New York University
Keywords: Image segmentation, Ultrasound, Small animals
Abstract: The segmentation of the brain ventricle (BV) and body in embryonic mice high-frequency ultrasound (HFU) volumes can provide useful information for biological researchers. However, manual segmentation of the BV and body requires substantial time and expertise. This work proposes a novel deep learning based end-to-end auto-context refinement framework consisting of two stages. The first stage produces a low-resolution segmentation of the BV and body simultaneously. The resulting probability map for each object (BV or body) is then used to crop a region of interest (ROI) around the target object in both the original image and the probability map, providing context to the refinement segmentation network. Joint training of the two stages yields significant improvement in Dice Similarity Coefficient (DSC) over using only the first stage (0.818 to 0.906 for the BV, and 0.919 to 0.934 for the body). The proposed method significantly reduces inference time (102.36 to 0.09 s/volume, roughly 1000× faster) while slightly improving segmentation accuracy over previous sliding-window approaches.

11:45-12:00, Paper SuAbO2.4
CNN Detection of New and Enlarging Multiple Sclerosis Lesions from Longitudinal MRI Using Subtraction Images

Mohammadi Sepahvand, Nazanin | McGill University
Arnold, Douglas L. | NeuroRx Research, Montreal, Quebec, Canada
Arbel, Tal | Centre for Intelligent Machines, McGill University
Keywords: Machine learning, Image segmentation, Magnetic resonance imaging (MRI)
Abstract: Accurate detection and segmentation of new lesional activity in longitudinal Magnetic Resonance Images (MRIs) of patients with Multiple Sclerosis (MS) is important for monitoring disease activity and for assessing treatment effects. In this work, we present the first deep learning framework to automatically detect and segment new and enlarging (NE) T2w lesions from longitudinal brain MRIs acquired from relapsing-remitting MS (RRMS) patients. The proposed framework is an adapted 3D U-Net [1] whose inputs include the reference multi-modal MRI and T2-weighted lesion maps, as well as an attention mechanism based on the subtraction MRI (between the two timepoints) that helps the network learn to differentiate between real anatomical change and artifactual change while constraining the search space for small lesions. Experiments on a large, proprietary, multi-center, multi-modal clinical trial dataset consisting of 1677 multi-modal scans show that the network achieves high overall detection accuracy (detection AUC = .95), outperforming (1) a U-Net without an attention mechanism (detection AUC = .93), (2) a framework based on subtracting independent T2-weighted segmentations (detection AUC = .57), and (3) DeepMedic (detection AUC = .84), particularly for small lesions. In addition, the method accurately classifies patients as active or inactive (sensitivity of .69 and specificity of .97).

12:00-12:15, Paper SuAbO2.5
SynergyNet: A Fusion Framework for Multiple Sclerosis Brain MRI Segmentation with Local Refinement

Vang, Yeeleng Scott | University of California, Irvine
Cao, Yingxin | University of California, Irvine
Chang, Peter | University of California, Irvine
Chow, Daniel | University of California, Irvine
Paul, Friedemann | Charité – Universitätsmedizin Berlin, Germany
Scheel, Michael | Charité – Universitätsmedizin Berlin, Germany
Brandt, Alexander | Charité – Universitätsmedizin Berlin, Germany
Xie, Xiaohui | University of California, Irvine
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Machine learning
Abstract: The high irregularity of multiple sclerosis (MS) lesions in size and number often proves difficult for automated MS lesion segmentation systems. Current state-of-the-art MS segmentation algorithms take either a purely global perspective or a purely patch-based local perspective. Although global image segmentation can handle medium to large lesions well, its performance on smaller lesions lags behind. Patch-based local segmentation, on the other hand, disregards the spatial information of the brain. In this work, we propose SynergyNet, a network that segments MS lesions by fusing data from both global and local perspectives to improve segmentation across different lesion sizes. We achieve global segmentation by leveraging the U-Net architecture and implement local segmentation by augmenting U-Net with the Mask R-CNN framework. Sharing the lower layers between these two branches benefits end-to-end training and proves advantageous over a simple ensemble of the two frameworks. We evaluated our method on two separate datasets containing 765 and 21 volumes, respectively. On the first dataset, the proposed method improves the Dice score by 2.55% and the lesion true positive rate by 5.0% while reducing the false positive rate by over 20%; on the second dataset, it improves the Dice score and lesion true positive rate by 10% and 32% on average. These results suggest that fusing local and global perspectives is beneficial for segmenting lesions of heterogeneous sizes.

12:15-12:30, Paper SuAbO2.6
Braided Networks for Scan-Aware MRI Brain Tissue Segmentation

Mostapha, Mahmoud | University of North Carolina at Chapel Hill
Mailhe, Boris | Siemens Healthineers
Chen, Xiao | Siemens Healthineers, Digital Technology and Innovation
Ceccaldi, Pascal | Siemens Healthineers, Digital Technology and Innovation
Yoo, Youngjin | Siemens Healthineers, Digital Technology and Innovation
Nadar, Mariappan | Siemens Corporation, Corporate Technology
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Brain
Abstract: Recent advances in supervised deep learning, mainly using convolutional neural networks, enabled the fast acquisition of high-quality brain tissue segmentation from structural magnetic resonance brain images (MRI). However, the robustness of such deep learning models is limited by the existing training datasets acquired with a homogeneous MRI acquisition protocol. Moreover, current models fail to utilize commonly available relevant non-imaging information (i.e., meta-data). In this paper, the notion of a braided block is introduced as a generalization of convolutional or fully connected layers for learning from paired data (meta-data, images). For robust MRI tissue segmentation, a braided 3D U-Net architecture is implemented as a combination of such braided blocks with scanner information, MRI sequence parameters, geometrical information, and task-specific prior information used as meta-data. When applied to a large (> 16,000 scans) and highly heterogeneous (wide range of MRI protocols) dataset, our method generates highly accurate segmentation results (Dice scores > 0.9) within seconds.

12:30-12:45, Paper SuAbO2.7
7T-Guided 3T Brain Tissue Segmentation Using Cascaded Nested Network

Wei, Jie | Northwestern Polytechnical University
Bui, Duc Toan | University of North Carolina at Chapel Hill
Wu, Zhengwang | University of North Carolina at Chapel Hill
Wang, Li | University of North Carolina at Chapel Hill
Xia, Yong | Northwestern Polytechnical University
Li, Gang | University of North Carolina at Chapel Hill
Shen, Dinggang | University of North Carolina at Chapel Hill
Keywords: Image segmentation, Brain, Magnetic resonance imaging (MRI)
Abstract: Accurate segmentation of the brain into major tissue types, e.g., gray matter, white matter, and cerebrospinal fluid, in magnetic resonance (MR) imaging is critical for quantifying brain anatomy and function. 7T MR scanners can provide more accurate and reliable voxel-wise tissue labels, which can be leveraged to supervise the training of tissue segmentation for conventional 3T brain images. Specifically, a deep learning based method can be used to build the highly non-linear mapping from the 3T intensity image to the more reliable label maps obtained from the 7T images of the same subject. However, misalignment between the 3T and 7T MR images due to image distortions poses a major obstacle to achieving better segmentation accuracy. To address this issue, we measure the quality of the 3T-7T alignment using a correlation coefficient map. We then propose a cascaded nested network (CaNes-Net) for 3T MR image segmentation and a multi-stage solution for training this model with ground-truth tissue labels from 7T images. This paper makes two main contributions. First, incorporating the correlation loss addresses the above-mentioned misalignment obstacle. Second, geodesic distance maps constructed from the intermediate segmentation results guide the training of CaNes-Net as an iterative coarse-to-fine process. We evaluated the proposed CaNes-Net against state-of-the-art methods on 18 in-house acquired subjects, and qualitatively assessed the proposed model and U-Net on the ADNI dataset. Our results indicate that CaNes-Net dramatically reduces mis-segmentation caused by the misalignment and achieves substantially improved accuracy over all the other methods.

SuAbO3 Oral Session, Oakdale IV-V
Optical Microscopy and Analysis

Chair: Wählby, Carolina | Centre for Image Analysis and Science for Life Laboratory, Uppsala University, Sweden
Co-Chair: Lockett, Stephen | Frederick National Laboratory for Cancer Research

11:15-11:30, Paper SuAbO3.2
ASCNet: Adaptive-Scale Convolutional Neural Networks for Multi-Scale Feature Learning

Zhang, Mo | Peking University
Zhao, Jie | Peking University
Li, Xiang | Harvard Medical School, Massachusetts General Hospital
Li, Quanzheng | Harvard Medical School, Massachusetts General Hospital
Zhang, Li | Peking University
Keywords: Image segmentation, Integration of multiscale information, Cells & molecules
Abstract: Extracting multi-scale information is key to semantic segmentation. However, classic convolutional neural networks (CNNs) have difficulty extracting multi-scale information: expanding the convolutional kernel incurs high computational cost, and max pooling sacrifices image information. The recently developed dilated convolution solves these problems, but its dilation rates are fixed, so the receptive field cannot fit all objects of different sizes in the image. We propose an adaptive-scale convolutional neural network (ASCNet), which introduces a 3-layer convolution structure into the end-to-end training to adaptively learn an appropriate dilation rate for each pixel in the image. Such pixel-level dilation rates produce optimal receptive fields, so that information about objects of different sizes can be extracted at the corresponding scale. We compare segmentation results using the classic CNN, the dilated CNN, and the proposed ASCNet on two types of medical images (the Herlev dataset and the SCD RBC dataset). The experimental results show that ASCNet achieves the highest accuracy. Moreover, the automatically generated dilation rates are positively correlated with the sizes of the objects, confirming the effectiveness of the proposed method.
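As background for the fixed-rate limitation the abstract describes: a dilated convolution spaces its kernel taps `rate` pixels apart, so a kernel of size k covers a receptive field of rate*(k-1)+1 pixels without adding weights. The minimal 1-D sketch below is illustrative background only, not the ASCNet implementation.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate=1):
    """1-D dilated convolution (valid padding): taps are `rate` pixels apart."""
    k = len(kernel)
    span = rate * (k - 1) + 1              # receptive field of one output pixel
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, rate=1))  # ordinary 3-tap convolution
print(dilated_conv1d(x, k, rate=2))  # same 3 weights, 5-pixel receptive field
```

What ASCNet learns, per the abstract, amounts to choosing `rate` per pixel instead of fixing it for the whole feature map.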

11:30-11:45, Paper SuAbO3.3
Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion

Guo, Yuanhao | Chinese Academy of Sciences
Verbeek, Fons J. | Leiden University
Yang, Ge | Institute of Automation, Chinese Academy of Sciences
Keywords: Image reconstruction - analytical & iterative methods, Microscopy - Light, Confocal, Fluorescence, Animal models and imaging
Abstract: Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion. Conventional methods require either accurate key point matching or precise segmentation of the axial-view images. Both remain challenging because specimens often exhibit transparency/translucency in a light microscope. To address those issues, we propose a probabilistic inference based method for the camera calibration that does not require sophisticated image pre-processing. Based on 3D projective geometry, our method assigns a probability on each of a range of voxels that cover the whole object. The probability indicates the likelihood of a voxel belonging to the object to be reconstructed. Our method maximizes a joint probability that distinguishes the object from the background. Experimental results show that the proposed method can accurately recover camera configurations in both light microscopy and natural scene imaging. Furthermore, the method can be used to produce high-fidelity 3D reconstructions and accurate 3D measurements.

11:45-12:00, Paper SuAbO3.4
Fully Unsupervised Probabilistic Noise2Void

Prakash, Mangal | MPI-CBG
Lalit, Manan | MPI-CBG
Tomancak, Pavel | MPI-CBG
Krull, Alexander | MPI-CBG
Jug, Florian | MPI-CBG
Keywords: Machine learning, Microscopy - Light, Confocal, Fluorescence, Image enhancement/restoration(noise and artifact reduction)
Abstract: Image denoising is the first step in many biomedical image analysis pipelines, and Deep Learning (DL) based methods are currently the best performing. A new category of DL methods, such as Noise2Void or Noise2Self, can be used fully unsupervised, requiring nothing but the noisy data. However, this comes at the price of reduced reconstruction quality. The recently proposed Probabilistic Noise2Void (PN2V) improves results, but requires an additional noise model for which calibration data must be acquired. Here, we present improvements to PN2V that (i) replace histogram-based noise models with parametric noise models, and (ii) show how suitable noise models can be created even in the absence of calibration data. This is a major step, since it renders PN2V fully unsupervised. We demonstrate that all proposed improvements are not merely academic but practically relevant.

12:00-12:15, Paper SuAbO3.5
Removing Structured Noise with Self-Supervised Blind-Spot Networks

Broaddus, Coleman | Max Planck Institute for Molecular Cell Biology and Genetics
Krull, Alexander | MPI-CBG
Weigert, Martin | MPI-CBG
Schmidt, Uwe | MPI-CBG
Myers, Eugene | Max Planck Institute of Molecular Cell Biology and Genetics
Keywords: Image enhancement/restoration(noise and artifact reduction), Machine learning, Microscopy - Light, Confocal, Fluorescence
Abstract: Removal of noise from fluorescence microscopy images is an important first step in many biological analysis pipelines. Current state-of-the-art supervised methods employ convolutional neural networks that are trained with clean (ground-truth) images. Recently, it was shown that self-supervised image denoising with blind spot networks achieves excellent performance even when ground-truth images are not available, as is common in fluorescence microscopy. However, these approaches, e.g. Noise2Void (N2V), generally assume pixel-wise independent noise, thus limiting their applicability in situations where spatially correlated (structured) noise is present. To overcome this limitation, we present Structured Noise2Void (StructN2V), a generalization of blind spot networks that enables removal of structured noise without requiring an explicit noise model or ground truth data. Specifically, we propose to use an extended blind mask (rather than a single pixel/blind spot), whose shape is adapted to the structure of the noise. We evaluate our approach on two real datasets and show that StructN2V considerably improves the removal of structured noise compared to existing standard and blind-spot based techniques.
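The extended blind mask StructN2V describes can be illustrated with a small sketch: instead of masking a single pixel, every pixel in a neighborhood shaped like the noise correlation (here an assumed horizontal 1×5 strip) is replaced with values drawn from elsewhere in the patch, so the network cannot copy correlated noise into its prediction. This sketches only the masking step under those assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_struct_blind_mask(img, centers, mask_shape=(1, 5)):
    """Replace an extended blind region around each masked pixel with
    values sampled randomly from the rest of the image."""
    out = img.copy()
    mh, mw = mask_shape[0] // 2, mask_shape[1] // 2
    h, w = img.shape
    for (r, c) in centers:
        rows = slice(max(r - mh, 0), min(r + mh + 1, h))
        cols = slice(max(c - mw, 0), min(c + mw + 1, w))
        block = out[rows, cols]
        # fill the blind region with randomly sampled image values
        out[rows, cols] = rng.choice(img.ravel(), size=block.shape)
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
masked = apply_struct_blind_mask(img, centers=[(4, 4)])
print((masked != img).sum())  # number of replaced pixels (at most 5)
```

During training, the loss is evaluated only at the masked centers, exactly as in single-pixel blind-spot schemes; only the shape of the blinded region changes.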

12:15-12:30, Paper SuAbO3.6
DeepFocus: A Few-Shot Microscope Slide Auto-Focus Using a Sample Invariant CNN-Based Sharpness Function

Shajkofci, Adrian | Idiap Research Institute
Liebling, Michael | Idiap Research Institute
Keywords: Microscopy - Light, Confocal, Fluorescence, Motion compensation and analysis, Machine learning
Abstract: Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses, where the imaged objects tend to drift out of focus. AF algorithms determine an optimal distance by which to move the sample back into the focal plane. Current hardware-based methods require modifying the microscope and image-based algorithms either rely on many images to converge to the sharpest position or need training data and models specific to each instrument and imaging configuration. Here we propose DeepFocus, an AF method we implemented as a Micro-Manager plugin, and characterize its Convolutional Neural Network (CNN)-based sharpness function, which we observed to be depth co-variant and sample-invariant. Sample invariance allows our AF algorithm to converge to an optimal axial position within as few as three iterations using a model trained once for use with a wide range of optical microscopes and a single instrument-dependent calibration stack acquisition of a flat (but arbitrary) textured object. From experiments carried out both on synthetic and experimental data, we observed an average precision, given 3 measured images, of 0.30 ± 0.16 μm with a 10×, NA 0.3 objective. We foresee that this performance and low image number will help limit photodamage during acquisitions with light-sensitive samples.
|
|
12:30-12:45, Paper SuAbO3.7 |
Fine-Grained Multi-Instance Classification in Microscopy through Deep Attention |
|
Fan, Mengran | University of Oxford |
Chakraborti, Tapabrata | University of Oxford |
Chang, Eric I-Chao | Microsoft Research |
Xu, Yan | Beihang University, School of Biology and Medicine; Microsoft Re |
Rittscher, Jens | University of Oxford |
Keywords: Classification, Microscopy - Light, Confocal, Fluorescence, Machine learning
Abstract: Fine-grained object recognition and classification in biomedical images poses a number of challenges. Images typically contain multiple instances (e.g. glands) and the recognition of salient structures is confounded by visually complex backgrounds. Due to the cost of data acquisition or the limited availability of specimens, data sets tend to be small. We propose a simple yet effective attention-based deep architecture to address these issues, especially to achieve improved background suppression and recognition of multiple instances per image. Attention maps per instance are learnt in an end-to-end fashion. Microscopic images of fungi (new data) and a publicly available Breast Cancer Histology benchmark data set are used to demonstrate the performance of the proposed approach. Our algorithm comparison suggests that the proposed approach advances the state-of-the-art.
|
|
SuPaO1 Oral Session, Oakdale I-II |
CT Reconstruction |
|
|
Co-Chair: Ducros, Nicolas | Univ. Lyon, CREATIS |
|
14:15-14:30, Paper SuPaO1.1 |
Two-Layer Residual Sparsifying Transform Learning for Image Reconstruction |
|
Zheng, Xuehang | Shanghai Jiaotong University |
Ravishankar, Saiprasad | Michigan State University |
Long, Yong | Shanghai Jiao Tong University |
Klasky, Marc | Los Alamos National Laboratory |
Wohlberg, Brendt | Los Alamos National Laboratory |
Keywords: Computed tomography (CT), Image reconstruction - analytical & iterative methods, Machine learning
Abstract: Signal models based on sparsity, low-rank and other properties have been exploited for image reconstruction from limited and corrupted data in medical imaging and other computational imaging applications. In particular, sparsifying transform models have shown promise in various applications, and offer numerous advantages such as efficiencies in sparse coding and learning. This work investigates pre-learning a two-layer extension of the transform model for image reconstruction, wherein the transform domain or filtering residuals of the image are further sparsified in the second layer. The proposed block coordinate descent optimization algorithms involve highly efficient updates. Preliminary numerical experiments demonstrate the usefulness of a two-layer model over the previous related schemes for CT image reconstruction from low-dose measurements.
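The layering described above can be sketched with fixed orthonormal transforms and hard thresholding. This is illustrative only: the paper pre-learns the transforms and uses block coordinate descent, whereas here the transforms, thresholds, and function names are arbitrary placeholders.

```python
import numpy as np

def hard_threshold(v, t):
    """Keep entries with |v| >= t, zero the rest (the sparse coding step)."""
    return np.where(np.abs(v) >= t, v, 0.0)

def two_layer_residual_code(x, W1, W2, t1, t2):
    """Two-layer residual sparsifying-transform coding (illustrative).

    Layer 1 sparsifies W1 @ x; the transform-domain residual that the
    first sparse code fails to capture is sparsified again in layer 2.
    """
    z1 = hard_threshold(W1 @ x, t1)   # first-layer sparse code
    r = W1 @ x - z1                   # transform-domain residual
    z2 = hard_threshold(W2 @ r, t2)   # residual sparsified in layer 2
    return z1, z2
```

In the reconstruction setting, such a coding step would alternate with an image update inside the block coordinate descent loop.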
|
|
14:30-14:45, Paper SuPaO1.2 |
Autoregression and Structured Low-Rank Modeling of Sinograms |
|
Lobos, Rodrigo Alejandro | University of Southern California |
Leahy, Richard | USC |
Haldar, Justin | University of Southern California |
Keywords: Computational Imaging, Computed tomography (CT), Compressive sensing & sampling
Abstract: The Radon transform converts an image into a sinogram, and is often used as a model of data acquisition for many tomographic imaging modalities. Although it is well-known that sinograms possess some redundancy, we observe in this work that they can have substantial additional redundancies that can be learned directly from incomplete data. In particular, we demonstrate that sinograms approximately satisfy multiple data-dependent shift-invariant local autoregression relationships. This autoregressive structure implies that samples from the sinogram can be accurately interpolated as a shift-invariant linear combination of neighboring sinogram samples, and that a Toeplitz or Hankel matrix formed from sinogram data should be approximately low-rank. This multi-fold redundancy can be used to impute missing sinogram values or for noise reduction, as we demonstrate with real X-ray CT data.
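The claimed autoregressive/low-rank structure can be illustrated on a toy 1D signal rather than real sinogram data: a signal satisfying a shift-invariant autoregression yields a rank-deficient Hankel matrix, which is exactly the property exploited for imputing missing samples and denoising. The helper name and signal are ours.

```python
import numpy as np

def hankel(signal, window):
    """Stack sliding windows of a 1D signal as the rows of a Hankel matrix."""
    n = len(signal) - window + 1
    return np.stack([signal[i:i + window] for i in range(n)])

# Toy signal obeying a shift-invariant autoregression (two decaying modes):
n = np.arange(40)
s = 0.9 ** n + 0.5 * 0.7 ** n
H = hankel(s, 8)                           # 33 x 8 Hankel matrix
rank = np.linalg.matrix_rank(H, tol=1e-8)  # rank 2: one per mode
```

Low rank means each window is a fixed linear combination of its neighbours, so a missing entry can be interpolated by enforcing that relationship, which is the mechanism the paper applies to sinograms.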
|
|
14:45-15:00, Paper SuPaO1.3 |
Adaptive Regularization for Three-Dimensional Optical Diffraction Tomography |
|
Pham, Thanh-an | Ecole Polytechnique Fédérale De Lausanne (EPFL) |
Soubies, Emmanuel | CNRS |
Ahmed, Ayoub | Ecole Polytechnique Fédérale De Lausanne (EPFL) |
Psaltis, Demetri | Ecole Polytechnique Fédérale De Lausanne (EPFL) |
Unser, Michael | Ecole Polytechnique Fédérale De Lausanne (EPFL) |
Keywords: Computational Imaging, Image reconstruction - analytical & iterative methods, Blind source separation & Dictionary learning
Abstract: Optical diffraction tomography (ODT) allows one to quantitatively measure the distribution of the refractive index of the sample. It relies on the resolution of an inverse scattering problem. Due to the limited range of views as well as optical aberrations and speckle noise, the quality of ODT reconstructions is usually better in lateral planes than in the axial direction. In this work, we propose an adaptive regularization to mitigate this issue. We first learn a dictionary from the lateral planes of an initial reconstruction that is obtained with a total-variation regularization. This dictionary is then used to enhance both the lateral and axial planes within a final reconstruction step. The proposed pipeline is validated on real data using an accurate nonlinear forward model. Comparisons with standard reconstructions are provided to show the benefit of the proposed framework.
|
|
15:00-15:15, Paper SuPaO1.4 |
An Alternating Projection-Image Domains Algorithm for Spectral CT |
|
Jolivet, Frederic | Université Grenoble Alpes, CEA, LETI, F-38000 Grenoble, France |
Fournier, Clarisse | Université Grenoble Alpes, CEA, LETI, F-38000 Grenoble, France |
Garcin, Michel | Université Grenoble Alpes, CEA, LETI, F-38000 Grenoble, France |
Zdeborová, Lenka | Institut De Physique Théorique, Université Paris Saclay, CNRS, C |
Brambilla, Andrea | Université Grenoble Alpes, CEA, LETI, F-38000 Grenoble, France |
Keywords: Image reconstruction - analytical & iterative methods, Computed tomography (CT), X-ray imaging
Abstract: Spectral computerized tomography (spectral CT) is a medical and biomedical imaging technique which uses the spectral information of the attenuated X-ray beam. Energy-resolved photon-counting detectors are a promising technology for improved spectral CT imaging and allow material-selective images to be obtained. Two different kinds of approaches address the spectral reconstruction problem, which consists of material decomposition and tomographic reconstruction: two-step methods, which are most often projection-based, and one-step methods. While projection-based methods are attractive for their fast computation time, they cannot easily incorporate spatial priors in the image domain, in contrast to one-step methods. We present a one-step method combining, in an alternating minimization scheme, a multi-material decomposition in the projection domain and a regularized tomographic reconstruction that introduces spatial priors in the image domain. We present and discuss promising results on experimental data.
|
|
15:15-15:30, Paper SuPaO1.5 |
Block Axial Checkerboarding: A Distributed Algorithm for Helical X-Ray CT Reconstruction |
|
Murthy, Naveen | University of Michigan |
Fessler, Jeff | Univ. Michigan |
Keywords: Computational Imaging, Computed tomography (CT)
Abstract: Model-Based Iterative Reconstruction (MBIR) methods for X-ray CT provide improved image quality compared to conventional techniques like filtered backprojection (FBP), but their computational burden is undesirably high. Distributed algorithms have the potential to significantly reduce reconstruction time, but the communication overhead of existing methods has been a considerable bottleneck. This paper proposes a distributed algorithm called Block-Axial Checkerboarding (BAC) that utilizes the special structure found in helical CT geometry to reduce inter-node communication. Preliminary results using a simulated 3D helical CT scan suggest that the proposed algorithm has the potential to reduce reconstruction time in multi-node systems, depending on the balance between compute speed and communication bandwidth.
|
|
15:30-15:45, Paper SuPaO1.6 |
Transfer-GAN: Multimodal CT Image Super-Resolution Via Transfer Generative Adversarial Networks |
|
Xiao, Yao | University of Florida |
Peters, Keith | University of Florida |
Fox, W. Christopher | University of Florida |
Rees, John | University of Florida |
Rajderkar, Dhanashree | University of Florida |
Arreola, Manuel | University of Florida |
Barreto, Izabella | University of Florida |
Bolch, Wesley | University of Florida |
Fang, Ruogu | University of Florida |
Keywords: Computed tomography (CT), Image enhancement/restoration(noise and artifact reduction), Brain
Abstract: Multimodal CT scans, including non-contrast CT, CT perfusion, and CT angiography, are widely used in acute stroke diagnosis and therapeutic planning. While each imaging modality has its advantage in brain cross-sectional feature visualization, the varying image resolution of different modalities hinders the radiologist's ability to discern consistent but subtle suspicious findings. Besides, higher image quality requires a higher radiation dose, leading to increased health risks such as cataract formation and cancer induction. In this work, we propose a deep learning-based method, Transfer-GAN, that utilizes generative adversarial networks and transfer learning to improve multimodal CT image resolution and to lower the necessary radiation exposure. Through extensive experiments, we demonstrate that transfer learning from multimodal CT provides substantial enhancement in visualization and quantitative metrics compared to training without prior knowledge.
|
|
15:45-16:00, Paper SuPaO1.7 |
Hessian Splines for Scanning Transmission X-Ray Microscopy |
|
Debarre, Thomas | Ecole Polytechnique Fédérale De Lausanne |
Watts, Benjamin | Paul Scherrer Institute |
Rösner, Benedikt | Paul Scherrer Institute |
Unser, Michael | EPFL |
Keywords: Image reconstruction - analytical & iterative methods, Other-modality, Cells & molecules
Abstract: Scanning transmission X-ray microscopy (STXM) produces images in which each pixel value is related to the measured attenuation of an X-ray beam. In practice, the location of the illuminated region does not exactly match the desired uniform pixel grid. This error can be measured using an interferometer. In this paper, we propose a spline-based reconstruction method for STXM which takes these position errors into account. We achieve this by formulating the reconstruction problem as a continuous-domain inverse problem in a spline basis, and by using Hessian nuclear-norm regularization. We solve this problem using the standard ADMM algorithm, and we demonstrate the pertinence of our approach on both simulated and real STXM data.
|
|
SuPaO2 Oral Session, Oakdale III |
Segmentation Applications and Methods I |
|
|
Chair: Uhlmann, Virginie | EMBL-EBI |
Co-Chair: Dellepiane, Silvana | Università Degli Studi Di Genova |
|
14:15-14:30, Paper SuPaO2.1 |
Weakly Supervised Lesion Co-Segmentation on CT Scans |
|
Agarwal, Vatsal | NIH Clinical Center |
Tang, Youbao | National Institutes of Health |
Xiao, Jing | Ping An Technology Co., Ltd. |
Summers, Ronald | National Institutes of Health Clinical Center |
Keywords: Computed tomography (CT), Image segmentation, Computer-aided detection and diagnosis (CAD)
Abstract: Lesion segmentation in medical imaging serves as an effective tool for assessing tumor sizes and monitoring changes in growth. However, not only is manual lesion segmentation time-consuming, but it is also expensive and requires expert radiologist knowledge. Therefore many hospitals rely on a loose substitute called response evaluation criteria in solid tumors (RECIST). Although these annotations are far from precise, they are widely used throughout hospitals and are found in their picture archiving and communication systems (PACS). Therefore, these annotations have the potential to serve as a robust yet challenging means of weak supervision for training full lesion segmentation models. In this work, we propose a weakly-supervised co-segmentation model that first generates pseudo-masks from the RECIST slices and uses these as training labels for an attention-based convolutional neural network capable of segmenting common lesions from a pair of CT scans. To validate and test the model, we utilize the DeepLesion dataset, an extensive CT-scan lesion dataset that contains 32,735 PACS bookmarked images. Extensive experimental results demonstrate the efficacy of our co-segmentation approach for lesion segmentation with a mean Dice coefficient of 90.3%.
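Since the figure of merit reported above is the mean Dice coefficient, a minimal reference implementation for binary masks may be useful; the function name and epsilon convention are ours, not the paper's.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Dice is 1.0 for identical masks, 0.0 for disjoint ones, and rewards overlap relative to the combined mask size, which is why it is preferred over plain pixel accuracy for small lesions.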
|
|
14:30-14:45, Paper SuPaO2.2 |
A Fully 3D Cascaded Framework for Pancreas Segmentation |
|
Wang, Wenzhe | Zhejiang University |
Song, Qingyu | Zhejiang University |
Feng, Ruiwei | Zhejiang University, China |
Chen, Tingting | Zhejiang University |
Chen, Jintai | Zhejiang University |
Chen, Danny Z. | University of Notre Dame |
Wu, Jian | Zhejiang University |
Keywords: Image segmentation, Computed tomography (CT), Abdomen
Abstract: Convolutional Neural Networks (CNNs) have achieved remarkable results for many medical image segmentation tasks. However, segmenting small and polymorphous organs (e.g., pancreas) in 3D CT images is still highly challenging due to the complexity of such organs and the difficulties in 3D context information learning restricted by limited GPU memory. In this paper, we present a Fully 3D Cascaded Framework for pancreas segmentation in 3D CT images. We develop a 3D detection network (PancreasNet) to regress the locations of pancreas regions, and two different scales of a 3D segmentation network (SEVoxNet) to segment pancreas in a cascaded manner based on the detection results of PancreasNet. Experiments on the public NIH pancreas segmentation dataset show that we achieve 85.93% in the mean DSC and 75.38% in the mean JI, outperforming state-of-the-art results and with the fastest inference time ever reported (~200 times faster).
|
|
14:45-15:00, Paper SuPaO2.3 |
MI-UNet: Improved Segmentation in Ureteroscopy |
|
Gupta, Soumya | University of Oxford |
Ali, Sharib | University of Oxford |
Goldsmith, Louise | The Churchill, Oxford University Hospitals NHS Trust |
Turney, Benjamin W. | University of Oxford |
Rittscher, Jens | University of Oxford |
Keywords: Endoscopy, Kidney, Image segmentation
Abstract: Ureteroscopy has evolved into a routine technique for the treatment of kidney stones. Laser lithotripsy is commonly used to fragment the kidney stones until they are small enough to be removed. Poor image quality, presence of floating debris and severe occlusions in the endoscopy video make it difficult to target stones during the ureteroscopy procedure. A potential solution is automated localization and segmentation of the stone fragments. However, the heterogeneity of stones in terms of shape, texture and colour, and the presence of moving debris, make the task of stone segmentation challenging. Further, dynamic background, motion blur, local deformations, occlusions and varying illumination conditions need to be taken into account during segmentation. To address these issues, we complement a state-of-the-art U-Net based segmentation strategy with learned motion information. This technique leverages the difference in motion between the large stones and surrounding debris, and additionally tackles problems due to illumination variability, occlusions and other factors present in the frame of interest. The proposed motion-induced U-Net (MI-UNet) architecture consists of two main components: 1) U-Net and 2) DVFNet. The quantitative results show consistent performance and improvement over most evaluation metrics. The qualitative validation also illustrates that our complementary DVF computation effectively reduces the effect of surrounding debris in contrast to U-Net.
|
|
15:00-15:15, Paper SuPaO2.4 |
HBNet: Hybrid Blocks Network for Segmentation of Gastric Tumor from Ordinary CT Images |
|
Zhang, Yongtao | ShenZhen University |
Lei, Baiying | Shenzhen University |
Fu, Chao | Department of Radiology, China-Japan Friendship Hospital, Beijin |
Du, Jie | ShenZhen University |
Zhu, Xinjian | The Shenzhen University |
Han, Xiaowei | Department of Radiology, China-Japan Friendship Hospital, Beijin |
Du, Lei | Department of Radiology, China-Japan Friendship Hospital, Beijin |
Gao, Wenwen | Department of Radiology, China-Japan Friendship Hospital, Beijin |
Wang, Tianfu | Shenzhen University |
Ma, Guolin | Department of Radiology, China-Japan Friendship Hospital, Beijin |
Keywords: Abdomen, Computed tomography (CT), Image segmentation
Abstract: Gastric cancer has been one of the leading causes of cancer death. To assist doctors in the diagnosis and treatment planning of gastric cancer, an accurate and automatic gastric tumor segmentation method is necessary for clinical practice. In this paper, we develop an improved U-Net called hybrid blocks network (HBNet) to automatically segment gastric tumors. In contrast to the standard U-Net, our proposed network has only one down-sampling operation, which further improves performance on the segmentation of small target tumors. Meanwhile, we devise a combination of a squeeze-excitation residual (SERes) block and a dense atrous global convolution (DAGC) block to replace the original convolution and pooling operations, so that both high-level and low-level feature information of the tumor is effectively extracted. We evaluate the performance of HBNet on a self-collected ordinary CT image dataset from three medical centers. Our experiments demonstrate that the proposed network achieves quite favorable segmentation performance compared with the standard U-Net and other state-of-the-art segmentation neural networks.
|
|
15:15-15:30, Paper SuPaO2.5 |
Fully-Automated Semantic Segmentation of Wireless Capsule Endoscopy Abnormalities |
|
Paul, Sukriti | Indian Institute of Science |
Devi Gundabattula, Hanitha | P.V.P Siddhartha Institute of Technology |
Seelamantula, Chandra Sekhar | Indian Institute of Science, Bangalore |
V.R., Mujeeb | Command Hospital Air Force |
Prasad, Ajay S. | Command Hospital Air Force |
Keywords: Endoscopy, Gastrointestinal tract, Image segmentation
Abstract: Wireless capsule endoscopy (WCE) is a minimally invasive procedure performed with a tiny swallowable optical endoscope that allows exploration of the human digestive tract. The medical device transmits tens of thousands of colour images, which are manually reviewed by a medical expert. This paper highlights the significance of using inputs from multiple colour spaces to train a classical U-Net model for automated semantic segmentation of eight WCE abnormalities. We also present a novel approach of grouping similar abnormalities during the training phase. Experimental results on the KID datasets demonstrate that a U-Net with 4-channel inputs outperforms the single-channel U-Net providing state-of-the-art semantic segmentation of WCE abnormalities.
|
|
15:30-15:45, Paper SuPaO2.6 |
SSN: A Stair-Shape Network for Real-Time Polyp Segmentation in Colonoscopy Images |
|
Feng, Ruiwei | Zhejiang University, China |
Lei, Biwen | Zhejiang University |
Wang, Wenzhe | Zhejiang University |
Chen, Tingting | Zhejiang University |
Chen, Jintai | Zhejiang University |
Chen, Danny Z. | University of Notre Dame |
Wu, Jian | Zhejiang University |
Keywords: Endoscopy, Abdomen, Computer-aided detection and diagnosis (CAD)
Abstract: Colorectal cancer is one of the most life-threatening malignancies, commonly arising from intestinal polyps. Currently, the clinical colonoscopy exam is an effective way for early detection of polyps and is often conducted in a real-time manner. However, colonoscopy analysis is time-consuming and suffers from a high miss rate. In this paper, we develop a novel stair-shape network (SSN) for real-time polyp segmentation in colonoscopy images (not merely simple detection). Our new model is much faster than U-Net, yet yields better performance for polyp segmentation. The model first utilizes four blocks to extract spatial features at the encoder stage. Subsequent skip connections with a Dual Attention Module for each block and a final Multi-scale Fusion Module are used to fully fuse features of different scales. Based on abundant data augmentation and strong supervision from auxiliary losses, our model can learn much more information for polyp segmentation. Our new polyp segmentation method attains high performance on several datasets (CVC-ColonDB, CVC-ClinicDB, and EndoScene), outperforming state-of-the-art methods. Our network can also be applied to other imaging tasks for real-time segmentation in clinical practice.
|
|
15:45-16:00, Paper SuPaO2.7 |
DiskMask: Focusing Object Features for Accurate Instance Segmentation of Elongated or Overlapping Objects |
|
Böhm, Anton | University of Freiburg |
Mayer, Nikolaus | University of Freiburg |
Brox, Thomas | University of Freiburg |
Keywords: Image segmentation, Machine learning, Classification
Abstract: Deep learning has enabled automated segmentation in a large variety of cases. Instance segmentation of touching and overlapping objects remains an open challenge. We present an end-to-end approach that focuses object detections and features to local regions in an encoder stage and derives accurate instance masks in a decoder. We avoid heavy pre- or post-processing, such as lifting or non-maximum suppression. The approach compares favorably to the current state-of-the-art on three challenging biological datasets.
|
|
SuPaO3 Oral Session, Oakdale IV-V |
Histopathology |
|
|
Chair: Rittscher, Jens | University of Oxford |
|
14:15-14:30, Paper SuPaO3.1 |
Multiple Instance Learning Via Deep Hierarchical Exploration for Histology Image Classification |
|
Hering, Jan | Faculty of Electrical Engineering, Czech Technical University |
Kybic, Jan | Czech Technical University in Prague |
Keywords: Pattern recognition and classification, Histopathology imaging (e.g. whole slide imaging), Machine learning
Abstract: We present a fast hierarchical method to detect the presence of cancerous tissue in histological images. The image is not examined in detail everywhere but only inside several small regions of interest, called glimpses. The final classification is done by aggregating classification scores from a CNN on leaf glimpses at the highest resolution. Unlike in existing attention-based methods, the glimpses form a tree structure: low-resolution glimpses determine the locations of several higher-resolution glimpses using weighted sampling and a CNN approximation of the expected scores. We show that it is possible to perform the classification with just a small number of glimpses, leading to a significant speedup with only a small performance deterioration. Learning is possible using image labels only, as in the multiple instance learning (MIL) setting.
|
|
14:30-14:45, Paper SuPaO3.2 |
Weakly Supervised Prostate TMA Classification Via Graph Convolutional Networks |
|
Wang, Jingwen | Brigham and Women's Hospital |
Chen, Richard | Harvard Medical School |
Lu, Ming Yang | Pathology, Brigham and Women's Hospital, Harvard Medical School |
Baras, Alex | Johns Hopkins University |
Mahmood, Faisal | Harvard Medical School |
Keywords: Histopathology imaging (e.g. whole slide imaging), Machine learning
Abstract: Histology-based grade classification is clinically important for many cancer types in stratifying patients into distinct treatment groups. In prostate cancer, the Gleason score is a grading system used to measure the aggressiveness of prostate cancer from the spatial organization of cells and the distribution of glands. However, the subjective interpretation of Gleason score often suffers from large interobserver and intraobserver variability. Previous work in deep learning-based objective Gleason grading requires manual pixel-level annotation. In this work, we propose a weakly-supervised approach for grade classification in tissue micro-arrays (TMA) using graph convolutional networks (GCNs), in which we model the spatial organization of cells as a graph to better capture the proliferation and community structure of tumor cells. We learn the morphometry of each cell using a contrastive predictive coding (CPC)-based self-supervised approach. Using five-fold cross-validation we demonstrate that our method can achieve a 0.9637±0.0131 AUC using only TMA-level labels. Our method also demonstrates a 36.36% improvement in AUC over standard GCNs with texture features and a 15.48% improvement over GCNs with VGG19 features. Our proposed pipeline can be used to objectively stratify low and high-risk cases, reducing inter- and intra-observer variability and pathologist workload.
|
|
14:45-15:00, Paper SuPaO3.3 |
Informative Retrieval Framework for Histopathology Whole Slides Images Based on Deep Hashing Network |
|
Hu, Dingyi | Beihang University |
Zheng, Yushan | Beihang University |
Zhang, Haopeng | Beihang University |
Sun, Shujiao | Beihang University |
Shi, Jun | Hefei University of Technology |
Xie, Fengying | Beihang University |
Jiang, Zhiguo | Beihang University |
Keywords: Computer-aided detection and diagnosis (CAD), Histopathology imaging (e.g. whole slide imaging), Machine learning
Abstract: Histopathology image retrieval is an emerging application for computer-aided cancer diagnosis. However, current retrieval methods, especially those based on deep hashing, pay little attention to the characteristics of histopathology whole slide images (WSIs): the retrieved results are occasionally dominated by similar images from a few WSIs, so the retrieval database is not sufficiently utilized. To solve these issues, we propose an informative retrieval framework based on a deep hashing network. Specifically, a novel loss function for the hashing network and a retrieval strategy are designed, which contribute more informative retrieval results without reducing the retrieval precision. The proposed method was verified on the ACDC-LungHP dataset and compared with the state-of-the-art method. The experimental results demonstrate the effectiveness of our method in the retrieval of large-scale databases containing histopathology whole slide images.
|
|
15:00-15:15, Paper SuPaO3.4 |
Circular Anchors for the Detection of Hematopoietic Cells Using RetinaNet |
|
Gräbel, Philipp | RWTH Aachen University |
Crysandt, Martina | Klinik Für Hämatologie, Onkologie, Hämostaseologie Und Stammzell |
Özkan, Özcan | RWTH Aachen University |
Herwartz, Reinhilde | Uniklinik RWTH Aachen |
Baumann, Melanie | Klinik Für Hämatologie, Onkologie, Hämostaseologie Und Stammzell |
Klinkhammer, Barbara Mara | RWTH Aachen University |
Boor, Peter | RWTH Aachen University, University Hospital Aachen |
Brümmendorf, Tim Hendrik | Klinik Für Hämatologie, Onkologie, Hämostaseologie Und Stammzell |
Merhof, Dorit | RWTH Aachen University |
Keywords: Machine learning, Single cell & molecule detection, Histopathology imaging (e.g. whole slide imaging)
Abstract: Analysis of the blood cell distribution in bone marrow is necessary for a detailed diagnosis of many hematopoietic diseases, such as leukemia. While this task is performed manually on microscope images in clinical routine, automating it could improve reliability and objectivity. Cell detection tasks in medical imaging have successfully been solved using deep learning, in particular with RetinaNet, a powerful network architecture that yields good detection results in this scenario. It utilizes axis-parallel, rectangular bounding boxes to describe an object's position and size. However, since cells are mostly circular, this is suboptimal. We replace RetinaNet's anchors with more suitable Circular Anchors, which cover the shape of cells more precisely. We further introduce an extension to the Non-maximum Suppression algorithm that copes with predictions that differ in size. Experiments on hematopoietic cells in bone marrow images show that these methods reduce the number of false positive predictions and increase detection accuracy.
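Swapping rectangular anchors for circular ones changes the overlap computation used in anchor matching and Non-maximum Suppression: IoU between circles uses the classical lens-area formula rather than box intersection. A minimal sketch of circle IoU (the function name and `(x, y, r)` convention are ours, not the authors' code):

```python
import math

def circle_iou(c1, c2):
    """IoU of two circles given as (x, y, r), via the lens-area formula."""
    x1, y1, r1 = c1
    x2, y2, r2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                  # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):           # one circle contained in the other
        inter = math.pi * min(r1, r2) ** 2
    else:                             # partial overlap: sum of circular segments
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        k = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                            * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - k
    union = math.pi * (r1 * r1 + r2 * r2) - inter
    return inter / union
```

An NMS variant for circular predictions would then threshold this IoU instead of the axis-aligned box IoU.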
|
|
15:15-15:30, Paper SuPaO3.5 |
Prior-Aware CNN with Multi-Task Learning for Colon Images Analysis |
|
Yan, Chaoyang | Nanjing University of Information Science & Technology |
Xu, Jun | Nanjing University of Information Science and Technology |
Xie, Jiawei | Nanjing University of Information Science & Technology |
Cai, Chengfei | Nanjing University of Information Science & Technology |
Lu, Haoda | Nanjing University of Information Science & Technology |
Keywords: Histopathology imaging (e.g. whole slide imaging), Gastrointestinal tract, Classification
Abstract: Adenocarcinoma is the most common cancer, and its pathological diagnosis is of great significance. Specifically, the degree of gland differentiation is vital for defining the grade of adenocarcinoma. Following this domain knowledge, we encode glandular regions as prior information in a convolutional neural network (CNN), guiding the network's preference for glands during inference. In this work, we propose a prior-aware CNN framework with multi-task learning for pathological colon image analysis, which contains gland segmentation and grading classification branches simultaneously. The segmentation's probability map also acts as spatial attention for grading, emphasizing the glandular tissue and removing the noise of irrelevant parts. Experiments reveal that the proposed framework achieves an accuracy of 97.04% and an AUC of 0.9971 on grading. Meanwhile, our model can predict gland regions with an mIoU of 0.8134. Importantly, it is based on the clinical-pathological diagnostic criteria of adenocarcinoma, which makes our model more interpretable.
|
|
15:30-15:45, Paper SuPaO3.6 |
Bending Loss Regularized Network for Nuclei Segmentation in Histopathology Images |
|
Wang, Haotian | University of Idaho |
Xian, Min | University of Idaho |
Vakanski, Aleksandar | University of Idaho |
Keywords: Histopathology imaging (e.g. whole slide imaging), Cells & molecules, Image segmentation
Abstract: Separating overlapped nuclei is a major challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on public datasets; however, their performance in segmenting overlapped nuclei is limited. To address the issue, we propose a bending loss regularized network for nuclei segmentation. The proposed bending loss defines high penalties for contour points with large curvature, and applies small penalties to contour points with small curvature. Minimizing the bending loss can avoid generating contours that encompass multiple nuclei. The proposed approach is validated on the MoNuSeg dataset using five quantitative metrics. It outperforms six state-of-the-art approaches on the following metrics: Aggregated Jaccard Index, Dice, Recognition Quality, and Panoptic Quality.
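A curvature penalty of the kind described above can be sketched with the discrete (Menger) curvature at each contour point: three points lying on a circle of radius R have Menger curvature exactly 1/R, so sharp kinks, the signature of a contour wrapping several nuclei, receive large penalties. The exact loss in the paper may differ; the sampling and squared weighting here are illustrative.

```python
import numpy as np

def bending_penalty(contour):
    """Sum of squared Menger curvatures along a closed 2D contour (N x 2).

    Curvature at point B between neighbours A and C is 4*area(ABC) divided
    by the product of the three side lengths |AB| |BC| |CA|.
    """
    n = len(contour)
    total = 0.0
    for i in range(n):
        a = contour[i - 1]
        b = contour[i]
        c = contour[(i + 1) % n]
        u, v = b - a, c - a
        area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
        prod = (np.linalg.norm(b - a) * np.linalg.norm(c - b)
                * np.linalg.norm(a - c))
        kappa = 4.0 * area / prod if prod > 1e-12 else 0.0
        total += kappa ** 2
    return total
```

On a regular polygon inscribed in a circle of radius R, every point contributes (1/R)², so tighter contours are penalized more, which matches the intent of discouraging contours that pinch around multiple nuclei.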
|
|
15:45-16:00, Paper SuPaO3.7 |
Segmentation and Classification of Melanoma and Nevus in Whole Slide Images |
|
van Zon, Mike | Eindhoven University of Technology |
Stathonikos, Nikolas | University Medical Center Utrecht |
Blokx, Willeke A.M. | University Medical Center Utrecht |
Komina, Selim | University "Ss. Cyril and Methodius" |
Maas, Sybren L.N. | University Medical Center Utrecht |
Pluim, Josien | Eindhoven University of Technology |
van Diest, Paul J. | University Medical Center Utrecht |
Veta, Mitko | Eindhoven University of Technology |
Keywords: Histopathology imaging (e.g. whole slide imaging), Skin, Classification
Abstract: The incidence of skin cancer, and specifically melanoma, has tripled since the 1990s in the Netherlands. Early detection of melanoma can lead to an almost 100% 5-year survival prognosis, which drops drastically when the disease is detected later. Studies show that pathologists can have a discordance of up to 14.3% when distinguishing melanoma from nevi. An automated method could help support pathologists in diagnosing melanoma and prioritize cases based on a risk assessment. Our method used 563 whole slide images to train and test a system comprising two models that segment and classify skin sections as melanoma, nevus, or negative for both. We used 232 slides for training and validation and the remaining 331 for testing. The first model uses a U-Net architecture to perform semantic segmentation, and its output feeds a convolutional neural network that classifies the WSI with a global label. Our method achieved a Dice score of 0.835 ± 0.08 on the segmentation of the validation set and a weighted F1-score of 0.954 on the independent test dataset. Out of the 176 melanoma slides, the algorithm classified 173 correctly; out of the 62 nevi slides, it correctly classified 57.
|
|
SuPbPo Poster Session, Oakdale Foyer Coral Foyer |
|
Sunday Poster PM |
|
|
|
16:00-17:30, Subsession SuPbPo-01, Oakdale Foyer Coral Foyer | |
Brain Connectivity I Poster Session, 7 papers |
|
16:00-17:30, Subsession SuPbPo-02, Oakdale Foyer Coral Foyer | |
Ultrasound Imaging and Analysis I Poster Session, 7 papers |
|
16:00-17:30, Subsession SuPbPo-03, Oakdale Foyer Coral Foyer | |
Abdomen Segmentation Poster Session, 7 papers |
|
16:00-17:30, Subsession SuPbPo-04, Oakdale Foyer Coral Foyer | |
Brain Segmentation and Characterization I Poster Session, 8 papers |
|
16:00-17:30, Subsession SuPbPo-05, Oakdale Foyer Coral Foyer | |
Machine Learning for Brain Studies I Poster Session, 9 papers |
|
16:00-17:30, Subsession SuPbPo-06, Oakdale Foyer Coral Foyer | |
Segmentation – Methods & Applications I Poster Session, 7 papers |
|
16:00-17:30, Subsession SuPbPo-07, Oakdale Foyer Coral Foyer | |
Histopathology I Poster Session, 7 papers |
|
16:00-17:30, Subsession SuPbPo-08, Oakdale Foyer Coral Foyer | |
Optical Microscopy and Analysis I Poster Session, 8 papers |
|
SuPbPo-01 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Brain Connectivity I |
|
|
Chair: Lepore, Natasha | USC / Children's Hospital Los Angeles |
Co-Chair: Ye, Chuyang | Beijing Institute of Technology |
|
16:00-17:30, Paper SuPbPo-01.1 | Add to My Program |
A Stem-Based Dissection of Inferior Fronto-Occipital Fasciculus with a Deep Learning Model |
|
Astolfi, Pietro | University of Trento |
De Benedictis, Alessandro | Neurosurgery Unit, Bambino Gesù Children’s Hospital |
Sarubbo, Silvio | S. Chiara Hospital APSS |
Bertò, Giulia | University of Trento |
Olivetti, Emanuele | Fondazione Bruno Kessler (FBK) |
Sona, Diego | Istituto Italiano Di Tecnologia (IIT) |
Avesani, Paolo | Fondazione Bruno Kessler (FBK) |
Keywords: Tractography, Diffusion weighted imaging, Brain
Abstract: The aim of this work is to improve the virtual dissection of the Inferior Fronto-Occipital Fasciculus (IFOF) by combining a recent insight on white matter anatomy from ex-vivo dissection with a data-driven deep learning model. Current methods of tract dissection are not robust to false positives and neglect the neuroanatomical waypoints of a given tract, such as the stem. In this work we design a deep learning model to segment the stem of the IFOF and show how the dissection of the tract can be improved. The proposed method is validated on the Human Connectome Project dataset, where expert neuroanatomists segmented the IFOF on multiple subjects. In addition, we compare the results to the most recent method in the literature for automatic tract dissection.
|
|
16:00-17:30, Paper SuPbPo-01.2 | Add to My Program |
Recognition of Event-Associated Brain Functional Networks in EEG for Brain Network Based Applications |
|
Gonuguntla, Venkateswarlu | Samsung Medical Center |
Veluvolu, Kalyana C. | Kyungpook National University |
Kim, Jae-Hun | Samsung Medical Center |
Keywords: EEG & MEG, Connectivity analysis, Dimensionality reduction
Abstract: Network-perspective studies of the human brain are rapidly increasing due to advances in the field of network neuroscience. In several brain network based applications, recognition of event-associated brain functional networks (BFNs) is crucial to understand event processing in the brain, and can play a significant role in characterizing and quantifying complex brain networks. This paper presents a framework to identify event-associated BFNs using the phase locking value (PLV) in EEG. Based on PLV dissimilarities between rest and event tasks, we identify the reactive band and the event-associated most reactive pairs (MRPs). The identified MRPs form the event-associated BFNs. The proposed method is applied to the Database for Emotion Analysis using Physiological Signals (DEAP) to form BFNs associated with several emotions. With the emotion-associated BFNs as features, multiple-emotion classification accuracies comparable to the state of the art are achieved. Results show that the proposed method can identify event-associated BFNs for use in brain network based applications.
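The PLV used in the abstract above has a standard definition: the magnitude of the time-averaged complex phase difference between two signals, with instantaneous phases taken from the analytic signal. A minimal sketch (the paper's band selection and MRP steps are omitted):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = | mean_t exp(i * (phi_x(t) - phi_y(t))) |.
    Instantaneous phases come from the Hilbert-transform analytic signal.
    PLV is 1 for a constant phase lag and near 0 for unrelated phases."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

# Two sinusoids with a fixed phase lag are almost perfectly locked
t = np.linspace(0, 2, 512, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.8)
plv = phase_locking_value(a, b)
print(round(plv, 3))        # close to 1.0
```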
|
|
16:00-17:30, Paper SuPbPo-01.3 | Add to My Program |
Association between Dynamic Functional Connectivity and Intelligence |
|
Ashrafi, Mahnaz | University of Tehran |
Soltanian-Zadeh, Hamid | University of Tehran |
Keywords: Connectivity analysis, Functional imaging (e.g. fMRI), Modeling - Knowledge
Abstract: Several studies have explored the relationship between intelligence and neuroimaging features. However, little is known about whether the temporal variations of functional connectivity of the brain regions at rest are relevant to differences in intelligence. In this study, we used the fMRI data and intelligence scores of 50 healthy adult subjects from the Human Connectome Project (HCP) database. We investigated the correlation between individual intelligence scores and the total power of the high-frequency components of the Fast Fourier Transform (FFT) of the dynamic functional connectivity time series of the brain regions. We found that the temporal variations of specific functional connections are highly correlated with individual intelligence scores. In other words, functional connections of individuals with high intelligence have smoother temporal variation, i.e., higher temporal stability, than those of individuals with low intelligence.
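The measure in the abstract above can be sketched as: sliding-window correlation gives a dynamic connectivity time series, and the total power in its upper FFT frequencies quantifies how unsmooth it is. Window length, step, and cutoff below are assumptions; the paper's exact parameters are not stated in the abstract.

```python
import numpy as np

def highfreq_power_of_dfc(ts1, ts2, win=30, step=1, cutoff_frac=0.5):
    """1) sliding-window Pearson correlation -> dynamic FC series,
    2) FFT of the mean-removed series,
    3) total power in the upper `cutoff_frac` of frequency bins.
    Lower values mean smoother (more stable) connectivity over time."""
    n = len(ts1)
    dfc = np.array([np.corrcoef(ts1[i:i + win], ts2[i:i + win])[0, 1]
                    for i in range(0, n - win + 1, step)])
    spec = np.abs(np.fft.rfft(dfc - dfc.mean())) ** 2
    cut = int(len(spec) * (1 - cutoff_frac))
    return float(spec[cut:].sum())

rng = np.random.default_rng(0)
smooth = np.sin(np.linspace(0, 8 * np.pi, 300))     # slowly co-varying pair
noisy = rng.standard_normal(300)
hp_stable = highfreq_power_of_dfc(smooth, smooth + 0.1 * rng.standard_normal(300))
hp_erratic = highfreq_power_of_dfc(noisy, rng.standard_normal(300))
print(hp_stable < hp_erratic)
```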
|
|
16:00-17:30, Paper SuPbPo-01.4 | Add to My Program |
Semi-Supervised Brain Lesion Segmentation Using Training Images with and without Lesions |
|
Liu, Chenghao | Beijing Institute of Technology |
Pang, Fengqian | North China University of Technology |
Liu, Yanlin | Beijing Institute of Technology |
Liang, Kongming | PKU |
Li, Xiuli | Deepwise Inc |
Zeng, Xiangzhu | Peking University Third Hospital, Beijing, China |
Ye, Chuyang | Beijing Institute of Technology |
Keywords: Image segmentation, Brain, Diffusion weighted imaging
Abstract: Semi-supervised approaches have been developed to improve brain lesion segmentation based on convolutional neural networks (CNNs) when annotated data is scarce. Existing methods have exploited unannotated images with lesions to improve the training of CNNs. In this work, we explore semi-supervised brain lesion segmentation by further incorporating images without lesions. Specifically, using information learned from annotated and unannotated scans with lesions, we propose a framework to generate synthesized lesions and their annotations simultaneously. Then, we attach them to normal-appearing scans using a statistical model to produce synthesized training samples, which are used together with true annotations to train CNNs for segmentation. Experimental results show that our method outperforms competing semi-supervised brain lesion segmentation approaches.
|
|
16:00-17:30, Paper SuPbPo-01.5 | Add to My Program |
Compensatory Brain Connection Discovery in Alzheimer’s Disease |
|
Aganj, Iman | Martinos Center, MGH, Harvard |
Frau-Pascual, Aina | Massachusetts General Hospital, Harvard Medical School |
Iglesias, Juan Eugenio | University College London |
Yendiki, Anastasia | Harvard Medical School |
Augustinack, Jean | Massachusetts General Hospital, Harvard Medical School |
Salat, David | Massachusetts General Hospital, Harvard Medical School |
Fischl, Bruce | A. A. Martinos Center for Biomedical Imaging, Dept. of Radiology |
Keywords: Connectivity analysis, Diffusion weighted imaging, Brain
Abstract: Identification of the specific brain networks that are vulnerable or resilient in neurodegenerative diseases can help to better understand the disease effects and derive new connectomic imaging biomarkers. In this work, we use brain connectivity to find pairs of structural connections that are negatively correlated with each other across Alzheimer’s disease (AD) and healthy populations. Such anti-correlated brain connections can be informative for identification of compensatory neuronal pathways and the mechanism of brain networks’ resilience to AD. We find significantly anti-correlated connections in a public diffusion-MRI database, and then validate the results on other databases.
|
|
16:00-17:30, Paper SuPbPo-01.6 | Add to My Program |
A Generalized Framework of Pathlength Associated Community Estimation for Brain Structural Network |
|
Chen, Yurong | University of Pittsburgh |
Tang, Haoteng | University of Pittsburgh |
Guo, Lei | University of Pittsburgh |
Peven, Jamie C. | University of Pittsburgh |
Huang, Heng | University of Pittsburgh |
Leow, Alex D. | University of Illinois at Chicago |
Lamar, Melissa | Rush University Medical Center |
Zhan, Liang | University of Pittsburgh |
Keywords: Diffusion weighted imaging, Connectivity analysis, Brain
Abstract: Diffusion MRI-derived brain structural networks have been widely used in brain research, and community or modular structure is one of the most popular network features, which can be extracted from network edge-derived pathlengths. Conceptually, brain structural network edges represent the connection strength between pairs of nodes and are thus non-negative. Many studies have demonstrated that each brain network edge can be affected by confounding factors (e.g., age, sex) and that this influence varies across edges. However, after applying generalized linear regression to remove those confounding effects, some network edges may become negative, which creates barriers to extracting the community structure. In this study, we propose a novel generalized framework to solve this negative-edge issue when extracting the modular structure from brain structural networks. We compared our framework with the traditional Q method. The results clearly demonstrate that our framework has significant advantages in both stability and sensitivity.
|
|
16:00-17:30, Paper SuPbPo-01.7 | Add to My Program |
Characterizing the Propagation Pattern of Neurodegeneration in Alzheimer’s Disease by Longitudinal Network Analysis |
|
Wang, Yueting | University of North Carolina at Chapel Hill |
Yang, Defu | Hangzhou Dianzi University |
Li, Quefeng | University of North Carolina at Chapel Hill |
Kaufer, Daniel | University of North Carolina at Chapel Hill |
Styner, Martin | UNC at Chapel Hill |
Wu, Guorong | University of North Carolina at Chapel Hill |
Keywords: Population analysis, Connectivity analysis, Brain
Abstract: Converging evidence shows that Alzheimer’s disease (AD) is a neurodegenerative disease that represents a disconnection syndrome, whereby a large-scale brain network is progressively disrupted by one or more neuropathological processes. However, the mechanism by which pathological entities spread across a brain network is largely unknown. Since pathological burden may propagate trans-neuronally, we propose to characterize the propagation pattern of neuropathological events spreading across relevant brain networks that are regulated by the organization of the network. Specifically, we present a novel mixed-effect model to quantify the relationship between longitudinal network alterations and neuropathological events observed at specific brain regions, whereby the topological distance to hub nodes, high-risk AD genetics, and environmental factors (such as education) are considered as predictor variables. Similar to many cross-sectional studies, we find that AD-related neuropathology preferentially affects hub nodes. Furthermore, our statistical model provides strong evidence that abnormal neuropathological burden diffuses from hub nodes to non-hub nodes in a prion-like manner, whereby the propagation pattern follows the intrinsic organization of the large-scale brain network.
|
|
SuPbPo-02 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Ultrasound Imaging and Analysis I |
|
|
Chair: Kouamé, Denis | Université De Toulouse III, IRIT UMR CNRS 5505 |
Co-Chair: Luo, Jianwen | Tsinghua University |
|
16:00-17:30, Paper SuPbPo-02.1 | Add to My Program |
An Improved Deep Learning Approach for Thyroid Nodule Diagnosis |
|
Guo, Xiangdong | Anhui University |
Zhao, Haifeng | School of Computer Science and Technology, Anhui University |
Tang, Zhenyu | Beihang University |
Keywords: Ultrasound, Thyroid, Classification
Abstract: Although thyroid ultrasonography (US) has been widely applied, it is still difficult to distinguish benign and malignant nodules. Recently, convolutional neural network (CNN) based methods have been proposed and have shown promising performance for benign/malignant nodule classification. US images are usually captured at multiple angles, so the same thyroid has inconsistent content across US images. However, most existing CNN-based methods extract features using fixed convolution kernels, which can be a major issue when processing US images. Moreover, fully-connected (FC) layers are usually adopted in CNNs, which can cause the loss of inter-pixel relations. In this paper, we propose a new CNN integrated with a squeeze-and-excitation (SE) module and a maximum retention of inter-pixel relations module (CNN-SE-MPR). It can adaptively select features from different US images and preserve inter-pixel relations. Moreover, we introduce transfer learning to avoid problems such as local optima and data insufficiency. The proposed network is tested on 407 thyroid US images collected from cooperating hospitals. Ablation studies and comparisons with state-of-the-art methods show that our method improves the accuracy of the diagnosis results.
|
|
16:00-17:30, Paper SuPbPo-02.2 | Add to My Program |
Directional Beam Focusing Based Dual Apodization Approach for Improved Vector Flow Imaging |
|
A. N., Madhavanunni | Indian Institute of Technology Palakkad, India |
Raveendranatha Panicker, Mahesh | Indian Institute of Technology Palakkad |
Keywords: Ultrasound, Vessels, Elastography imaging
Abstract: The design of apodization and windowing functions forms a predominant part of the beamforming process in ultrasound imaging. However, detailed analysis of apodization in beamforming for flow imaging is very limited in the literature. This paper introduces the concept of a dual apodization technique in vector triangulation for ultrasound-based vector flow imaging. The approach uses multiple apodizations to induce a steering effect on receive, along with sidelobe suppression, for the delay-compensated radio-frequency (RF) signals. The method is investigated using extensive simulations of transverse flows with different flow profiles at different velocities. A simulation study using a 192-element 3 MHz linear array shows improved resolution and better clutter suppression. The proposed approach is further analyzed using various conventional and data-adaptive apodization techniques for a gradient flow profile. The error variance in the velocity magnitude estimate is as low as 5.0251×10^-4 and the mean angle error is +0.7358° using Hanning-Gaussian apodization for a transverse gradient flow with a peak velocity of 0.25 m/s.
|
|
16:00-17:30, Paper SuPbPo-02.3 | Add to My Program |
Compressed Sensing for Data Reduction in Synthetic Aperture Ultrasound Imaging: A Feasibility Study |
|
R, Anand R | Indian Institute of Technology Madras, Chennai, India |
Thittai, Arun Kumar | IIT MADRAS |
Keywords: Compressive sensing & sampling, Ultrasound
Abstract: Compressed Sensing (CS) has been applied by a few researchers to improve the frame rate of synthetic aperture (SA) ultrasound imaging. However, there appear to be no reports on reducing the number of receive elements by exploiting the CS approach. In our previous work, we proposed a strategic undersampling scheme based on the Gaussian distribution for focused ultrasound imaging. In this work, we propose and evaluate three sampling schemes for SA that acquire RF data from a reduced number of receive elements. The effect of the sampling schemes on CS recovery was studied using simulation and experimental data. Despite using only 50% of the receive elements, the ultrasound images obtained with the Gaussian sampling scheme had resolution and contrast comparable to the reference image obtained using all the receive elements. The findings thus suggest the possibility of reducing the receive channel count of an SA ultrasound system without practically sacrificing image quality.
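A Gaussian-weighted undersampling scheme like the one referenced in the abstract above can be sketched as a density-weighted random selection of receive elements. All parameters (element count, sigma, centering on the aperture middle) are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def gaussian_receive_mask(n_elements=128, keep_frac=0.5,
                          sigma_frac=0.25, seed=0):
    """Keep `keep_frac` of the receive elements, sampled without
    replacement with Gaussian-shaped probability centered on the
    aperture, so central elements are more likely to be retained."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_elements)
    center = (n_elements - 1) / 2.0
    w = np.exp(-0.5 * ((idx - center) / (sigma_frac * n_elements)) ** 2)
    keep = rng.choice(idx, size=int(keep_frac * n_elements),
                      replace=False, p=w / w.sum())
    mask = np.zeros(n_elements, dtype=bool)
    mask[keep] = True
    return mask

m = gaussian_receive_mask()
print(int(m.sum()))        # 64: half of the 128 elements are kept
```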
|
|
16:00-17:30, Paper SuPbPo-02.4 | Add to My Program |
High-Frequency Quantitative Photoacoustic Imaging and Pixel-Level Tissue Classification |
|
Basavarajappa, Lokesh | University of Texas at Dallas |
Hoyt, Kenneth | University of Texas at Dallas |
Keywords: Optical coherence tomography, Ultrasound, Classification
Abstract: The recently proposed frequency-domain technique for photoacoustic (PA) image formation helps to differentiate between different-sized structures. Although this technique has provided encouraging preliminary results, it currently lacks a mathematical framework. H-scan ultrasound (US) imaging was introduced for characterizing acoustic scattering behavior at the pixel level. This US imaging technique relies on matching a model that describes US image formation to the mathematics of a class of Gaussian-weighted Hermite polynomial (GWHP) functions. Herein, we propose extending the H-scan US image processing method to the analysis of PA signals. Radiofrequency (RF) PA data were obtained using a Vevo 3100 with LAZR-X system (Fujifilm VisualSonics). Experiments were performed using tissue-mimicking phantoms embedded with optically absorbing spherical scatterers. Overall, preliminary results demonstrate that H-scan US-based processing of PA signals can help distinguish micrometer-sized objects of varying size.
|
|
16:00-17:30, Paper SuPbPo-02.5 | Add to My Program |
A Filtered Delay Weight Multiply and Sum (F-DwMAS) Beamforming for Ultrasound Imaging: Preliminary Results |
|
Vayyeti, Anudeep | Indian Institute of Technology Madras |
Thittai, Arun Kumar | IIT MADRAS |
Keywords: Ultrasound, Image reconstruction - analytical & iterative methods, Image quality assessment
Abstract: In this paper, the development of a modified beamforming method, named Filtered Delay Weight Multiply and Sum (F-DwMAS) beamforming, is reported. The developed F-DwMAS method was investigated on a minimum-redundancy synthetic aperture technique, called the 2-Receive Synthetic Aperture Focusing Technique (2R-SAFT), which uses one element on transmit and two consecutive elements on receive, for achieving high-quality imaging in low-complexity ultrasound systems. Notably, in F-DwMAS, an additional aperture window function is designed and incorporated into the recently introduced F-DMAS method. F-DwMAS, F-DMAS, and Delay and Sum (DAS) were compared in terms of Lateral Resolution (LR), Axial Resolution (AR), Contrast Ratio (CR), and Contrast-to-Noise Ratio (CNR) in a simulation study. Results show that the proposed F-DwMAS improved LR by 22.86% and 25.19%, AR by 5.18% and 11.06%, and CR by 152% and 112.8% compared to F-DMAS and DAS, respectively. However, the CNR of F-DwMAS was 12.3% lower than that of DAS, but 103.09% higher than that of F-DMAS. Hence, it can be concluded that F-DwMAS improves image quality compared to DAS and F-DMAS.
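The Delay Multiply and Sum core that F-DwMAS builds on can be sketched for a single image point as below. The per-element weights stand in for the aperture window that F-DwMAS adds; the band-pass filtering step of F-DMAS is omitted, and the function name and example values are illustrative assumptions.

```python
import numpy as np

def dwmas_point(delayed, weights=None):
    """(Weighted) Delay Multiply and Sum output for one image point.
    `delayed` holds the delay-compensated RF samples, one per receive
    element. DMAS sums the signed-square-rooted pairwise products; the
    signed square root preserves polarity while keeping the products
    dimensionally consistent with the input signal."""
    s = delayed if weights is None else weights * delayed
    sr = np.sign(s) * np.sqrt(np.abs(s))
    total = 0.0
    for i in range(len(sr)):
        for j in range(i + 1, len(sr)):
            total += sr[i] * sr[j]
    return total

coh = dwmas_point(np.array([1.0, 0.9, 1.1, 1.0]))    # coherent echoes
inc = dwmas_point(np.array([1.0, -0.9, 1.1, -1.0]))  # sign-flipping clutter
print(coh > inc)        # coherent samples reinforce; clutter cancels
```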
|
|
16:00-17:30, Paper SuPbPo-02.6 | Add to My Program |
Reflection Ultrasound Tomography Using Localized Freehand Scans |
|
Benjamin, Alex | MIT |
Ely, Gregory | Massachusetts Institute of Technology |
Fincke, Jonathan | Massachusetts Institute of Technology |
Anthony, Brian W. | Massachusetts Institute of Technology |
Keywords: Ultrasound, Inverse methods, Medical robotics
Abstract: Speed of sound (SOS) is a biomarker that aids clinicians in tracking the onset and progression of diseases such as breast cancer and fatty liver disease. In this paper, we propose a framework to generate accurate 2D SOS maps with a commercial ultrasound probe. We simulate freehand ultrasound probe motion and use a multi-look framework for reflection travel-time tomography. In these simulations, the ``measured'' travel times are computed using a bent-ray Eikonal solver, and direct inversion for compressional speed of sound is performed. We show that the assumption of straight rays breaks down for large velocity perturbations (greater than 1 percent); the error increases 70-fold for a velocity perturbation increase of 1.5 percent. Moreover, the use of multiple looks greatly aids the inversion process: simulated RMSE drops by roughly 15 dB when the maximum scanning angle is increased from 0 to 45 degrees.
|
|
16:00-17:30, Paper SuPbPo-02.7 | Add to My Program |
Regularized Kurtosis Imaging and Its Comparison with Regularized Log Spectral Difference and Regularized Nakagami Imaging for Microwave Hyperthermia Hotspot Monitoring |
|
Kothawala, AliArshad | Indian Institute of Technology Madras |
Baskaran, Divya Baskaran | Indian Institute of Technology Madras |
Arunachalam, Kavitha | Duke University |
Thittai, Arun Kumar | IIT MADRAS |
Keywords: Ultrasound, Microwave, Thermal imaging
Abstract: Microwave hyperthermia uses microwaves to deliver heat to biological tissues. Real-time temperature monitoring during treatment is important for its efficacy and effectiveness. Non-invasive methods such as CT, MR, and ultrasound (US) have been actively researched for use in hyperthermia monitoring. US has the inherent advantages of real-time imaging, portability, and a non-ionizing nature. It is also known from the literature that the acoustic properties of tissue are sensitive to temperature, and this has been harnessed to track the evolution of the hotspot and temperature in the high-temperature zones encountered in ablation treatments. However, their usage in the low-temperature zones typically observed in hyperthermia appears to be less explored. This study introduces an improved method of regularized kurtosis imaging (RKI) and compares its performance against regularized log spectral difference (RLSD) and regularized Nakagami imaging (RNI) methods. The performance of these methods is compared against the ground truth estimated from IR thermal images in an experimental study on tissue-mimicking PAG-agar based phantoms. The error in the area estimated by RKI was 10.6%. The errors in the lateral and axial coordinates of the centroid were 5.92% and 0.47%, respectively. The structural similarity index was 0.82 for RKI, compared with 0.76 and 0.72 for RLSD and RNI, respectively. The results are promising and offer an alternative way to track the hotspot during microwave hyperthermia treatment.
|
|
SuPbPo-03 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Abdomen Segmentation |
|
|
Chair: Cunha, Alexandre | California Institute of Technology |
|
16:00-17:30, Paper SuPbPo-03.1 | Add to My Program |
CNN in CT Image Segmentation: Beyond Loss Function for Exploiting Ground Truth Images |
|
Song, Youyi | The Hong Kong Polytechnic University |
Yu, Zhen | Shenzhen University |
Zhou, Teng | Shantou University |
Teoh, Jeremy Yuen-Chun | The Chinese University of Hong Kong |
Lei, Baiying | Shenzhen University |
Choi, Kup-Sze | The Hong Kong Polytechnic University |
Qin, Jing | Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University |
Keywords: Image segmentation, Abdomen, Computed tomography (CT)
Abstract: Exploiting more information from ground truth (GT) images is a new research direction for further improving CNN performance in CT image segmentation. Previous methods focus on devising the loss function for this purpose. However, it is rather difficult to devise a general and optimization-friendly loss function. We present a novel and practical method that exploits GT images beyond the loss function. Our insight is that the feature maps of two CNNs trained respectively on GT and CT images should be similar in some metric space, because both are used to describe the same objects for the same purpose. We hence exploit GT images by enforcing consistency between the two CNNs' feature maps. We assess the proposed method on two datasets and compare its performance to several competitive methods. Extensive experimental results show that the proposed method is effective, outperforming all the compared methods.
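The feature-map consistency idea in the abstract above can be sketched as a distance between normalized feature maps from the two networks. The choice of mean squared distance and per-map normalization is an assumption; the paper's actual metric space is not specified in the abstract.

```python
import numpy as np

def consistency_loss(feat_ct, feat_gt):
    """Encourage the CT-trained and GT-trained networks' feature maps
    to agree by minimizing their mean squared distance, after mean
    removal and L2 normalization so only the response pattern matters
    (not its scale or offset)."""
    def norm(f):
        f = f - f.mean()
        return f / (np.linalg.norm(f) + 1e-8)
    return float(np.mean((norm(feat_ct) - norm(feat_gt)) ** 2))

rng = np.random.default_rng(1)
f1 = rng.standard_normal((8, 16, 16))
same = consistency_loss(f1, 2.0 * f1 + 3.0)   # same pattern, rescaled
diff = consistency_loss(f1, rng.standard_normal((8, 16, 16)))
print(same < diff)      # matching patterns give a (near-)zero loss
```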
|
|
16:00-17:30, Paper SuPbPo-03.2 | Add to My Program |
Progressive Abdominal Segmentation with Adaptively Hard Region Prediction and Feature Enhancement |
|
Wang, Qin | The Chinese University of Hong Kong (Shenzhen) & Shenzhen Research |
Zhao, Weibing | The Chinese University of Hong Kong, Shenzhen, Shenzhen Research |
Zhang, Ruimao | Sensetime Research |
Li, Zhen | Chinese University of Hong Kong, Shenzhen |
Shu Guang, Cui | CUHK-SZ |
Keywords: Image segmentation, Computer-aided detection and diagnosis (CAD), Machine learning
Abstract: Abdominal multi-organ segmentation has received much attention in recent medical image analysis. In this paper, we propose a novel progressive framework to improve the segmentation accuracy of abdominal organs with various shapes and small sizes. The framework consists of three parts: 1) a Global Segmentation Module extracting a pixel-wise global feature representation; 2) a Localization Module adaptively discovering the top-n hard local regions, effective in both the training and testing phases; 3) an Enhancement Module enhancing the features of hard local regions and aggregating them with the global features to refine the final representation. Specifically, we predefine 512 region proposals on the cross-sectional view of the CT image to generate coordinate pseudo-labels that supervise the Localization Module. In the training phase, we calculate the segmentation error of each region proposal and select the eight with the lowest Dice scores as the hard regions. Once these hard regions are determined, their center coordinates are adopted as pseudo-labels to train the Localization Network using a Manhattan distance loss. At inference, the entire model directly performs hard-region localization and feature enhancement to improve pixel-wise accuracy. Without bells and whistles, extensive experimental results demonstrate that the proposed method outperforms its counterparts.
|
|
16:00-17:30, Paper SuPbPo-03.3 | Add to My Program |
An Efficient Hybrid Model for Kidney Tumor Segmentation in CT Images |
|
Yan, Xu | CUHK-SZ |
Yuan, Kun | University of Ottawa |
Zhao, Weibing | The Chinese University of Hong Kong, Shenzhen, Shenzhen Research |
Wang, Sheng | King Abdullah University of Science and Technology |
Li, Zhen | Chinese University of Hong Kong, Shenzhen |
Shu Guang, Cui | CUHK-SZ |
Keywords: Machine learning, Kidney, Computed tomography (CT)
Abstract: Kidney tumor segmentation from CT volumes is essential for lesion diagnosis. Considering the excessive GPU memory requirements of 3D medical images, slices and patches are exploited for training and inference in conventional U-Net variant architectures, which inevitably hampers contextual learning. In this paper, we propose a novel and effective hybrid model for kidney tumor segmentation in CT images with two parts: 1) a Foreground Segmentation Network; 2) a Sparse Point-Cloud Segmentation Network. Specifically, the Foreground Segmentation Network first segments the foreground, i.e., kidneys with tumors, from the background in the voxel grid using a classical V-Net. Second, we represent the obtained foreground regions as point clouds and feed them into the Sparse Point-Cloud Segmentation Network to conduct fine-grained segmentation of kidney and tumor. The critical module embedded in the second part is an efficient Submanifold Sparse Convolutional Network (SSCN). By exploiting SSCNs, our model can take the entire foreground as input for better context learning in a memory-efficient manner, while also accounting for the anisotropy of CT images. Experiments show that our model achieves state-of-the-art tumor segmentation while significantly reducing GPU resource demand.
|
|
16:00-17:30, Paper SuPbPo-03.4 | Add to My Program |
Diagnostic Image Quality Assessment and Classification in Medical Imaging: Opportunities and Challenges |
|
Ma, Jeffrey | California Institute of Technology |
Nakarmi, Ukash | Stanford University |
Yue Sik Kin, Cedric | Stanford University, Department of Radiology |
Sandino, Christopher | Stanford University, Department of Electrical Engineering |
Cheng, Joseph | Stanford University |
Syed, Ali Bin | Stanford University, Department of Radiology |
Wei, Peter | Stanford University, Department of Radiology |
Pauly, John M. | Stanford University, Department of Electrical Engineering |
Vasanawala, Shreyas | Stanford University |
Keywords: Magnetic resonance imaging (MRI), Image quality assessment, Abdomen
Abstract: Magnetic Resonance Imaging (MRI) suffers from several artifacts, the most common of which are motion artifacts. These artifacts often yield images of non-diagnostic quality. To detect such artifacts, images are prospectively evaluated by experts for their diagnostic quality, which necessitates patient revisits and rescans whenever non-diagnostic scans are encountered. This motivates the need for an automated framework capable of assessing medical image quality and detecting diagnostic and non-diagnostic images. In this paper, we explore several convolutional neural network based frameworks for medical image quality assessment and investigate several challenges therein.
|
|
16:00-17:30, Paper SuPbPo-03.5 | Add to My Program |
A Triple-Stage Self-Guided Network for Kidney Tumor Segmentation |
|
Hou, Xiaoshuai | Ping an Healthcare Technology |
Xie, Chunmei | Ping an Healthcare Technology |
Li, Fengyi | Ping an Healthcare Technology |
Wang, Jiaping | Ping an Healthcare Technology |
Lv, Chuanfeng | PingAn Tech |
Xie, Guotong | PingAn Tech |
Nan, Yang | Ping an Healthcare Technology |
Keywords: Kidney, Computer-aided detection and diagnosis (CAD), Image segmentation
Abstract: The morphological characteristics of kidney tumors are crucial factors for radiologists in making accurate diagnosis and treatment decisions. Unfortunately, performing quantitative studies of the relationship between kidney tumor morphology and clinical outcomes is very difficult because kidney tumors vary dramatically in size, shape, location, etc. Automatic semantic segmentation of kidney and tumor is a promising tool for developing advanced surgical planning techniques. In this work, we present a triple-stage self-guided network for the kidney tumor segmentation task. The low-resolution net roughly locates the volume of interest (VOI) from down-sampled CT images, while the full-resolution net and tumor refine net extract accurate boundaries of kidney and tumor within the VOI from full-resolution CT images. We propose dilated convolution blocks (DCBs) to replace the traditional pooling operations in the deeper layers of the U-Net architecture to better retain detailed semantic information. Besides, a hybrid loss of Dice and weighted cross-entropy is used to guide the model to focus on voxels close to the boundary that are hard to distinguish. We evaluate our method on the KiTS19 (MICCAI 2019 Kidney Tumor Segmentation Challenge) test dataset and achieve average Dice scores of 0.9674 and 0.8454 for kidney and tumor, respectively, which ranked 2nd in the KiTS19 challenge.
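A hybrid Dice + weighted cross-entropy loss like the one in the abstract above can be sketched as follows. The mixing weights and class weights are assumptions for illustration; the paper does not state its exact values.

```python
import numpy as np

def dice_wce_loss(prob, target, w_dice=0.5, w_ce=0.5, class_w=(1.0, 3.0)):
    """Hybrid loss on flat arrays of foreground probabilities `prob`
    and binary labels `target`. Up-weighting the foreground class in
    the cross-entropy term emphasizes rare, hard-to-classify voxels,
    while the Dice term handles class imbalance globally."""
    eps = 1e-7
    dice = 1.0 - (2 * (prob * target).sum() + eps) / \
                 (prob.sum() + target.sum() + eps)
    w = np.where(target == 1, class_w[1], class_w[0])
    ce = -(w * (target * np.log(prob + eps) +
                (1 - target) * np.log(1 - prob + eps))).mean()
    return w_dice * dice + w_ce * ce

p = np.array([0.9, 0.8, 0.1, 0.2])
t = np.array([1, 1, 0, 0])
good = dice_wce_loss(p, t)        # confident, correct predictions
bad = dice_wce_loss(1 - p, t)     # confidently wrong predictions
print(good < bad)                 # better predictions give a lower loss
```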
|
|
16:00-17:30, Paper SuPbPo-03.6 | Add to My Program |
Automated Measurement of Pancreatic Fat and Iron Concentration Using Multi-Echo and T1-Weighted MRI Data
|
Basty, Nicolas | University of Westminster |
Liu, Yi | Calico Life Sciences |
Cule, Madeleine | Calico Life Sciences |
Thomas, Elizabeth Louise | University of Westminster |
Bell, Jimmy David | University of Westminster |
Whitcher, Brandon | University of Westminster |
Keywords: Magnetic resonance imaging (MRI), Quantification and estimation, Abdomen
Abstract: We present an automated method for estimation of proton density fat fraction and iron concentration in the pancreas using both structural and quantitative imaging data present in the UK Biobank abdominal MRI acquisition protocol. Our method relies on segmenting 3D T1-weighted MRI data using a V-net convolutional neural network and extracting the location of the multi-echo slice through the segmented volume. We finally estimate the fat and iron content in the pancreas using the segmentation as a mask on the multi-echo data. Our segmentation model achieves a mean dice similarity coefficient of 0.842±0.071 on unseen data, which is comparable to the current state of the art for 3D segmentation of the pancreas. The proposed method enables an enhanced analysis of spatial distribution of proton density fat fraction and iron concentration over the current practice of manually placing regions of interest on multi-echo data.
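The final estimation step, using the segmentation as a mask on the multi-echo maps, amounts to averaging map values over segmented voxels. A minimal sketch, with flat lists standing in for the voxel maps (an assumption for illustration only):

```python
def masked_mean(values, mask):
    """Mean of a quantitative map (e.g. proton density fat fraction)
    restricted to voxels selected by a binary segmentation mask.
    `values` and `mask` are flat, same-length sequences."""
    selected = [v for v, m in zip(values, mask) if m]
    if not selected:
        raise ValueError("segmentation mask is empty")
    return sum(selected) / len(selected)
```

The same call would be applied once per quantity (fat fraction, iron concentration) over the multi-echo slice.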
|
|
16:00-17:30, Paper SuPbPo-03.7 | Add to My Program |
FGB: Feature Guidance Branch for Organ Detection in Medical Images |
|
Wang, Yixin | Chinese Academy of Sciences; University of Chinese Academy of Sciences
Zhang, Yao | University of Chinese Academy of Sciences |
Liu, Li | Lenovo AI Lab |
Zhong, Cheng | Lenovo AI Lab |
Tian, Jiang | Lenovo AI Lab |
Zhang, Yang | Lenovo AI Lab |
Shi, Zhongchao | Lenovo AI Lab |
He, Zhiqiang | Lenovo |
Keywords: Abdomen, Computed tomography (CT), Computer-aided detection and diagnosis (CAD)
Abstract: In this paper, we propose a novel method that detects and locates different abdominal organs in CT images. We 1) utilize the distributions of organs on CT images as a prior to guide object localization; 2) design an efficient guidance map and propose an interpretable scoring method, the feature guidance branch (FGB), to filter low-level feature maps by scoring them; 3) establish effective relations among feature maps by visualization to enhance interpretability. Evaluated on three public datasets, the proposed method outperforms the baseline model on all tasks by a remarkable margin. Furthermore, we conduct exhaustive visualization experiments to verify the rationality and effectiveness of our proposed model.
|
|
SuPbPo-04 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Brain Segmentation and Characterization I |
|
|
Chair: Staib, Lawrence H. | Yale University |
Co-Chair: Esteban, Oscar | Stanford University |
|
16:00-17:30, Paper SuPbPo-04.1 | Add to My Program |
Annotation-Free Gliomas Segmentation Based on a Few Labeled General Brain Tumor Images |
|
Dong, Hexin | Peking University |
Yu, Fei | Peking University |
Jiang, Han | OpenBayes Inc |
Zhang, Hua | Beijing Tiantan Hospital Affiliated to Capital Medical University
Dong, Bin | Peking University |
Li, Quanzheng | Harvard Medical School, Massachusetts General Hospital |
Zhang, Li | Peking University |
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Brain
Abstract: Pixel-level labeling for medical image segmentation is time-consuming and sometimes infeasible. Therefore, using a small amount of labeled data in one domain to help train a reasonable segmentation model for unlabeled data in another domain is an important need in medical image segmentation. In this work, we propose a new segmentation framework based on unsupervised domain adaptation and semi-supervised learning, which uses a small amount of labeled general brain tumor images and learns an effective model to segment independent brain glioma images. Our method contains two major parts. First, we use unsupervised domain adaptation to generate synthetic general brain tumor images from the brain glioma images. Then, we apply a semi-supervised learning method to train a segmentation model with a small number of labeled general brain tumor images and the unlabeled synthetic images. The experimental results show that our proposed method can use approximately 10% of the labeled data to achieve accuracy comparable to a model trained with all labeled data.
|
|
16:00-17:30, Paper SuPbPo-04.2 | Add to My Program |
6-Month Infant Brain MRI Segmentation Guided by 24-Month Data Using Cycle-Consistent Adversarial Networks |
|
Bui, Duc Toan | University of North Carolina at Chapel Hill |
Wang, Li | UNC-CHAPEL HILL |
Lin, Weili | UNC-CHAPEL HILL |
Li, Gang | University of North Carolina at Chapel Hill |
Shen, Dinggang | UNC-Chapel Hill |
Keywords: Image segmentation, Image synthesis, Brain
Abstract: Due to the extremely low intensity contrast between white matter (WM) and gray matter (GM) at around 6 months of age (the isointense phase), manual annotation is difficult, and hence the number of training labels is highly limited. Consequently, it is still challenging to automatically segment isointense infant brain MRI. Meanwhile, the intensity contrast of images in the early adult phase, such as at 24 months of age, is relatively better, and such images can be easily segmented by well-developed tools, e.g., FreeSurfer. The question, therefore, is how to employ these high-contrast images (such as 24-month-old images) to guide the segmentation of 6-month-old images. Motivated by this, we propose a method that exploits 24-month-old images for reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time points. To guarantee tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. Then, the transferred 24-month-old images are used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate the superior performance of the proposed method compared with existing deep learning-based methods.
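The feature matching loss described in the abstract, cosine distance between segmentation features of real and fake images, can be sketched as below. Which layer's features are compared is an assumption here; the paper defines the exact choice:

```python
import math

def cosine_distance(u, v, eps=1e-12):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v + eps)

def feature_matching_loss(real_feats, fake_feats):
    """Average cosine distance over corresponding feature vectors
    extracted from real and generated (fake) images."""
    pairs = list(zip(real_feats, fake_feats))
    return sum(cosine_distance(r, f) for r, f in pairs) / len(pairs)
```

Identical features give a loss near zero; orthogonal features give a loss of one, so minimising it pulls the generator's segmentation features toward those of real images.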
|
|
16:00-17:30, Paper SuPbPo-04.3 | Add to My Program |
VoteNet+ : An Improved Deep Learning Label Fusion Method for Multi-Atlas Segmentation |
|
Ding, Zhipeng | University of North Carolina at Chapel Hill |
Han, Xu | The University of North Carolina at Chapel Hill |
Niethammer, Marc | University of North Carolina at Chapel Hill |
Keywords: Image segmentation, Machine learning, Brain
Abstract: In this work, we improve the performance of multi-atlas segmentation (MAS) by integrating the recently proposed VoteNet model with the joint label fusion (JLF) approach. Specifically, we first illustrate that using a deep convolutional neural network to predict atlas probabilities can better distinguish correct atlas labels from incorrect ones than relying on image intensity difference, as is typical in JLF. Motivated by this finding, we propose VoteNet+, an improved deep network that locally predicts the probability that an atlas label differs from the label of the target image. Furthermore, we show that JLF is more suitable as a label fusion method for the VoteNet framework than plurality voting. Lastly, we use Platt scaling to calibrate the probabilities of our new model. Results on LPBA40 3D MR brain images show that our proposed method achieves better performance than VoteNet.
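Platt scaling, the calibration step named in the abstract, fits a sigmoid p(y=1|s) = sigmoid(a*s + b) to raw scores. A tiny stand-in for the logistic fit, using plain gradient descent on the log-loss (learning rate and step count are illustrative assumptions):

```python
import math

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit p(y=1|s) = sigmoid(a*s + b) by gradient descent on the
    average log-loss; a minimal sketch of the Platt-scaling fit."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            grad_a += (p - y) * s / n  # d(log-loss)/da
            grad_b += (p - y) / n      # d(log-loss)/db
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b
```

In practice one would use a ready-made logistic regression fit on held-out scores; the point is only that the learned (a, b) maps uncalibrated scores to calibrated probabilities.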
|
|
16:00-17:30, Paper SuPbPo-04.4 | Add to My Program |
Automatic Segmentation of White Matter Tracts Using Multiple Brain MRI Sequences
|
Nelkenbaum, Ilya | Tel Aviv University |
Tsarfaty, Galia | Sheba Medical Center |
Kiryati, Nahum | Tel Aviv University |
Konen, Eli | Diagnostic Imaging Unit, Sheba Medical Center |
Mayer, Arnaldo | Sheba Medical Center |
Keywords: Image segmentation, Brain, Magnetic resonance imaging (MRI)
Abstract: White matter tractography mapping is a must in neuro-surgical planning and navigation to minimize the risk of iatrogenic damage. Clinical tractography pipelines still require time-consuming manual operations and significant neuro-anatomical expertise to accurately seed the tracts and remove tractography outliers. The automatic segmentation of white matter (WM) tracts using deep neural networks has been demonstrated recently. However, most works in this area use a single brain MRI sequence, whereas neuro-radiologists rely on two or more MRI sequences, e.g., T1w and the principal direction of diffusion (PDD), for pre-surgical WM mapping. In this work, we propose a novel neural architecture for the automatic segmentation of white matter tracts that fuses multiple MRI sequences. The proposed method is demonstrated and validated on joint T1w and PDD input sequences. It compares favorably against state-of-the-art methods (Vnet, TractSeg) on the Human Connectome Project (HCP) brain scans dataset for clinically important WM tracts.
|
|
16:00-17:30, Paper SuPbPo-04.5 | Add to My Program |
Spectral Graph Transformer Networks for Brain Surface Parcellation |
|
He, Ran | Beijing Institute of Technology |
Gopinath, Karthik | ETS Montreal |
Desrosiers, Christian | École De Technologie Supérieure |
Lombaert, Herve | ETS Montreal |
Keywords: Shape analysis, Magnetic resonance imaging (MRI), Brain
Abstract: The analysis of the brain surface modeled as a graph mesh is a challenging task. Conventional deep learning approaches often rely on data lying in the Euclidean space. As an extension to irregular graphs, convolution operations are defined in the Fourier or spectral domain. This spectral domain is obtained by decomposing the graph Laplacian, which captures relevant shape information. However, the spectral decomposition across different brain graphs causes inconsistencies between the eigenvectors of individual spectral domains, causing graph learning algorithms to fail. Current spectral graph convolution methods handle this variance by separately aligning the eigenvectors to a reference brain in a slow iterative step. This paper presents a novel approach for learning the transformation matrix required to align brain meshes using a direct data-driven approach. Our alignment and graph processing method provides a fast analysis of brain surfaces. The novel Spectral Graph Transformer (SGT) network proposed in this paper uses very few randomly sub-sampled nodes in the spectral domain to learn the alignment matrix for multiple brain surfaces. We validate the use of this SGT network along with a graph convolution network to perform cortical parcellation. On 101 manually labeled brain surfaces, our method shows improved parcellation performance over a no-alignment strategy, with a significant speedup (1400-fold) over traditional iterative alignment approaches.
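The spectral domain the abstract refers to is obtained by eigendecomposing the graph Laplacian of the mesh; a minimal sketch with the unnormalised Laplacian L = D - A (the paper may use a normalised variant):

```python
import numpy as np

def spectral_embedding(adj, k):
    """Return the first k non-trivial eigenvalues and eigenvectors of
    the unnormalised graph Laplacian L = D - A. The eigenvectors form
    the spectral coordinates in which graph convolutions operate."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(laplacian)     # ascending eigenvalues
    return vals[1:k + 1], vecs[:, 1:k + 1]     # skip the constant mode
```

Because `eigh` fixes eigenvector signs and ordering only up to ambiguity, two different meshes yield inconsistent embeddings, which is exactly the misalignment the SGT network learns to correct.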
|
|
16:00-17:30, Paper SuPbPo-04.6 | Add to My Program |
A Multi-Modality Fusion Network Based on Attention Mechanism for Brain Tumor Segmentation |
|
Zhou, Tongxue | INSA Rouen, University De Rouen |
Ruan, Su | Universite De Rouen |
Guo, Yu | Tianjin University |
Canu, Stéphane | Normandie Univ, INSA Rouen, UNIROUEN, UNIHAVRE, LITIS |
Keywords: Image segmentation, Magnetic resonance imaging (MRI), Brain
Abstract: Brain tumor segmentation in magnetic resonance imaging (MRI) is necessary for diagnosis, monitoring and treatment, while manual segmentation is time-consuming, labor-intensive and subjective. In addition, a single modality cannot provide enough information for accurate segmentation. In this paper, we propose a multi-modality fusion network based on an attention mechanism for brain tumor segmentation. Our network includes four channel-independent encoding paths to independently extract features from the four modalities, a feature fusion block to fuse the four feature sets, and a decoding path to finally segment the tumor. The channel-independent encoding paths can capture modality-specific features. However, not all the features extracted from the encoders are useful for segmentation. We therefore propose to use an attention mechanism to guide the fusion block. In this way, the modality-specific features can be separately recalibrated along the channel and spatial paths, which suppresses less informative features and emphasizes useful ones. The obtained shared latent feature representation is finally projected by the decoder to the brain tumor segmentation. Experimental results on the BraTS 2017 dataset demonstrate the effectiveness of our proposed method.
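Channel recalibration of the kind the abstract describes multiplies each channel by a gate derived from a channel summary. This parameter-free sketch gates each channel by a sigmoid of its mean; the actual block uses learned weights, so treat this purely as an illustration of the recalibration idea:

```python
import math

def channel_attention(feats):
    """Recalibrate a list of channels (each a flat list of activations)
    with a sigmoid gate on the channel mean - a squeeze-and-excitation
    style sketch without the learned fully-connected layers."""
    gates = [1.0 / (1.0 + math.exp(-sum(ch) / len(ch))) for ch in feats]
    return [[g * v for v in ch] for g, ch in zip(gates, feats)]
```

Channels with strongly positive summaries pass almost unchanged, while channels with negative summaries are suppressed toward zero.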
|
|
16:00-17:30, Paper SuPbPo-04.7 | Add to My Program |
Choroid Plexus Segmentation Using Optimized 3D U-Net |
|
Zhao, Li | Children’s National Hospital |
Feng, Xue | University of Virginia |
Meyer, Craig H. | University of Virginia |
Alsop, David | Beth Israel Deaconess Medical Center and Harvard Medical School |
Keywords: Machine learning, Magnetic resonance imaging (MRI), Brain
Abstract: The choroid plexus is the primary organ that secretes cerebrospinal fluid. Its structure and function may be associated with the brain drainage pathway and the clearance of amyloid-beta in Alzheimer's disease. However, choroid plexus segmentation methods have rarely been studied. Therefore, the purpose of this work is to fill this gap using a deep convolutional network. MR images of 10 healthy subjects (75.5±8.0 years) were retrospectively selected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The benchmark for choroid plexus segmentation was provided by the FreeSurfer package with manual correction. A 3D U-Net was developed and optimized in terms of patch extraction, augmentation, and loss function. In leave-one-out cross-validations, the optimized U-Net provided superior performance compared to the FreeSurfer results (Dice score 0.732±0.046 vs 0.581±0.093, Jaccard coefficient 0.579±0.057 vs 0.416±0.091, 95% Hausdorff distance 1.871±0.549 vs 7.257±5.038, and sensitivity 0.761±0.078 vs 0.539±0.117).
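The Dice score and Jaccard coefficient reported above are standard overlap measures between a predicted and a reference binary mask; for flat 0/1 masks they can be computed as:

```python
def overlap_metrics(pred, truth):
    """Dice and Jaccard overlap between two flat binary (0/1) masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    ps, ts = sum(pred), sum(truth)
    union = ps + ts - inter
    dice = 2.0 * inter / (ps + ts) if (ps + ts) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard
```

Dice weights the intersection twice against the sum of sizes, so for the same masks it is always at least as large as Jaccard, consistent with the paired values in the abstract.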
|
|
16:00-17:30, Paper SuPbPo-04.8 | Add to My Program |
Robust Brain Magnetic Resonance Image Segmentation for Hydrocephalus Patients: Hard and Soft Attention |
|
Ren, Xuhua | Shanghai Jiao Tong University |
Huo, Jiayu | Shanghai Jiao Tong University |
Xuan, Kai | Shanghai Jiao Tong University |
Wei, Dongming | Shanghai Jiao Tong University |
Zhang, Lichi | Shanghai Jiao Tong University |
Wang, Qian | Shanghai Jiao Tong University |
Keywords: Image segmentation, Image registration, Magnetic resonance imaging (MRI)
Abstract: Brain magnetic resonance (MR) segmentation for hydrocephalus patients is a challenging task. Encoding the variation of brain anatomical structures across individuals is not easily achieved. The task becomes even more difficult when image data from hydrocephalus patients are considered, as these often have large deformations and differ significantly from normal subjects. Here, we propose a novel strategy with hard and soft attention modules to solve the segmentation problems for hydrocephalus MR images. Our main contributions are three-fold: 1) the hard-attention module generates a coarse segmentation map using a multi-atlas-based method and the VoxelMorph tool, which guides the subsequent segmentation process and improves its robustness; 2) the soft-attention module incorporates position attention to capture precise context information, which further improves the segmentation accuracy; 3) we validate our method by segmenting the insula, thalamus and many other regions of interest (ROIs) that are critical for quantifying brain MR images of hydrocephalus patients in a real clinical scenario. The proposed method achieves much improved robustness and accuracy when segmenting all 17 consciousness-related ROIs with high inter-subject variation. To the best of our knowledge, this is the first work to employ deep learning to solve the brain segmentation problems of hydrocephalus patients.
|
|
SuPbPo-05 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Machine Learning for Brain Studies I |
|
|
Chair: Aviyente, Selin | Michigan State University |
Co-Chair: Bach Cuadra, Meritxell | University of Lausanne
|
16:00-17:30, Paper SuPbPo-05.1 | Add to My Program |
Stimulus Speech Decoding from Human Cortex with Generative Adversarial Network Transfer Learning |
|
Wang, Ran | NYU |
Chen, Xupeng | New York University |
Khalilian-Gourtani, Amirhossein | New York University |
Chen, Zhaoxi | New York University |
Yu, Leyao | NYU School of Medicine |
Flinker, Adeen | NYU School of Medicine |
Wang, Yao | Polytechnic Institute of New York University |
Keywords: Machine learning, EEG & MEG, Brain
Abstract: Decoding auditory stimuli from neural activity can enable neuroprosthetics and direct communication with the brain. Some recent studies have shown successful speech decoding from intracranial recordings using deep learning models. However, the scarcity of training data leads to low-quality speech reconstruction, which prevents a complete brain-computer interface (BCI) application. In this work, we propose a transfer learning approach with a pre-trained GAN to disentangle the representation and generation layers for decoding. We first pre-train a generator to produce spectrograms from a representation space using a large corpus of natural speech data. With a small amount of paired data containing the stimulus speech and corresponding ECoG signals, we then transfer it to a bigger network with an encoder attached in front, which maps the neural signal to the representation space. To further improve the network's generalization ability, we introduce a Gaussian prior distribution regularizer on the latent representation during the transfer phase. With at most 150 training samples for each tested subject, we achieve state-of-the-art decoding performance. By visualizing the attention mask embedded in the encoder, we observe brain dynamics that are consistent with findings from previous studies investigating dynamics in the superior temporal gyrus (STG), pre-central gyrus (motor) and inferior frontal gyrus (IFG). Our findings demonstrate a high reconstruction accuracy using deep learning networks, together with the potential to elucidate interactions across different brain regions during a cognitive task.
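A Gaussian prior regulariser on a latent representation is commonly written as the KL divergence between a diagonal Gaussian posterior and a standard normal; the exact form used in the paper may differ, so this is one common instantiation:

```python
import math

def gaussian_prior_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ) summed over latent dimensions.
    Penalises latent codes that drift away from a standard normal prior."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))
```

The term is zero exactly when the latent matches the prior (mu = 0, variance = 1) and grows as the encoder pushes codes away from it, which is what regularises the transfer phase.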
|
|
16:00-17:30, Paper SuPbPo-05.2 | Add to My Program |
Siamese Verification Framework for Autism Identification During Infancy Using Cortical Path Signature Features |
|
Zhang, Xin | South China University of Technology |
Ding, Xinyao | South China University of Technology |
Wu, Zhengwang | UNC-Chapel Hill |
Xia, Jing | Shandong University |
Ni, Hao | University College London |
Xu, Xiangmin | South China University of Technology |
Liao, Lufan | South China University of Technology |
Wang, Li | UNC-CHAPEL HILL |
Li, Gang | University of North Carolina at Chapel Hill |
Keywords: Pattern recognition and classification, Magnetic resonance imaging (MRI), Brain
Abstract: Autism spectrum disorder (ASD) is a complex neurodevelopmental disability that lacks biological diagnostic markers. Therefore, identifying ASD directly from brain imaging data has become an important research topic. In this work, we propose a Siamese verification model to identify ASD using cortical features at 6 and 12 months of age. Rather than directly classifying whether a testing subject has ASD, we determine whether it has the same or a different label as a reference subject who has already been diagnosed. Then, based on the comparisons to all the reference subjects, we can predict the label of the testing subject. The advantage of modeling the classification problem as a verification framework is that it greatly enlarges the training data size and enables us to train a more accurate and reliable model in an end-to-end manner. In addition, to further improve the classification performance, we introduce path signature (PS) features, which capture the dynamic longitudinal information of brain development for ASD identification. Experiments show that our proposed method achieves the best results, i.e., 87% accuracy, 83% sensitivity and 90% specificity, compared to state-of-the-art methods.
|
|
16:00-17:30, Paper SuPbPo-05.3 | Add to My Program |
BAENET: A Brain Age Estimation Network with 3D Skipping and Outlier Constraint Loss |
|
Qu, Taiping | Jilin University |
Yue, Yangming | Deepwise AI Lab |
Zhang, Qirui | Department of Medical Imaging, Jinling Hospital, Nanjing University
Wang, Cheng | Beijing Deepwise Technology Co.Ltd |
Zhang, Zhiqiang | Nanjing University School of Medicine |
Lu, Guangming | Department of Medical Imaging, Jinling Hospital, Nanjing University
Du, Wei | Jilin University |
Li, Xiuli | Deepwise Inc |
Keywords: Magnetic resonance imaging (MRI), Brain, Machine learning
Abstract: The potential pattern changes in brain micro-structure can be used for brain development assessment in children and adolescents via MRI scans. In this paper, we propose a highly accurate and efficient end-to-end brain age estimation network (BAENET) for T1-weighted MRI images. In the network, 3D skipping and an outlier constraint loss are designed to accommodate a deeper network and increase robustness. Besides, we incorporate neuroimaging domain knowledge into stratified sampling for better generalization across datasets with different age distributions, and into gender learning for more gender-specific features during modeling. We verify the effectiveness of the proposed method on the public ABIDE2 and ADHD200 benchmarks, consisting of 382 and 378 scans of normal children respectively. Our BAENET achieves MAEs of 1.11 and 1.16, significantly outperforming the best reported methods by 5.1% and 9.4%.
|
|
16:00-17:30, Paper SuPbPo-05.4 | Add to My Program |
Linear Mixed Models Minimise False Positive Rate and Enhance Precision of Mass Univariate Vertex-Wise Analyses of Grey-Matter
|
Couvy-Duchesne, Baptiste | Institute for Molecular Bioscience, the University of Queensland |
Zhang, Futao | Institute for Molecular Bioscience, the University of Queensland |
Kemper, Kathryn | Institute for Molecular Bioscience, the University of Queensland |
Sidorenko, Julia | Institute for Molecular Bioscience, the University of Queensland |
Wray, Naomi | Institute for Molecular Bioscience, the University of Queensland |
Visscher, Peter | Institute for Molecular Bioscience, the University of Queensland |
Colliot, Olivier | CNRS UPR640 - LENA
Yang, Jian | Institute for Molecular Bioscience, the University of Queensland |
Keywords: Data Mining, Brain, Magnetic resonance imaging (MRI)
Abstract: We evaluated the statistical power, family-wise error rate (FWER) and precision of several competing methods that perform mass-univariate vertex-wise analyses of grey matter (thickness and surface area). In particular, we compared several generalised linear models (GLMs, the current state of the art) to linear mixed models (LMMs), which have proven superior in genomics. We used phenotypes simulated from real vertex-wise data and a large sample size (N=8,662), which may soon become the norm in neuroimaging. No method ensured an FWER < 5% (at a vertex or cluster level) after applying Bonferroni correction for multiple testing. LMMs should be preferred to GLMs as they minimise the false positive rate and yield smaller clusters of associations. Associations on real phenotypes must be interpreted with caution, and replication may be warranted before concluding that an association exists.
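The Bonferroni correction applied in the study controls the FWER by testing each of m vertices at level alpha/m:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m tests: reject hypothesis i only when
    p_values[i] < alpha / m, controlling the family-wise error rate."""
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]
```

With hundreds of thousands of vertices the per-test threshold becomes extremely small, which is why the paper's finding that FWER still exceeded 5% after this correction is notable.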
|
|
16:00-17:30, Paper SuPbPo-05.5 | Add to My Program |
Sex Differences in the Brain: Divergent Results from Traditional Machine Learning and Convolutional Networks |
|
Brueggeman, Leo | University of Iowa |
Thomas, Taylor | University of Iowa |
Koomar, Tanner | University of Iowa |
Hoskins, Brady | University of Iowa |
Michaelson, Jacob | University of Iowa |
|
|
16:00-17:30, Paper SuPbPo-05.6 | Add to My Program |
Automatic Labeling of Cortical Sulci Using Spherical Convolutional Neural Networks in a Developmental Cohort |
|
Hao, Lingyan | Vanderbilt University |
Bao, Shunxing | Vanderbilt University |
Tang, Yucheng | Vanderbilt University |
Gao, Riqiang | Vanderbilt University |
Parvathaneni, Prasanna | National Institutes of Health |
Miller, Jacob | University of California Berkeley |
Voorhies, Willa | University of California Berkeley |
Yao, Jewelia | University of California Berkeley |
Bunge, Silvia | University of California Berkeley |
Weiner, Kevin | University of California Berkeley |
Landman, Bennett | Vanderbilt University |
Lyu, Ilwoo | Vanderbilt University |
Keywords: Image segmentation, Machine learning, Brain
Abstract: In this paper, we present an automatic labeling framework for sulci in the human lateral prefrontal cortex (PFC). We adapt an existing spherical U-Net architecture with our recent surface data augmentation technique to improve sulcal labeling accuracy in a developmental cohort. Specifically, our framework consists of the following key components: (1) augmented geometrical features generated during cortical surface registration, (2) a spherical U-Net architecture to efficiently fit the augmented features, and (3) post-refinement of the sulcal labeling by optimizing spatial coherence via a graph cut technique. We validate our method on 30 healthy subjects with manual labeling of sulcal regions within the PFC. In the experiments, we demonstrate significantly improved labeling performance (0.7749 mean Dice overlap) compared to multi-atlas (0.6410) and standard spherical U-Net (0.7011) approaches, respectively (p < 0.05). Additionally, the proposed method produces a full set of sulcal labels in 20 seconds in this developmental cohort.
|
|
16:00-17:30, Paper SuPbPo-05.7 | Add to My Program |
A Novel End-To-End Hybrid Network for Alzheimer’s Disease Detection Using 3D CNN and 3D CLSTM |
|
Xia, Zaimin | Shenzhen University |
Yue, Guanghui | Shenzhen University |
Xu, Frank Yanwu | Baidu Online Network Technology (Beijing) Co. Ltd |
Feng, Chiyu | Shenzhen University |
Yang, Mengya | Shenzhen University |
Wang, Tianfu | Shenzhen University |
Lei, Baiying | Shenzhen University |
Keywords: Magnetic resonance imaging (MRI), Brain, Classification
Abstract: Structural magnetic resonance imaging (sMRI) plays an important role in Alzheimer's disease (AD) detection as it shows morphological changes caused by brain atrophy. Convolutional neural networks (CNNs) have been successfully used to achieve accurate diagnosis of AD. However, most existing methods utilize shallow CNN structures due to the small amount of sMRI data, which limits the ability of the CNN to learn high-level features. Thus, in this paper, we propose a novel unified CNN framework for AD identification, where both a 3D CNN and a 3D convolutional long short-term memory (3D CLSTM) are employed. Specifically, we first exploit a 6-layer 3D CNN to learn informative features, then a 3D CLSTM is leveraged to further extract channel-wise higher-level information. Extensive experimental results on the ADNI dataset show that our model achieves an accuracy of 94.19% for AD detection, which outperforms state-of-the-art methods and indicates the high effectiveness of our proposed method.
|
|
16:00-17:30, Paper SuPbPo-05.8 | Add to My Program |
Brain Age Estimation Using LSTM on Children's Brain MRI |
|
He, Sheng | Boston Children's Hospital, Harvard Medical School |
Gollub, Randy | Massachusetts General Hospital, Harvard Medical School
Murphy, Shawn Norman | Massachusetts General Hospital, Harvard Medical School
Perez, Juan David | Boston Children's Hospital, Harvard Medical School |
Prabhu, Sanjay | Boston Children's Hospital, Harvard Medical School |
Pienaar, Rudolph | Boston Children's Hospital |
Robertson, Richard L. | Boston Children’s Hospital, Harvard Medical School
Grant, Patricia Ellen | Boston Children's Hospital, Harvard Medical School |
Ou, Yangming | Boston Children’s Hospital, Harvard Medical School |
Keywords: Magnetic resonance imaging (MRI), Brain, Quantification and estimation
Abstract: Brain age prediction based on children's brain MRI is an important biomarker for brain health and brain development analysis. In this paper, we treat the 3D brain MRI volume as a sequence of 2D images and propose a new framework using a recurrent neural network for brain age estimation. The proposed method, 2D-ResNet18+LSTM, consists of four parts: a 2D ResNet18 for feature extraction on 2D images, a pooling layer for feature reduction over the sequence, a long short-term memory (LSTM) layer, and a final regression layer. We apply the proposed method to the public multisite NIH-PD dataset and evaluate generalization on a second multisite dataset, showing that the proposed 2D-ResNet18+LSTM method provides better results than traditional 3D neural networks for brain age estimation.
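The pooling layer between the per-slice feature extractor and the LSTM reduces each slice's feature vector; a minimal mean-pooling sketch over a slice sequence (the ResNet18 extractor itself is assumed, so plain lists stand in for its outputs):

```python
def pool_slice_features(slice_feats):
    """Mean-pool a sequence of per-slice feature vectors (one list per
    2D slice) into a single reduced feature vector per position."""
    n = len(slice_feats)
    dim = len(slice_feats[0])
    return [sum(f[i] for f in slice_feats) / n for i in range(dim)]
```

In the actual pipeline, pooling is applied before the LSTM consumes the slice sequence, keeping the recurrent input dimensionality manageable.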
|
|
16:00-17:30, Paper SuPbPo-05.9 | Add to My Program |
Multi-Branch Deformable Convolutional Neural Network with Label Distribution Learning for Fetal Brain Age Prediction |
|
Liao, Lufan | South China University of Technology |
Zhang, Xin | South China University of Technology |
Zhao, Fenqiang | University of North Carolina at Chapel Hill |
Lou, Jingjiao | Shandong Normal University |
Wang, Li | UNC-CHAPEL HILL |
Xu, Xiangmin | South China University of Technology |
Zhang, He | Obstetrics and Gynecology Hospital, Fudan University |
Li, Gang | University of North Carolina at Chapel Hill |
Keywords: Pattern recognition and classification, Magnetic resonance imaging (MRI), Brain
Abstract: MRI-based fetal brain age prediction is crucial for fetal brain development analysis and the early diagnosis of congenital anomalies. The location and orientation of the fetal brain are highly variable and disturbed by adjacent organs, which poses great challenges for fetal brain age prediction. To address this problem, we propose an effective framework based on a deformable convolutional neural network for fetal brain age prediction. Considering the insufficiency of data, we introduce label distribution learning (LDL), which is able to deal with the small-sample problem, and we integrate the LDL information into our end-to-end network. Moreover, to fully utilize the complementary multi-view data of fetal brain MRI stacks, a multi-branch CNN is proposed to aggregate multi-view information. We evaluate our method on a fetal brain MRI dataset with 289 subjects and achieve promising age prediction performance.
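Label distribution learning typically replaces a scalar age target with a discrete distribution over candidate ages, most often a normalised Gaussian centred at the true age; the width `sigma` is an assumed hyper-parameter, not a value from the paper:

```python
import math

def age_label_distribution(age, candidate_ages, sigma=1.0):
    """Encode a scalar age as a discrete Gaussian label distribution
    over candidate ages - the usual LDL target encoding."""
    weights = [math.exp(-0.5 * ((a - age) / sigma) ** 2)
               for a in candidate_ages]
    total = sum(weights)
    return [w / total for w in weights]
```

Neighbouring ages receive non-zero probability mass, so each training sample supervises several output bins at once, which is how LDL mitigates the small-sample problem.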
|
|
SuPbPo-06 Poster Session, Oakdale Foyer Coral Foyer |
Add to My Program |
Segmentation – Methods & Applications I |
|
|
|
16:00-17:30, Paper SuPbPo-06.1 | Add to My Program |
Leveraging Self-Supervised Denoising for Image Segmentation |
|
Prakash, Mangal | MPI-CBG |
Buchholz, Tim-Oliver | CSBD/MPI-CBG |
Lalit, Manan | MPI-CBG |
Tomancak, Pavel | MPI-CBG |
Jug, Florian | MPI-CBG |
Krull, Alexander | MPI-CBG |
Keywords: Image segmentation, Machine learning, Image enhancement/restoration(noise and artifact reduction)
Abstract: Deep learning (DL) has arguably emerged as the method of choice for the detection and segmentation of biological structures in microscopy images. However, DL typically needs copious amounts of annotated training data, which for biomedical problems is usually not available and excessively expensive to generate. Additionally, tasks become harder in the presence of noise, requiring even more high-quality training data. Hence, we propose to use denoising networks to improve the performance of other DL-based image segmentation methods. More specifically, we present ideas on how state-of-the-art self-supervised CARE networks can improve cell/nuclei segmentation in microscopy data. Using two state-of-the-art baseline methods, U-Net and StarDist, we show that our ideas consistently improve the quality of the resulting segmentations, especially when only limited training data for noisy micrographs are available.
|
|
16:00-17:30, Paper SuPbPo-06.2 | Add to My Program |
Towards Fully Automatic 2D US to 3D CT/MR Registration: A Novel Segmentation-Based Strategy |
|
Wei, Wei | University of Magdeburg, Germany |
Rak, Marko | University of Magdeburg |
Alpers, Julian | University of Magdeburg, Germany |
Hansen, Christian | Otto-Von-Guericke-University |
Keywords: Ultrasound, Liver, Image registration
Abstract: 2D-US to 3D-CT/MR registration is a crucial module in minimally invasive ultrasound-guided liver tumor ablations. Many modern registration methods still require manual or semi-automatic slice pose initialization due to insufficient robustness of automatic methods. State-of-the-art regression networks do not work well for liver 2D-US to 3D-CT/MR registration because of the tremendous inter-patient variability of the liver anatomy. To address this unsolved problem, we propose a deep learning network pipeline which, instead of a regression, starts with a classification network to recognize the coarse ultrasound transducer pose, followed by a segmentation network to detect the target plane of the US image in the CT/MR volume. The rigid registration result is derived using plane regression. In contrast to the state-of-the-art regression networks, we do not estimate registration parameters from multi-modal images directly, but rather focus on segmenting the target slice plane in the volume. The experiments reveal that this novel registration strategy can identify the initial slice pose in a 3D volume more reliably than standard regression-based techniques. The proposed method was evaluated with 1035 US images from 52 patients. We achieved angle and distance errors of 12.7±6.2° and 4.9±3.1 mm, clearly outperforming the state-of-the-art regression strategy, which results in a 37.0±15.6° angle error and a 19.0±11.6 mm distance error.
|
|
16:00-17:30, Paper SuPbPo-06.3 | Add to My Program |
Deep Learning Framework for Epithelium Density Estimation in Prostate Multi-Parametric Magnetic Resonance Imaging |
|
Kwak, Jin Tae | Sejong University |
To, Nguyen Nhat Minh | Sejong University |
Xu, Sheng | Philips Research North America |
Sankineni, Sandeep | National Cancer Institute, NIH |
Turkbey, Baris | Molecular Imaging Program, NCI, NIH |
Choyke, Peter | National Institutes of Health |
Pinto, Peter | National Institutes of Health |
Wood, Bradford | NIH |
Merino, Maria | NIH |
Moreno, Vanessa | NIH |
Keywords: Prostate, Histopathology imaging (e.g. whole slide imaging), Magnetic resonance imaging (MRI)
Abstract: Multi-parametric magnetic resonance imaging (mpMRI) permits non-invasive visualization and localization of clinically important cancers in the prostate. However, it cannot fully describe tumor heterogeneity and microstructures that are crucial for cancer management and treatment. Herein, we develop a deep learning framework that can predict epithelium density of the prostate in mpMRI. A deep convolutional neural network is built to estimate epithelium density on a per-voxel basis. Equipped with an advanced design of the neural network and loss function, the proposed method obtained an SSIM of 0.744 and an MAE of 6.448% in cross-validation. It also outperformed the competing network. The results are promising as a potential tool to analyze tissue characteristics of the prostate in mpMRI.
|
|
16:00-17:30, Paper SuPbPo-06.4 | Add to My Program |
V-Net Light - Parameter-Efficient 3-D Convolutional Neural Network for Prostate MRI Segmentation |
|
Yaniv, Ophir | Tel Aviv University |
Portnoy, Orith | Diagnostic Imaging Unit, Sheba Medical Center |
Talmon, Amit | Diagnostic Imaging Unit, Sheba Medical Center |
Kiryati, Nahum | Tel Aviv University |
Konen, Eli | Diagnostic Imaging Unit, Sheba Medical Center |
Mayer, Arnaldo | Sheba Medical Center |
Keywords: Image segmentation, Prostate, Magnetic resonance imaging (MRI)
Abstract: Prostate MRI segmentation has become an important tool for quantitative estimation of the gland volume during diagnostic imaging. It is also a critical step in the fusion between MRI and transrectal ultrasound (TRUS) for fusion-guided biopsy or therapy. 3-D neural networks have demonstrated strong potential for this task, but require substantial computational resources due to their large number of parameters. In this work, we focus on the efficiency of the segmentation network in terms of speed and memory requirements. Specifically, we aim at reaching state-of-the-art results with “smaller” networks, involving significantly fewer parameters, thus making the network easier to train and operate. A novel 3-D network architecture, called V-net Light (VnL), is proposed, based on an efficient 3-D module called 3-D Light that minimizes the number of network parameters while maintaining state-of-the-art segmentation results. The proposed method is validated on the PROMISE12 challenge data [1]. The proposed VnL has only 9.1% of V-net's parameters and 3.2% of its floating-point operations (FLOPs), and uses only 9.1% of the hard-disk storage of V-net, yet the two networks achieve comparable accuracy.
|
|
16:00-17:30, Paper SuPbPo-06.5 | Add to My Program |
Condensed U-Net (CU-Net): An Improved U-Net Architecture for Cell Segmentation Powered by 4x4 Max-Pooling Layers |
|
Akbaş, Cem Emre | Masaryk University |
Kozubek, Michal | Masaryk University |
Keywords: Image segmentation, Machine learning, Microscopy - Light, Confocal, Fluorescence
Abstract: Recently, the U-Net has been the dominant approach in the cell segmentation task in biomedical images due to its success in a wide range of image recognition tasks. However, recent studies did not focus enough on updating the architecture of the U-Net and designing specialized loss functions for bioimage segmentation. We show that the U-Net architecture can achieve more successful results with efficient architectural improvements. We propose a condensed encoder-decoder scheme that employs the 4x4 max-pooling operation and triple convolutional layers. The proposed network architecture is trained using a novel combined loss function specifically designed for bioimage segmentation. On the benchmark datasets from the Cell Tracking Challenge, the experimental results show that the proposed cell segmentation system outperforms the U-Net.
|
|
16:00-17:30, Paper SuPbPo-06.6 | Add to My Program |
AttentionAnatomy: A Unified Framework for Whole-Body Organs at Risk Segmentation Using Multiple Partially Annotated Datasets |
|
Sun, Shanlin | DeepVoxel Inc |
Liu, Yang | University of California Irvine |
Bai, Narisu | DeepVoxel Inc |
Tang, Hao | University of California, Irvine |
Chen, Xuming | Department of Radiation Oncology, Shanghai General Hospital, Sha |
Huang, Qian | Department of Radiation Oncology, Shanghai General Hospital, Sha |
Liu, Yong | Department of Radiation Oncology, Shanghai General Hospital, Sha |
Xie, Xiaohui | University of California, Irvine |
Keywords: Whole-body, Image segmentation, Computed tomography (CT)
Abstract: Organs-at-risk (OAR) delineation in computed tomography (CT) is an important step in Radiation Therapy (RT) planning. Recently, deep learning based methods for OAR delineation have been proposed and applied in clinical practice for separate regions of the human body (head and neck, thorax, and abdomen). However, there has been little research on end-to-end whole-body OAR delineation because the existing datasets are mostly partially or incompletely annotated for such a task. In this paper, our proposed end-to-end convolutional neural network model, called AttentionAnatomy, can be jointly trained with three partially annotated datasets, segmenting OARs from the whole body. Our main contributions are: 1) an attention module implicitly guided by a body-region label to modulate the segmentation branch output; 2) a prediction re-calibration operation, exploiting prior information of the input images, to handle the partial-annotation (HPA) problem; 3) a new hybrid loss function combining batch Dice loss and spatially balanced focal loss to alleviate the organ size imbalance problem. Experimental results of our proposed framework presented significant improvements in both Sørensen-Dice coefficient (DSC) and 95% Hausdorff distance compared to the baseline model.
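The hybrid loss named in this abstract can be illustrated with a minimal NumPy sketch, assuming the common definitions of batch Dice loss (Dice computed over the whole batch) and binary focal loss; the weighting scheme and the "spatially balanced" variant are the authors' and are not reproduced here.

```python
import numpy as np

def batch_dice_loss(probs, targets, eps=1e-6):
    # Dice computed over the entire batch at once, which keeps small
    # organs from being drowned out on a per-image basis.
    inter = np.sum(probs * targets)
    denom = np.sum(probs) + np.sum(targets)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def focal_loss(probs, targets, gamma=2.0, eps=1e-6):
    # Standard binary focal loss: easy examples are down-weighted
    # by the modulating factor (1 - p_t)^gamma.
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))

def hybrid_loss(probs, targets, alpha=0.5):
    # alpha is a hypothetical mixing weight, not a value from the paper.
    return alpha * batch_dice_loss(probs, targets) + (1.0 - alpha) * focal_loss(probs, targets)
```

A perfect prediction drives both terms to (near) zero, while confident wrong predictions are penalized heavily by the focal term.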
|
|
16:00-17:30, Paper SuPbPo-06.7 | Add to My Program |
A Spatially Constrained Deep Convolutional Neural Network for Nerve Fiber Segmentation in Corneal Confocal Microscopic Images Using Inaccurate Annotations |
|
Zhang, Ning | University of British Columbia |
Francis, Susan | The University of Nottingham |
Malik, Rayaz | Weill Cornell Medicine-Qatar |
Chen, Xin | University of Nottingham |
Keywords: Image segmentation, Machine learning, Microscopy - Light, Confocal, Fluorescence
Abstract: Semantic image segmentation is one of the most important tasks in medical image analysis. Most state-of-the-art deep learning methods require a large number of accurately annotated examples for model training. However, accurate annotation is difficult to obtain, especially in medical applications. In this paper, we propose a spatially constrained deep convolutional neural network (DCNN) to achieve smooth and robust image segmentation using inaccurately annotated labels for training. In our proposed method, image segmentation is formulated as a graph optimization problem that is solved by a DCNN model learning process. The cost function to be optimized consists of a unary term calculated by a cross-entropy measurement and a pairwise term based on enforcing local label consistency. The proposed method has been evaluated on corneal confocal microscopy (CCM) images for nerve fiber segmentation, where accurate annotations are extremely difficult to obtain. Based on both the quantitative results on a synthetic dataset and a qualitative assessment of a real dataset, the proposed method achieves superior performance in producing high-quality segmentation results even with inaccurate labels for training.
|
|
SuPbPo-07 Poster Session, Oakdale Foyer, Coral Foyer |
Add to My Program |
Histopathology I |
|
|
Chair: Achim, Alin | University of Bristol |
Co-Chair: Padfield, Dirk | Google |
|
16:00-17:30, Paper SuPbPo-07.1 | Add to My Program |
SU-Net and DU-Net Fusion for Tumour Segmentation in Histopathology Images |
|
Li, Yilong | Queen Mary University of London |
Xu, Zhaoyang | Queen Mary University of London |
Wang, Yaqi | Hangzhou Dianzi University |
Zhou, Huiyu | University of Leicester |
Zhang, Qianni | Queen Mary University of London |
Keywords: Image segmentation, Machine learning, Histopathology imaging (e.g. whole slide imaging)
Abstract: In this work, a fusion framework is proposed for automatic cancer detection and segmentation in whole-slide histopathology images. The framework comprises two kinds of fusion: multi-scale fusion and sub-dataset fusion. For a particular type of cancer, histopathological images often demonstrate large morphological variances, so the performance of an individually trained network is usually limited. We develop a fusion model that integrates two types of U-net structures, Shallow U-net (SU-net) and Deep U-net (DU-net), trained with a variety of re-scaled images and different subsets of images, and finally ensembles them into a unified output. Smoothing and noise elimination are conducted using convolutional Conditional Random Fields (CRFs). The proposed model is validated on the Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology (ACDC@LungHP) challenge in ISBI 2019 and the Digestive-System Pathological Detection and Segmentation Challenge 2019 (DigestPath 2019) in MICCAI 2019. Our method achieves a dice coefficient of 0.7968 in ACDC@LungHP and 0.773 in DigestPath 2019, and our result in the ACDC@LungHP challenge is ranked third on the leaderboard.
|
|
16:00-17:30, Paper SuPbPo-07.2 | Add to My Program |
Histopathologic Cancer Detection by Dense-Attention Network with Incorporation of Prior Knowledge |
|
Liu, Mingyuan | Beihang University |
Yu, Yang | Beihang University |
Liao, Qingcheng | Beihang University |
Zhang, Jicong | Beihang University |
Keywords: Histopathology imaging (e.g. whole slide imaging), Tissue, Classification
Abstract: To identify cancerous regions in histology Whole-Slide Images (WSI), the common approach is to apply a patch-level classifier. Including surrounding tissue can improve the accuracy of patch-wise classification and maintain consistency across the WSI. However, the rule that surrounding tissue plays a supporting role rather than a decisive one is difficult to learn directly with a Convolutional Neural Network (CNN). In this paper, we propose the Dense-Attention Network (DAN) for cancerous patch classification, where the attention mechanism is further developed to incorporate prior knowledge about the surrounding tissue. Moreover, the effectiveness of Data Augmentation in the Inference stage (DAI) is further validated. The proposed method is evaluated on the PatchCamelyon dataset, where images with tumor tissue in the center are labeled positive and tissue in the outer regions does not influence the label. Compared with other competitive deep-learning methods, the proposed method achieves better performance in terms of AUC.
|
|
16:00-17:30, Paper SuPbPo-07.3 | Add to My Program |
Learning with Less Data Via Weakly Labeled Patch Classification in Digital Pathology |
|
Teh, Eu Wern | University of Guelph |
Taylor, Graham | University of Guelph |
Keywords: Histopathology imaging (e.g. whole slide imaging), Tissue, Machine learning
Abstract: In Digital Pathology (DP), labeled data is generally very scarce due to the requirement that medical experts provide annotations. We address this issue by learning transferable features from weakly labeled data, which are collected from various parts of the body and are organized by non-medical experts. In this paper, we show that features learned from such weakly labeled datasets are indeed transferable and allow us to achieve highly competitive patch classification results on the colorectal cancer (CRC) dataset and the PatchCamelyon (PCam) dataset by using an order of magnitude less labeled data.
|
|
16:00-17:30, Paper SuPbPo-07.4 | Add to My Program |
Cancer Sensitive Cascaded Networks (CSC-Net) for Efficient Histopathology Whole Slide Image Segmentation |
|
Sun, Shujiao | Beihang University |
Yuan, Huining | Beihang University |
Zheng, Yushan | Beihang University |
Zhang, Haopeng | Beihang University |
Hu, Dingyi | Beihang University |
Jiang, Zhiguo | Beihang University |
Keywords: Histopathology imaging (e.g. whole slide imaging), Image segmentation, Computer-aided detection and diagnosis (CAD)
Abstract: Automatic segmentation of histopathological whole slide images (WSIs) is challenging due to their high resolution and large scale. In this paper, we propose a cascade strategy for fast segmentation of WSIs based on convolutional neural networks. Our segmentation framework consists of two U-Net structures which are trained with samples from different magnifications. Meanwhile, we designed a novel cancer sensitive loss (CSL), which is effective in improving the sensitivity of cancer segmentation of the first network and reducing the false positive rate of the second network. We conducted experiments on the ACDC-LungHP dataset and compared our method with two state-of-the-art segmentation methods. The experimental results demonstrate that the proposed method can improve the segmentation accuracy while reducing the amount of computation. The dice score coefficient and precision of lung cancer segmentation are 0.694 and 0.947, respectively, which are superior to the compared methods.
|
|
16:00-17:30, Paper SuPbPo-07.5 | Add to My Program |
Weakly-Supervised Deep Stain Decomposition for Multiplex IHC Images |
|
Abousamra, Shahira | Stony Brook University |
Fassler, Danielle | Stony Brook University |
Hou, Le | Stony Brook University |
Zhang, Yuwei | Stony Brook University |
Gupta, Rajarsi | Stony Brook University Department of Biomedical Informatics |
Kurc, Tahsin | Stony Brook University |
Escobar-Hoyos, Luisa | Stony Brook University |
Samaras, Dimitris | Stony Brook University |
Knudsen, Beatrice | Cedars Sinai Medical Center |
Shroyer, Kenneth | Stony Brook Medicine |
Saltz, Joel | Stony Brook |
Chen, Chao | Stony Brook University |
Keywords: Deconvolution, Machine learning, Histopathology imaging (e.g. whole slide imaging)
Abstract: Multiplex immunohistochemistry (mIHC) is an innovative and cost-effective method that simultaneously labels multiple biomarkers in the same tissue section. Current platforms support labeling six or more cell types with different colored stains that can be visualized with brightfield light microscopy. However, analyzing and interpreting multi-colored images comprised of thousands of cells is a challenging task for both pathologists and current image analysis methods. We propose a novel deep learning based method that predicts the concentration of different stains at every pixel of a whole slide image (WSI). Our method incorporates weak annotations as training data: manually placed dots labelling different cell types based on color. We compare our method with other approaches and observe favorable performance on mIHC images.
|
|
16:00-17:30, Paper SuPbPo-07.6 | Add to My Program |
Mitosis Detection under Limited Annotation: A Joint Learning Approach |
|
Pati, Pushpak | IBM Research Zurich |
Foncubierta-Rodríguez, Antonio | IBM Research |
Goksel, Orcun | ETH Zurich |
Gabrani, Maria | IBM Research-Zurich |
Keywords: Histopathology imaging (e.g. whole slide imaging), Breast, Pattern recognition and classification
Abstract: Mitotic counting is a vital prognostic marker of tumor proliferation in breast cancer. Deep learning-based mitotic detection is on par with pathologists, but it requires large labeled data for training. We propose a deep classification framework for enhancing mitosis detection by leveraging class label information, via softmax loss, and spatial distribution information among samples, via distance metric learning. We also investigate strategies towards steadily providing informative samples to boost the learning. The efficacy of the proposed framework is established through evaluation on ICPR 2012 and AMIDA 2013 mitotic data. Our framework significantly improves the detection with small training data and achieves on par or superior performance compared to state-of-the-art methods for using the entire training data.
|
|
16:00-17:30, Paper SuPbPo-07.7 | Add to My Program |
Signet Ring Cells Detection in Histology Images with Similarity Learning |
|
Sun, Yibao | Queen Mary University of London |
Huang, Xingru | Queen Mary University of London |
Lopez Molina, Edgar Giussepi | Queen Mary |
Dong, Le | University |
Zhang, Qianni | Queen Mary University of London |
Keywords: Cells & molecules, Pattern recognition and classification, Histopathology imaging (e.g. whole slide imaging)
Abstract: The detection of signet ring cells in histology images is of great value in clinical practice. However, several factors, such as appearance variations and the lack of well-labelled data, make it a challenging task. Considering the intrinsic characteristics of signet ring cell images, a dedicated similarity learning network is designed in this paper to help discover distinctive feature representations for ring cells. Specifically, we adapt the region proposal network and add an embedding layer to enable similarity learning for training the model. Experimental results show that similarity learning can strengthen the performance of the state-of-the-art and make our approach competent for the task of signet ring cell detection.
|
|
SuPbPo-08 Poster Session, Oakdale Foyer, Coral Foyer |
Add to My Program |
Optical Microscopy and Analysis I |
|
|
Chair: Meijering, Erik | University of New South Wales |
Co-Chair: Fortun, Denis | CNRS, Université De Strasbourg |
|
16:00-17:30, Paper SuPbPo-08.4 | Add to My Program |
3D Biological Cell Reconstruction with Multi-View Geometry |
|
Lei, Yang | HP Labs |
Shkolnikov, Viktor | HP Labs |
Xin, Daisy | HP Labs |
Keywords: Image reconstruction - analytical & iterative methods, Cells & molecules, Microscopy - Light, Confocal, Fluorescence
Abstract: 3D cell modeling is an important tool for visualizing cellular structures and events, and generating accurate data for further quantitative geometric morphological analyses on cellular structures. Current methods involve highly specialized and expensive setups as well as experts in microscopy and 3D reconstruction to produce time- and work-intensive insight into cellular events. We developed a new system that reconstructs the surface geometry of 3D cellular structures from 2D image sequences in a fast and automatic way. The system rotated cells in a microfluidic device, while their images were captured by a video camera. The multi-view geometry theory was introduced to microscopy imaging to model the imaging system and define the 3D reconstruction as an inverse problem. Finally, we successfully demonstrated the reconstruction of cellular structures in their natural state.
|
|
16:00-17:30, Paper SuPbPo-08.5 | Add to My Program |
Interacting Convolution with Pyramid Structure Network for Automated Segmentation of Cervical Nuclei in Pap Smear Images |
|
Yang, Xiaoqing | University of Science and Technology of China |
Wu, Junmin | University of Science and Technology of China |
Yin, Yan | University of Science and Technology of China |
Keywords: Microscopy - Light, Confocal, Fluorescence, Cervix, Image segmentation
Abstract: The Pap smear method, which is based on the morphological properties of cell nuclei, is used to detect pre-cancerous cells in the uterine cervix. Automated and accurate segmentation of nuclei is essential for detection. In this paper, we propose an Interacting Convolution with Pyramid Structure Network (ICPN), which consists of a sufficient aggregating path that focuses on more nucleus contexts and a selecting path that enables nucleus localization. The two paths are built on Interacting Convolutional Modules (ICM) and Internal Pyramid Resolution Complementing Modules (IPRCM), respectively. ICM reciprocally aggregates different details of contexts from two kernel sizes to capture the distinguishing features of nuclei of diverse sizes and shapes. Meanwhile, IPRCM hierarchically complements multiple resolution features to prevent information loss during encoding. The proposed method shows a Zijdenbos similarity index (ZSI) of 0.972±0.04 on the Herlev dataset, compared to the state-of-the-art approach.
|
|
16:00-17:30, Paper SuPbPo-08.6 | Add to My Program |
Stitching Methodology for Whole Slide Low-Cost Robotic Microscope Based on a Smartphone |
|
Ortuño, Juan Enrique | CIBER-BBN, Universidad Politécnica De Madrid |
Lin, Lin | Universidad Politécnica De Madrid; Spotlab SL |
Ortega, Maria del Pilar | Spotlab, Madrid, Spain |
García Villena, Jaime | SpotLab S.L |
Cuadrado Sanchez, Daniel | SpotLab |
Linares, María | Research Institute Hospital 12 De Octubre, Universidad Compluten |
Santos, Andres | Universidad Politecnica Madrid |
Ledesma-Carbayo, Maria J. | Universidad Politécnica De Madrid |
Luengo-Oroz, Miguel Angel | Universidad Politécnica De Madrid |
Keywords: Microscopy - Light, Confocal, Fluorescence, Tissue, Image registration
Abstract: This work is framed within the general objective of helping to reduce the cost of telepathology in developing countries and rural areas with no access to automated whole slide imaging (WSI) scanners. We present an automated software pipeline for mosaicing images acquired with a smartphone attached to a portable, low-cost, robotic microscopic scanner fabricated using 3D printing technology. To achieve this goal, we propose a robust and automatic workflow that solves all the steps necessary to obtain a stitched image covering the area of interest from an initial 2D grid of overlapping images, including vignetting correction, lens distortion correction, registration, and blending. Optimized solutions, such as Voronoi cells and Laplacian blending strategies, are adapted to the low-cost optics and scanner device and correct the imperfections caused by the smartphone camera optics. The presented solution can produce histopathological virtual slides with diagnostic value using a low-cost portable device.
|
|
16:00-17:30, Paper SuPbPo-08.8 | Add to My Program |
A CNN Framework Based on Line Annotations for Detecting Nematodes in Microscopic Images |
|
Chen, Long | RWTH Aachen University, Aachen, Germany |
Strauch, Martin | RWTH Aachen University |
Daub, Matthias | Julius Kühn Institute: Federal Research Centre for Cultivated Pl |
Jiang, Xiaochen | RWTH Aachen University, Aachen, Germany |
Jansen, Marcus | LemnaTec GmbH, Aachen, Germany |
Luigs, Hans-Georg | LemnaTec GmbH, Aachen, Germany |
Schultz-Kuhlmann, Susanne | LWK Niedersachsen |
Kruessel, Stefan | LWK Niedersachsen |
Merhof, Dorit | RWTH Aachen University |
Keywords: Image segmentation, High-content (high-throughput) screening, Machine learning
Abstract: Plant parasitic nematodes cause damage to crop plants on a global scale. Robust detection on image data is a prerequisite for monitoring such nematodes, as well as for many biological studies involving the nematode C. elegans, a common model organism. Here, we propose a framework for detecting worm-shaped objects in microscopic images that is based on convolutional neural networks (CNNs). We annotate nematodes with curved lines along the body, which is more suitable for worm-shaped objects than bounding boxes. The trained model predicts worm skeletons and body endpoints. The endpoints serve to untangle the skeletons, from which segmentation masks are reconstructed by estimating the body width at each location along the skeleton. With light-weight backbone networks, we achieve 75.85% precision and 73.02% recall on a potato cyst nematode data set, and 84.20% precision and 85.63% recall on a public C. elegans data set.
|
|
16:00-17:30, Paper SuPbPo-08.9 | Add to My Program |
Weakly Supervised Multi-Task Learning for Cell Detection and Segmentation |
|
Chamanzar, Alireza | Carnegie Mellon University |
Nie, Yao | Roche Tissue Diagnostics |
Keywords: Histopathology imaging (e.g. whole slide imaging), Single cell & molecule detection, Image segmentation
Abstract: Cell detection and segmentation is fundamental for all downstream analysis of digital pathology images. However, obtaining the pixel-level ground truth for single cell segmentation is extremely labor intensive. To overcome this challenge, we developed an end-to-end deep learning algorithm to perform both single cell detection and segmentation using only point labels. This is achieved through the combination of different task-oriented point label encoding methods and a multi-task scheduler for training. We apply and validate our algorithm on PMS2-stained colorectal cancer and tonsil tissue images. Compared to the state-of-the-art, our algorithm shows significant improvement in cell detection and segmentation without increasing the annotation effort.
|
|
16:00-17:30, Paper SuPbPo-08.10 | Add to My Program |
When Texture Matters: Texture-Focused CNNs Outperform General Data Augmentation and Pretraining in Oral Cancer Detection |
|
Wetzer, Elisabeth | Uppsala University |
Gay, Jo | Uppsala University |
Harlin, Hugo | Umeå University |
Lindblad, Joakim | Uppsala University |
Sladoje, Nataša | Centre for Image Analysis, Uppsala University |
Keywords: Machine learning, Classification, Microscopy - Light, Confocal, Fluorescence
Abstract: Early detection is essential to reduce cancer mortality. Oral cancer could be subject to screening programs (similar to those for cervical cancer) by collecting Pap smear samples at any dentist visit. However, manual analysis of the resulting massive amount of data is prohibitively costly. Convolutional neural networks (CNNs) have shown promising results in discriminating between cancerous and non-cancerous cells, which allows efficient automated processing of cancer screening data. We investigate different CNN architectures which explicitly aim to utilize texture information in cytological cancer classification, motivated by studies showing that chromatin texture is among the most important discriminative features for that purpose. Results show that CNN classifiers guided by Local Binary Patterns (LBPs) achieve better performance than general-purpose CNNs, even when different levels of general data augmentation, as well as pretraining, are considered.
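For readers unfamiliar with the texture descriptor named in this abstract, a minimal 8-neighbour Local Binary Pattern over a grayscale patch can be sketched as follows; this is the textbook LBP formulation, not the LBP-guided architectures the paper studies.

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour Local Binary Pattern for interior pixels.

    Each interior pixel is compared against its 8 neighbours; every
    neighbour that is >= the centre contributes one bit to an 8-bit code.
    """
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    # Neighbour offsets enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(int) << bit
    return codes
```

A flat region yields the all-ones code (255), while a centre pixel brighter than all its neighbours yields 0, which is why LBP histograms capture local texture independently of absolute intensity.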
|
|
16:00-17:30, Paper SuPbPo-08.11 | Add to My Program |
Automated Quantitative Analysis of Microglia in Bright-Field Images of Zebrafish |
|
Geurts, Samuël | Delft University of Technology |
Oosterhof, Nynke | University Medical Center Groningen |
Kuil, Laura | Erasmus MC |
van der Linde, Herma | Erasmus University Medical Center |
van Ham, Tjakko | Erasmus University Medical Center |
Meijering, Erik | University of New South Wales |
Keywords: Computer-aided detection and diagnosis (CAD), Microscopy - Light, Confocal, Fluorescence, Cells & molecules
Abstract: Microglia are known to play important roles in brain development and homeostasis, yet their molecular regulation is still poorly understood. Identification of microglia regulators is facilitated by genetic screening and studying the phenotypic effects in animal models. Zebrafish are ideal for this, as their external development and transparency allow in vivo imaging by bright-field microscopy in the larval stage. However, manual analysis of the images is very labor intensive. Here we present a computational method to automate the analysis. It merges the optical sections into an all-in-focus image to simplify the subsequent steps of segmenting the brain region and detecting the contained microglia for quantification and downstream statistical testing. Evaluation on a fully annotated data set of 50 zebrafish larvae shows that the method performs close to the human expert.
|
|
16:00-17:30, Paper SuPbPo-08.12 | Add to My Program |
Three Dimensional Nuclei Segmentation and Classification of Fluorescence Microscopy Images |
|
Han, Shuo | Purdue University |
Lee, Soonam | Purdue University |
Chen, Alain | Purdue University |
Yang, Changye | Purdue University |
Salama, Paul | Indiana University-Purdue University |
Dunn, Kenneth | Indiana University |
Delp, Edward | Purdue University |
Keywords: Image segmentation, Machine learning, Microscopy - Multi-photon
Abstract: Segmentation and classification of cell nuclei in fluorescence 3D microscopy image volumes are fundamental steps for image analysis. However, accurate cell nuclei segmentation and detection in microscopy image volumes are hampered by poor image quality, crowding of nuclei, and large variation in nuclei size and shape. In this paper, we present an unsupervised volume-to-volume translation approach, adapted from the Recycle-GAN using a modified Hausdorff distance loss, for synthetically generating nuclei with better shapes. A 3D CNN with a regularization term is used for nuclei segmentation and classification, followed by nuclei boundary refinement. Experimental results demonstrate that the proposed method can successfully segment nuclei and identify individual nuclei.
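The modified Hausdorff distance referenced in this abstract is commonly defined (following Dubuisson and Jain) as the larger of the two directed average nearest-neighbour distances between point sets. A plain NumPy sketch of that definition, not the authors' differentiable loss implementation, is:

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between two point sets.

    a, b: arrays of shape (n, d) and (m, d). The directed distance from
    a to b is the mean, over points of a, of the distance to the nearest
    point of b; the MHD is the maximum of the two directed distances.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances, shape (n, m).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Averaging nearest-neighbour distances instead of taking their maximum (as the classic Hausdorff distance does) makes the measure far less sensitive to single outlier points, which is presumably why variants of it are popular as shape losses.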
|
|
|