Last updated on April 15, 2019. This conference program is tentative and subject to change.
Technical Program for Wednesday, April 10, 2019
|
WeP4O |
Foyer |
Poster Session 4 |
Poster Session |
|
10:30-11:30, Subsession WeP4O-01, Foyer | |
Breast Image Analysis Poster Session, 9 papers |
|
10:30-11:30, Subsession WeP4O-02, Foyer | |
Image Synthesis Poster Session, 6 papers |
|
10:30-11:30, Subsession WeP4O-03, Foyer | |
Image Based Surgery and Treatment Poster Session, 7 papers |
|
10:30-11:30, Subsession WeP4O-04, Foyer | |
Musculoskeletal Image Analysis Poster Session, 9 papers |
|
10:30-11:30, Subsession WeP4O-05, Foyer | |
Lung Image Analysis Poster Session, 9 papers |
|
10:30-11:30, Subsession WeP4O-06, Foyer | |
Pattern Recognition and Classification Poster Session, 12 papers |
|
10:30-11:30, Subsession WeP4O-07, Foyer | |
Cancer Imaging and Analysis Poster Session, 8 papers |
|
10:30-11:30, Subsession WeP4O-08, Foyer | |
Reconstruction and Image Quality II (Abstracts) Poster Session, 9 papers |
|
10:30-11:30, Subsession WeP4O-10, Foyer | |
Bioimaging II (Abstracts) Poster Session, 4 papers |
|
10:30-11:30, Subsession WeP4O-11, Foyer | |
Optical Image Analysis (Abstracts) Poster Session, 8 papers |
|
WeP4O-01 Poster Session, Foyer |
Add to My Program |
Breast Image Analysis |
|
|
|
10:30-11:30, Paper WeP4O-01.1 | Add to My Program |
Model Agnostic Saliency for Weakly Supervised Lesion Detection from Breast DCE-MRI |
Maicas Suso, Gabriel | The University of Adelaide |
Snaauw, Gerard | University of Adelaide |
Bradley, Andrew Peter | Queensland University of Technology |
Reid, Ian | University of Adelaide |
Carneiro, Gustavo | University of Adelaide |
Keywords: Computer-aided detection and diagnosis (CAD), Machine learning, Breast
Abstract: There is a heated debate on how to interpret the decisions provided by deep learning models (DLM), where the main approaches rely on the visualization of salient regions to interpret the DLM classification process. However, these approaches generally fail to satisfy three conditions for the problem of lesion detection from medical images: 1) for images with lesions, all salient regions should represent lesions, 2) for images containing no lesions, no salient region should be produced, and 3) lesions are generally small with relatively smooth borders. We propose a new model-agnostic paradigm to interpret DLM classification decisions supported by a novel definition of saliency that incorporates the conditions above. Our model-agnostic 1-class saliency detector (MASD) is tested on weakly supervised breast lesion detection from DCE-MRI, achieving state-of-the-art detection accuracy when compared to current visualization methods.
|
|
10:30-11:30, Paper WeP4O-01.2 | Add to My Program |
Limiting Level of False-Positive Detections in Classification of Microcalcification Clusters in Mammograms |
Sainz de Cea, Maria V. | Illinois Institute of Technology |
Nishikawa, Robert | Department of Radiology, the University of Chicago |
Yang, Yongyi | Illinois Institute of Technology |
Keywords: Computer-aided detection and diagnosis (CAD), Breast, X-ray imaging
Abstract: The presence of false positives (FPs) in the detection of clustered microcalcifications (MCs) in mammograms can negatively affect the performance of computer-aided diagnosis (CADx) for breast cancer. The level of FPs is typically controlled in a trade-off with the detection sensitivity by an operating point in detection. However, due to inter-patient variability, the occurrence of FPs can become exceedingly high in some cases when the operating point is set at a reasonable sensitivity level. We propose a strategy for automatically limiting the level of FP detections on a case-by-case basis. We first estimate the number of FPs among the detected MCs in a given lesion, then adjust the operating point if the FP fraction is determined to exceed an allowance level. We demonstrated this strategy on a set of 188 FFDM images. The results show that it can lead to a significant improvement in accuracy in differentiating MC lesions as benign or malignant based on the detected MCs.
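As an illustrative sketch only (not the authors' implementation), the case-wise operating-point adjustment described in this abstract can be outlined as follows: given per-candidate detection scores and an estimated per-candidate FP probability, raise the decision threshold until the estimated FP fraction among the retained detections drops below the allowance level. All names, the step size, and the score model below are hypothetical.

```python
import numpy as np

def limit_false_positives(scores, est_fp_prob, allowance=0.3, step=0.01):
    """Raise the detection threshold on a per-case basis until the
    estimated fraction of false positives among the retained
    detections drops below `allowance`.

    scores      : detection scores of candidate microcalcifications
    est_fp_prob : per-candidate estimated probability of being a FP
    """
    threshold = scores.min()
    while True:
        kept = scores >= threshold
        if not kept.any():
            break
        fp_fraction = est_fp_prob[kept].mean()  # expected FP fraction
        if fp_fraction <= allowance:
            break
        threshold += step
    return threshold

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2])
fp_prob = np.array([0.05, 0.10, 0.60, 0.70, 0.80])
t = limit_false_positives(scores, fp_prob, allowance=0.3)
```

In this toy example the threshold rises until only the three most confident candidates remain, whose expected FP fraction (0.25) satisfies the allowance.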
|
|
10:30-11:30, Paper WeP4O-01.3 | Add to My Program |
A Mixture of Views Network with Applications to the Classification of Breast Microcalcifications |
Shachor, Yaniv | Bar-Ilan University |
Greenspan, Hayit K. | Tel Aviv University |
Goldberger, Jacob | Bar-Ilan University |
Keywords: Classification, Breast, X-ray imaging
Abstract: In this paper we examine data fusion methods for multi-view data classification. We present a decision concept which explicitly takes into account the input multi-view structure, where for each case there is a different subset of relevant views. The proposed method, which we dub Mixture of Views, is implemented by a special purpose neural network architecture. It is demonstrated on the task of classifying breast microcalcifications as benign or malignant based on several mammography views. The single view decisions are combined by a data-driven decision, according to the relevance of each view in a given case, into a global decision. The method is evaluated on a large multi-view dataset extracted from the standardized digital database for screening mammography (DDSM). The experimental results show that our method outperforms previously suggested fusion methods.
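A minimal sketch of the data-driven combination described above, under the assumption that it behaves like a gating mechanism: per-view probabilities are mixed using relevance weights produced by a softmax. The function names and the two-view example are hypothetical, not the paper's code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_of_views(view_probs, relevance_scores):
    """Combine single-view malignancy probabilities into a global
    decision, weighting each view by its data-driven relevance.

    view_probs       : per-view P(malignant), shape (n_views,)
    relevance_scores : unnormalized relevance logits, shape (n_views,)
    """
    gate = softmax(relevance_scores)        # relevance weights sum to 1
    return float(np.dot(gate, view_probs))  # weighted global probability

# Example: the first view is deemed far more relevant than the second.
p = mixture_of_views(np.array([0.9, 0.2]), np.array([3.0, -3.0]))
```

With a strongly peaked gate, the global decision follows the dominant view almost entirely.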
|
|
10:30-11:30, Paper WeP4O-01.4 | Add to My Program |
Analysis of CEDBT and CESM Performance Using a Realistic X-Ray Simulation Platform |
Sanchez de la Rosa, Ruben | TELECOM PARISTECH |
Carton, Ann-Katherine | GE HEALTHCARE |
Milioni de Carvalho, Pablo | GE Healthcare |
Bloch, Isabelle | Télécom ParisTech - CNRS UMR 5141 LTCI |
Muller, Serge | GE Healthcare |
Keywords: Breast, X-ray imaging, Modeling - Image formation
Abstract: Contrast Enhanced Spectral Mammography (CESM) and Contrast Enhanced Digital Breast Tomosynthesis (CEDBT) are multi-energy X-ray imaging techniques involving the injection of a vascular contrast agent. Both techniques provide information on hypervascularization of lesions through contrast uptake. CESM has proved to deliver a better diagnosis of breast cancer than diagnostic mammography. CEDBT is a promising technique which provides 3D information on the contrast uptake distribution. In this paper, new steps in the acquisition chain of a previously presented image acquisition simulation platform are described, including models of scatter, image lag and electronic noise. Using this simulation platform, 290 CESM and CEDBT images were generated. A human observer experiment was then performed to compare lesion detectability and characterization. The results indicate a similar detectability and an improved characterization of shape and contrast enhancement distribution using CEDBT.
|
|
10:30-11:30, Paper WeP4O-01.5 | Add to My Program |
Registration of Breast MRI and 3D Scan Data Based on Surface Matching |
Bessa, Sílvia | INESC TEC, FCUP |
Carvalho, Pedro Henrique | INESC TEC |
Oliveira, Hélder P. | INESC TEC, Faculdade De Ciências, Universidade Do Porto |
Keywords: Magnetic resonance imaging (MRI), Breast, Image registration
Abstract: The creation of complete 3D models of the woman's breast that aggregate radiological and surface information is a crucial step in the development of surgery planning tools in the context of breast cancer. This requires the registration of interior and surface data of the breast, which has to recover large breast deformations caused by the different poses of the patient during data acquisition and has to deal with the lack of landmarks between both modalities, apart from the nipple. In this paper, the registration of Magnetic Resonance Imaging exams and 3D surface data reconstructed from Kinect acquisitions is explored using a biomechanical model of breast pose transformations combined with a free-form deformation to finely match the data. The results are promising, with an average Euclidean distance between the matched data of 0.81 +/- 0.09 mm being achieved.
|
|
10:30-11:30, Paper WeP4O-01.6 | Add to My Program |
AttentionNet: Learning Where to Focus Via Attention Mechanism for Anatomical Segmentation of Whole Breast Ultrasound Images |
Li, Hang | School of Biomedical Engineering |
Cheng, Jie-Zhi | Shenzhen University |
Chou, Yi-Hong | Division of Ultrasound and Breast Imaging, Department of Radiolo |
Qin, Jing | Center for Smart Health, School of Nursing, the Hong Kong Polyte |
Huang, Shan | Shenzhen University |
Lei, Baiying | Shenzhen University |
Keywords: Ultrasound, Breast, Image segmentation
Abstract: The main challenges in the anatomical segmentation of automated whole breast ultrasound (AWBUS) images are shadow effects, blurred boundaries, low contrast and large targets. To tackle them, a novel and effective framework named AttentionNet is developed using a self-attention mechanism in both the feature extraction and up-sampling phases. Specifically, features are first extracted with ResNeXt-50 to explore intra-channel information. To extract features and exploit channel information effectively, a spatial attention refinement (SAR) module is devised on top of the basic ResNeXt-50 module (a.k.a. ResNeXt-SAR). Then, a weighted up-sampling block (WUB) for precise pixel localization is designed by introducing high-level semantic concepts during the up-sampling phase, which play an important role in guiding the low-level features with category information. Extensive experiments are conducted on AWBUS images for multi-class segmentation. Our proposed AttentionNet achieves superior results over state-of-the-art approaches and may help assist the calculation of breast density.
|
|
10:30-11:30, Paper WeP4O-01.7 | Add to My Program |
Deep Keypoint Detection for the Aesthetic Evaluation of Breast Cancer Surgery Outcomes |
dos Santos Silva, Wilson José | INESC TEC / FEUP |
Meca Castro, Eduardo | INESC TEC |
Cardoso, Maria João | Porto Faculty of Medicine |
Fitzal, Florian | Department of Surgery, Medical University, Vienna |
Cardoso, Jaime S. | INESC TEC and University of Porto |
Keywords: Machine learning, Breast
Abstract: Breast cancer's high survival rate has led to increased interest in the quality of life after treatment, particularly regarding the aesthetic outcome. Currently used aesthetic assessment methods are subjective, which makes reproducibility and impartiality impossible. To create an objective method capable of being adopted as the gold standard, it is fundamental to detect, in a completely automatic manner, keypoints in photographs of women's torsos after breast cancer surgery. This paper proposes a deep and a hybrid model to detect keypoints with high accuracy. Our methods are tested on two datasets, one composed of images with a clean and consistent background and a second one containing photographs taken under poor lighting and background conditions. The proposed methods improve the detection of endpoints, nipples and breast contour on both datasets in terms of average error distance when compared with the current state-of-the-art.
|
|
10:30-11:30, Paper WeP4O-01.8 | Add to My Program |
Breast Density Quantification Using Weakly Annotated Dataset |
Tardy, Mickael | LS2N, Ecole Centrale De Nantes, Hera-MI SAS |
Scheffer, Bruno | Hera-MI SAS, Institut De Cancerologie De l'Ouest, Nantes |
Mateus, Diana | Centrale Nantes |
Keywords: Breast, X-ray imaging, Quantification and estimation
Abstract: Breast density is known to be an efficient biomarker for cancer risk, and is of particular interest in early breast cancer detection, when masses are not yet visible. The quantification of breast density is difficult due to limitations of mammography imaging, as well as to ambiguities in defining the limits of the relevant regions. Though inherently a regression task, breast density quantification has typically been approached as a rough classification problem. In this paper, we model the problem of breast density evaluation as an image-wise regression task that seeks to quantify the percentage of fibroglandular tissue. We propose a deep learning method offering a clinically acceptable estimate with low requirements on expert annotations. We also discuss the use of the X-ray acquisition parameters as additional input to the neural network. Our best performing model yields an optimistic mean absolute error of around 6.0% of breast density.
|
|
10:30-11:30, Paper WeP4O-01.9 | Add to My Program |
Multi-Level Batch Normalization in Deep Networks for Invasive Ductal Carcinoma Cell Discrimination in Histopathology Images |
Perdigon Romero, Francisco | Federal University of Amazonas |
Tang, An | Radiology, Hopital Saint-Luc, Universite De Montreal |
Kadoury, Samuel | Polytechnique Montreal |
Keywords: Computer-aided detection and diagnosis (CAD), Histopathology imaging (e.g. whole slide imaging), Breast
Abstract: Breast cancer is the most diagnosed cancer and the most predominant cause of death in women worldwide. Imaging techniques such as breast cancer pathology help in the diagnosis and monitoring of the disease. However, identification of malignant cells can be challenging given the high heterogeneity in tissue absorption of staining agents. In this work, we present a novel approach for Invasive Ductal Carcinoma (IDC) cell discrimination in histopathology slides. We propose a model derived from the Inception architecture, with a multi-level batch normalization module between each convolutional step. This module was used as a base block for feature extraction in a CNN architecture. On the open IDC dataset we obtained a balanced accuracy of 0.89 and an F1 score of 0.90, thus surpassing recent state-of-the-art classification algorithms tested on this public dataset.
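For readers unfamiliar with the normalization step this abstract builds on, here is a minimal NumPy sketch of the batch normalization forward pass that such a module would apply between convolutional steps: each channel is standardized over the batch and spatial dimensions. This is a generic illustration of the technique, not the authors' module; the shapes and function name are assumptions.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Per-channel batch normalization over a batch of feature maps.

    x : array of shape (batch, height, width, channels)
    Returns features with zero mean and unit variance per channel,
    as would be inserted between successive convolutional steps.
    """
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(5.0, 2.0, size=(4, 8, 8, 3))
y = batch_norm(x)
```

A trainable version would additionally scale and shift the normalized features with learned per-channel parameters.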
|
|
WeP4O-02 Poster Session, Foyer |
Add to My Program |
Image Synthesis |
|
|
|
10:30-11:30, Paper WeP4O-02.1 | Add to My Program |
Unpaired MR to CT Synthesis with Explicit Structural Constrained Adversarial Learning |
Ge, Yunhao | Shanghai Jiao Tong University |
Wei, Dongming | Shanghai Jiaotong University |
Xue, Zhong | United Imaging Intelligence Inc |
Wang, Qian | Shanghai Jiao Tong University |
Zhou, Xiang | Siemens |
Zhan, Yiqiang | Siemens Healthcare |
Liao, Shu | United Imaging Intelligence |
Keywords: Image synthesis, Machine learning, Whole-body
Abstract: In medical imaging applications such as PET-MR attenuation correction and MRI-guided radiation therapy, synthesizing CT images from MR images plays an important role in obtaining tissue density properties. Recently, deep-learning-based image synthesis techniques have attracted much attention because of their superior ability for image mapping and their faster speed compared to traditional models. However, most current deep-learning-based synthesis methods require large amounts of paired data, which greatly limits their usage, as in some situations a strictly registered image pair is infeasible to obtain. Efforts have been made to relax this restriction, and the cycle-consistent adversarial network (Cycle-GAN) is an example that synthesizes medical images with unpaired training data. In Cycle-GAN, the cycle consistency loss is employed as an indirect structural similarity metric between the input and the synthesized images and often leads to mismatched anatomical structures in the synthesized results. To overcome this shortcoming, we propose to (1) use a mutual information loss to directly enforce the structural similarity between the input MR and the synthesized CT image and (2) incorporate shape consistency information to improve the synthesis result. Experimental results demonstrate that the proposed method achieves better performance both qualitatively and quantitatively for whole-body MR to CT synthesis with unpaired training images compared to Cycle-GAN.
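The mutual information term this abstract relies on can be illustrated with a simple histogram estimate: two structurally corresponding images share high mutual information even when their intensity mappings differ (as between MR and CT). This is a generic sketch of the metric, not the paper's differentiable loss; the bin count and example images are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information between the
    intensities of two images; higher values indicate stronger
    structural correspondence (e.g. between an input MR and a
    synthesized CT)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
mr = rng.random((64, 64))
aligned = 1.0 - mr           # same structure, inverted contrast
shuffled = rng.random((64, 64))  # unrelated image
```

The aligned pair scores much higher than the unrelated pair, which is exactly the property a structural-similarity loss exploits.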
|
|
10:30-11:30, Paper WeP4O-02.2 | Add to My Program |
Improving Skin Lesion Segmentation Via Stacked Adversarial Learning |
Bi, Lei | University of Sydney |
Feng, Dagan | The University of Sydney |
Fulham, Michael | Royal Prince Alfred Hospital |
Kim, Jinman | University of Sydney |
Keywords: Skin, Image synthesis, Other-modality
Abstract: Segmentation of skin lesions is an essential step in computer-aided diagnosis (CAD) for automated melanoma diagnosis. Recently, segmentation methods based on fully convolutional networks (FCNs) have achieved great success for general images. This success is primarily related to FCNs leveraging large labelled datasets to learn features that correspond to the shallow appearance and the deep semantics of the images. Such large labelled datasets, however, are usually not available for medical images. Researchers have therefore used specific cost functions and post-processing algorithms to refine the coarse boundaries of the results to improve FCN performance in skin lesion segmentation. These methods are heavily reliant on tuning many parameters and post-processing techniques. In this paper, we adopt generative adversarial networks (GANs) given their inherent ability to produce consistent and realistic image features by using deep neural networks and adversarial learning concepts. We build upon the GAN with a novel stacked adversarial learning architecture such that skin lesion features can be learned, iteratively, in a class-specific manner. The outputs from our method are then added to the existing FCN training data, thus increasing the overall feature diversity. We evaluated our method on the ISIC 2017 skin lesion segmentation challenge dataset; we show that it is more accurate and robust when compared to existing state-of-the-art skin lesion segmentation methods.
|
|
10:30-11:30, Paper WeP4O-02.3 | Add to My Program |
Refacing: Reconstructing Anonymized Facial Features Using GANs |
Abramian, David | Linköping University |
Eklund, Anders | Linköping University |
Keywords: Image synthesis, Magnetic resonance imaging (MRI)
Abstract: Anonymization of medical images is necessary for protecting the identity of the test subjects, and is therefore an essential step in data sharing. However, recent developments in deep learning may raise the bar on the amount of distortion that needs to be applied to guarantee anonymity. To test such possibilities, we have applied the novel CycleGAN unsupervised image-to-image translation framework on sagittal slices of T1 MR images, in order to reconstruct facial features from anonymized data. We applied the CycleGAN framework on both face-blurred and face-removed images. Our results show that face blurring may not provide adequate protection against malicious attempts at identifying the subjects, while face removal provides more robust anonymization, but is still partially reversible.
|
|
10:30-11:30, Paper WeP4O-02.4 | Add to My Program |
Pseudo-CT Generation for MRI-Only Radiotherapy: Comparative Study between a Generative Adversarial Network, a U-Net Network, a Patch-Based, and an Atlas-Based Method |
Largent, Axel | Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099 |
Nunes, Jean Claude | Université De Rennes |
Saint-Jalmes, Hervé | Université De Rennes 1, LTSI, U1099 INSERM |
Baxter, John | Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099 |
Greer, Peter | School of Mathematical and Physical Sciences, University of Newc |
Dowling, Jason | CSIRO |
Acosta, Oscar | Univ. of Rennes 1 |
De Crevoisier, Renaud | INSERM, U1099, Rennes, F-35000, France - Université De Rennes 1, |
Keywords: Image synthesis, Machine learning, Magnetic resonance imaging (MRI)
Abstract: As new radiotherapy treatment systems using MRI (rather than traditional CT) are being developed, the accurate calculation of dose maps from MR imaging has become an increasing concern. MRI provides good soft-tissue contrast but, unlike CT, lacks the electron density information necessary for dose calculation. In this paper, we propose a generative adversarial network (GAN) using a perceptual loss to generate pseudo-CTs for prostate MRI dose calculation. This network was evaluated and compared to a U-Net network, a patch-based method (PBM) and an atlas-based method (ABM). The influence of the perceptual loss was assessed by comparing this network to a GAN using an L2 loss. GANs and U-Nets perform rather similarly, with slightly better results for GANs. The proposed GAN outperformed the PBM by 9% and the ABM by 13% in terms of MAE over the whole pelvis. This method could be used for online dose calculation in MRI-only radiotherapy.
|
|
10:30-11:30, Paper WeP4O-02.5 | Add to My Program |
Radiomic Synthesis Using Deep Convolutional Neural Networks |
Parekh, Vishwa | Johns Hopkins University |
Jacobs, Michael A. | The Johns Hopkins University School of Medicine |
Keywords: Visualization, Image synthesis, image filtering (e.g. mathematical morphology, wavelets,...)
Abstract: Radiomics is a rapidly growing field that deals with modeling the textural information present in the different tissues of interest for clinical decision support. However, the process of generating radiomic images is computationally very expensive and can take substantial time per radiological image for certain higher-order features, such as the gray-level co-occurrence matrix (GLCM), even with high-end GPUs. To that end, we developed RadSynth, a deep convolutional neural network (CNN) model, to efficiently generate radiomic images. RadSynth was tested on a breast cancer patient cohort of twenty-four patients (ten benign, ten malignant and four normal) for computation of GLCM entropy images from post-contrast DCE-MRI. RadSynth produced excellent synthetic entropy images compared to traditional GLCM entropy images. The average percentage difference and correlation between the two techniques were 0.07±0.06 and 0.97, respectively. In conclusion, RadSynth presents a powerful new tool for fast computation and visualization of the textural information present in radiological images.
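To make the target feature concrete, here is a minimal NumPy sketch of GLCM entropy for one pixel-pair configuration (distance 1, angle 0), the kind of higher-order texture feature whose computation RadSynth is designed to accelerate. The quantization scheme and level count are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def glcm_entropy(img, levels=8):
    """Gray-level co-occurrence entropy for horizontally adjacent
    pixel pairs (distance 1, angle 0) of an image with values in
    [0, 1]."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    # accumulate co-occurrence counts of left/right neighbor pairs
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

rng = np.random.default_rng(0)
noisy = rng.random((32, 32))     # heterogeneous texture
flat = np.full((32, 32), 0.5)    # homogeneous texture
```

Heterogeneous texture spreads mass over many co-occurrence cells and yields high entropy, while a uniform region concentrates it in a single cell and yields zero, which is why entropy maps highlight textural variation.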
|
|
10:30-11:30, Paper WeP4O-02.6 | Add to My Program |
Multi-Focus Ultrasound Imaging Using Generative Adversarial Networks |
Goudarzi, Sobhan | Concordia University |
Asif, Amir | Concordia University |
Rivaz, Hassan | Concordia University |
Keywords: Ultrasound, Machine learning, Tissue
Abstract: An ultrasound (US) beam can be focused at multiple locations to increase the lateral resolution of the resulting images. However, this improvement in resolution comes at the expense of a loss in frame rate, which is essential in many applications such as imaging moving anatomy. Herein, we propose a novel method based on a Generative Adversarial Network (GAN) for achieving multi-focus line-per-line US imaging without a reduction in frame rate. Results on simulated phantoms as well as real phantom experiments show that the proposed deep learning framework is able to substantially improve the resolution without sacrificing the frame rate.
|
|
WeP4O-03 Poster Session, Foyer |
Add to My Program |
Image Based Surgery and Treatment |
|
|
|
10:30-11:30, Paper WeP4O-03.1 | Add to My Program |
Improving Catheter Segmentation & Localization in 3D Cardiac Ultrasound Using Direction-Fused FCN |
Yang, Hongxu | Technische Universiteit Eindhoven |
Shan, Caifeng | Philips Research |
Kolen, Alex | Philips Research |
de With, Peter | Eindhoven University of Technology |
Keywords: Ultrasound, Heart, Image-guided treatment
Abstract: Fast and accurate catheter detection in cardiac catheterization using harmless 3D ultrasound (US) can improve the efficiency and outcome of the intervention. However, the low image quality of US requires extra training for sonographers to localize the catheter. In this paper, we propose a catheter detection method based on a pre-trained VGG network, which exploits 3D information through re-organized cross-sections to segment the catheter with a shared fully convolutional network (FCN), called a Direction-Fused FCN (DF-FCN). Based on the segmented image of the DF-FCN, the catheter can be localized by model fitting. Our experiments show that the proposed method can successfully detect an ablation catheter in a challenging ex-vivo 3D US dataset collected on a porcine heart. Extensive analysis shows that the proposed method achieves a Dice score of 57.7%, which offers at least an 11.8% improvement compared to state-of-the-art instrument detection methods. Due to the improved segmentation performance of the DF-FCN, the catheter can be localized with an error of only 1.4 mm.
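For reference, the Dice score reported above (57.7%) is the standard overlap metric between a predicted and a reference binary mask; a minimal sketch (generic metric code, not the authors' evaluation script):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between a predicted and a reference binary
    segmentation mask: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16-pixel square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # shifted square
d = dice_score(a, b)  # overlap is 3x3 = 9 pixels → 2*9/32 = 0.5625
```

Because the denominator counts both masks, Dice penalizes over- and under-segmentation symmetrically, which makes it a common choice for thin structures such as catheters.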
|
|
10:30-11:30, Paper WeP4O-03.2 | Add to My Program |
Deep Learning Biopsy Marking of Early Neoplasia in Barrett's Esophagus by Combining WLE and BLI Modalities |
van der Putten, Joost | Eindhoven University of Technology |
Wildeboer, Rogier | Eindhoven University of Technology |
de Groof, Jeroen | Amsterdam University Medical Center |
van Sloun, Ruud | Eindhoven University of Technology |
Struyvenberg, Maarten | Amsterdam University Medical Center |
van der Sommen, Fons | Eindhoven University of Technology |
Zinger, Svitlana | Eindhoven University of Technology |
Curvers, Wouter | Amsterdam University Medical Center |
Schoon, Erik | Catharina Hospital |
Bergman, Jacques | Amsterdam University Medical Center |
de With, Peter | Eindhoven University of Technology |
Keywords: Image-guided treatment, Multi-modality fusion, Endoscopy
Abstract: Esophageal cancer is the fastest rising type of cancer in the western world. Moreover, early neoplasia in Barrett's esophagus (BE) is difficult for endoscopists to detect, and research has shown it is similarly complicated for Computer-Aided Detection (CAD) algorithms. For these reasons, further development of CAD algorithms for BE is essential for the wellbeing of patients. In this work, we propose a patch-based deep learning algorithm for early neoplasia in BE, utilizing state-of-the-art deep learning techniques on a new prospective data set. The new algorithm not only yields a high detection score but also, for the first time, identifies the ideal biopsy location. We define specific novel metrics, such as the sweet-spot and soft-spot flags, to obtain a well-defined computation of the biopsy location. Furthermore, we show that combining white light and blue laser imaging improves localization results by 8%.
|
|
10:30-11:30, Paper WeP4O-03.3 | Add to My Program |
Fast Registration for Liver Motion Compensation in Ultrasound-Guided Navigation |
Wei, Wei | University of Magdeburg, Germany |
Xu, Haishan | Sir Run Run Shaw Hospital, School of Medicine, Zhejiang Universi |
Alpers, Julian | University of Magdeburg, Germany |
Zhang, Tianbao | University of Magdeburg, Germany |
Rak, Marko | University of Magdeburg |
Wang, Lei | University of Magdeburg, Germany |
Hansen, Christian | Otto-Von-Guericke-University |
Keywords: Surgical guidance/navigation, Liver, Ultrasound
Abstract: In recent years, image-guided thermal ablation has become an established treatment method for cancer patients, including support through navigation systems. One of the most critical challenges in these systems is the registration between the intraoperative images and the preoperative volume. Motion secondary to inspiration makes registration even more difficult. In this work, we propose a coarse-to-fine fast patient registration technique to solve the problem of motion compensation. In contrast to other state-of-the-art methods, we focus on improving the convergence range of the registration. To this end, we use a deep learning 2D U-Net framework to extract the vessels and liver borders from intraoperative ultrasound images and employ the segmentation results as regions of interest in the registration. After an initial 3D-3D registration during breath hold, subsequent motion compensation is achieved using a 2D-3D registration. Our approach yields a convergence rate of over 70% with an accuracy of 1.97 ± 1.07 mm in terms of target registration error. The 2D-3D registration is GPU-accelerated with a time cost of less than 200 ms.
|
|
10:30-11:30, Paper WeP4O-03.4 | Add to My Program |
Simulation of a Modified Multielement Random Phased Array for Image Guidance and Therapy |
Zubair, Muhammad | Imperial College London |
Dickinson, Robert | Imperial College London |
Keywords: Ultrasound, Image-guided treatment
Abstract: Random phased arrays have been shown to produce single and multiple foci to ablate tumors deep in the body without damaging the intervening tissues. Simulations are performed to optimize the element distribution of a random phased array to maximize its performance by minimizing grating and side lobes, increasing intensity at the focus and maximizing the steering range. Field distributions are calculated in both the lateral and axial planes. Multiple simultaneous foci are generated and steered in a 3D volume. It was observed that the therapeutic operating field extends up to 4 cm in the lateral and 6 cm in the axial direction. Grating lobes with magnitude less than 15% of the main lobe appear when a single focus is steered more than 15 mm away from the axis. Improvements in terms of peak pressure amplitude and reduced side lobes have been observed. Synthetic aperture beamforming was used to reconstruct images of interest to demonstrate that the modified array is suitable for image guidance, having good spatial resolution near the geometric focus.
|
|
10:30-11:30, Paper WeP4O-03.5 | Add to My Program |
A Deep-Learning-Based Method for the Localization of Cochlear Implant Electrodes in CT Images |
Chi, Yujie | Tsinghua University |
Wang, Jianing | Vanderbilt University |
Zhao, Yiyuan | Siemens Healthineers |
Noble, Jack | Vanderbilt University |
Dawant, Benoit | Vanderbilt University |
Keywords: Machine learning, Inner ear, Computed tomography (CT)
Abstract: Accurate localization of contacts on cochlear implant (CI) electrode arrays (EAs) in post-implantation CTs (Post-CTs) of CI recipients is important for assisting audiologists in customizing CI settings. We propose a two-step method to localize CI contacts in Post-CTs when the resolution of the images permits distinguishing individual contacts. Given a Post-CT, we first use conditional generative adversarial networks (cGANs) to generate an image in which voxel values are proportional to the distance to the nearest candidate contact. We refer to this image as the likelihood map. This is followed by a post-processing method applied to the likelihood map to estimate the accurate location of each individual contact. The method has been evaluated on 30 Post-CTs implanted with 17-contact EAs manufactured by Advanced Bionics Corporation. It localized all contacts in 29 cases and achieved a median localization error of 0.12 mm for the successful cases, which is comparable to what is achieved with a state-of-the-art method that requires sets of carefully designed EA-specific features and parameters.
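A toy 2D sketch of the kind of target the cGAN above is trained to produce: a map whose values depend on the distance to the nearest candidate contact. Here a Gaussian decay, the grid size, and the contact positions are all illustrative assumptions standing in for the learned output.

```python
import numpy as np

def likelihood_map(shape, contacts, sigma=2.0):
    """Build a map whose voxel values decay with the distance to
    the nearest candidate electrode contact.

    shape    : grid shape, e.g. (rows, cols)
    contacts : list of contact coordinates
    """
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1).astype(float)
    # distance from every voxel to its nearest contact
    d = np.min([np.linalg.norm(grid - np.asarray(c), axis=-1)
                for c in contacts], axis=0)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

lm = likelihood_map((16, 16), contacts=[(4, 4), (12, 10)])
```

Post-processing can then recover each contact as a local maximum of such a map, which mirrors the two-step structure the abstract describes.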
|
|
10:30-11:30, Paper WeP4O-03.6 | Add to My Program |
Surgical Illuminant Design for Enhancement of Organ Microstructure |
Kurabuchi, Yoko | Chiba University |
Nakano, Kazuya | Chiba University |
Ohnishi, Takashi | Chiba University |
Nakaguchi, Toshiya | Chiba University |
Haneishi, Hideaki | Chiba University |
Keywords: Visualization, Abdomen, Optimization method
Abstract: In the medical field, visual diagnosis is one of the most important means of evaluation. Observing tissue structure is effective for improving the precision of surgery. We focused on an emphatic illuminant that brings out the fine structure of micro blood vessels. In this paper, we simulated the illuminant and assessed it by subjective evaluation. In the evaluation experiment, to compare two illuminant conditions, a conventional and the emphatic illuminant, 14 LEDs fixed to the light unit were spectrally adjusted to reproduce the two illuminants. We used a rat cecum as the target for observing the structure of micro blood vessels. The effectiveness of the emphatic illuminant was confirmed by the ratio of the detected blood-vessel region to the ground truth.
|
|
10:30-11:30, Paper WeP4O-03.7 | Add to My Program |
Prone to Supine Surface Based Registration Workflow for Breast Tumor Localization in Surgical Planning |
Alfano, Felicia | Universidad Politecnica De Madrid |
Ortuño, Juan Enrique | CIBER-BBN, Universidad Politécnica De Madrid |
Herrero Conde, Mercedes | Unidad De Mama, Hospital De Madrid Sanchinarro, Madrid, España |
Bueno Zamora, Oscar | Instituto De Investigación Sanitaria Gregorio Marañón |
Lizarraga, Santiago | Instituto De Investigación Sanitaria Gregorio Marañón |
Santos, Andres | Universidad Politecnica Madrid |
Pascau, Javier | Hospital General Universitario Gregorio Marañón |
Ledesma-Carbayo, Maria J. | Universidad Politécnica De Madrid |
Keywords: Breast, Surgical guidance/navigation, Image-guided treatment
Abstract: Breast cancer is the most frequent cancer in women worldwide. Screening programs and imaging improvements have increased the detection of clinically occult non-palpable lesions requiring preoperative localization. Image-guided wire localization (WGL) is the current standard of care for the excision of non-palpable carcinomas during breast conserving surgery (BCS). Due to the current limitations of intraoperative tumor localization approaches, the integration of information from multimodal imaging may be especially relevant in surgical planning. This work presents a workflow to align the prone preoperative image with the surgical physical data in order to determine the correspondence between the tumor identified in the preoperative image and the final position of the tumor in the surgical position. The evaluation of the methodology was carried out on 18 cases, achieving average localization errors of 10.40 mm and 9.84 mm.
|
|
WeP4O-04 Poster Session, Foyer |
Add to My Program |
Musculoskeletal Image Analysis |
|
|
|
10:30-11:30, Paper WeP4O-04.1 | Add to My Program |
Automatic Detection of the Nasal Cavities and Paranasal Sinuses Using Deep Neural Networks |
Oyarzun Laura, Cristina | Fraunhofer IGD |
Hofmann, Patrick | Fraunhofer IGD |
Drechsler, Klaus | Aachen University of Applied Sciences |
Wesarg, Stefan | Fraunhofer IGD |
Keywords: Pattern recognition and classification, Machine learning, Bone
Abstract: The nasal cavity and paranasal sinuses present large interpatient variability. Additional circumstances, such as concha bullosa or nasal septum deviations, complicate their segmentation. As in other areas of the body, a prior multi-structure detection step could facilitate the segmentation task. In this paper, an approach is proposed to individually detect all sinuses and the nasal cavity. For a better delimitation of their borders, the use of an irregular polyhedron is proposed. For accurate prediction, the Darknet-19 deep neural network is used, which, combined with the You Only Look Once (YOLO) method, has shown very promising results in other fields of computer vision. 57 CT scans were available, of which 85% were used for training and the remaining 15% for validation.
|
|
10:30-11:30, Paper WeP4O-04.2 | Add to My Program |
Residual Attention Based Network for Hand Bone Age Assessment |
Wu, Eric | Cornell University |
Kong, Bin | University of North Carolina at Charlotte |
Wang, Xin | CuraCloud Corporation |
Bai, Junjie | University of Iowa |
Lu, Yi | CuraCloud Corporation |
Gao, Feng | CuraCloud Corporation |
Zhang, Shaoting | UNC Charlotte |
Cao, Kunlin | GE Global Research |
Song, Qi | General Electric |
Lyu, Siwei | University at Albany, State University of New York |
Yin, Youbing | CuraCloud Corporation |
Keywords: Bone, X-ray imaging, Machine learning
Abstract: Computerized automatic methods have been employed to boost the productivity as well as the objectivity of hand bone age assessment. These approaches make predictions from whole X-ray images, which include other objects that may introduce distractions. Instead, our framework is inspired by the clinical workflow (Tanner-Whitehouse) of hand bone age assessment, which focuses on the key components of the hand. The proposed framework is composed of two components: a Mask R-CNN subnet for pixelwise hand segmentation and a residual attention subnet for hand bone age assessment. The Mask R-CNN subnet segments the hands from X-ray images to avoid distraction from other objects (e.g., X-ray tags). The hierarchical attention components of the residual attention subnet force our network to focus on the key components of the X-ray images and generate the final predictions as well as the associated visual support, similar to the assessment procedure of clinicians. We evaluate the performance of the proposed pipeline on the RSNA pediatric bone age dataset and the results demonstrate its superiority over previous methods.
|
|
10:30-11:30, Paper WeP4O-04.3 | Add to My Program |
A Fully Automatic 3D Reconstruction of Scoliotic Spine from Biplanar Radiographs in a Suspension Framework |
Bakhous, Christine | Ecole De Technologie Supérieure |
Vazquez, Carlos | École De Technologie Supérieure |
Cresson, Thierry | Ecole De Technologie Superieure |
Parent, Stefan | University of Montreal |
de Guise, Jacques A. | École De Technologie Supérieure |
Keywords: Machine learning, Spine, X-ray imaging
Abstract: Spine flexibility is essential information for the surgical planning of scoliotic patients. It can be computed using a suspension framework in which the patient is raised through a harness and the spine is stretched by the patient's own weight. To this end, two 3D reconstructions of the spine are obtained from biplanar radiographs in the standing and suspension positions, and the flexibility is computed by comparing the clinical parameters of both positions. Automating this process requires automating the 3D reconstruction. In contrast to previous works, which usually deal with 3D reconstruction in the standing position, this paper focuses on 3D reconstruction in the suspension position by including the standing-position 3D reconstruction as prior information. The proposed method was validated on 57 patients with adolescent idiopathic scoliosis and showed improved vertebra positions with respect to previous work. The mean (std) norm of the 3D error decreased from 8.31 (9.02) mm to 3.89 (3.88) mm.
|
|
10:30-11:30, Paper WeP4O-04.4 | Add to My Program |
Deep Learning with Anatomical Priors: Imitating Enhanced Autoencoders in Latent Space for Improved Pelvic Bone Segmentation in MRI |
Pham, Duc Duy | University of Duisburg-Essen |
Dovletov, Gurbandurdy | University of Duisburg-Essen |
Warwas, Sebastian | University Hospital Essen, University of Duisburg-Essen |
Landgraeber, Stefan | University Hospital Essen, University of Duisburg-Essen |
Jäger, Marcus | University Hospital Essen, University of Duisburg-Essen |
Pauli, Josef | Duisburg-Essen, Intelligente Systeme |
Keywords: Image segmentation, Machine learning, Bone
Abstract: We propose a 2D encoder-decoder based deep learning architecture for semantic segmentation that incorporates anatomical priors by imitating the encoder component of an autoencoder in latent space. The autoencoder is additionally enhanced by means of hierarchical features extracted by a U-Net module. Our suggested architecture is trained in an end-to-end manner and is evaluated on the example of pelvic bone segmentation in MRI. A comparison to the standard U-Net architecture shows promising improvements.
|
|
10:30-11:30, Paper WeP4O-04.5 | Add to My Program |
Human Knee Phantom for Spectral CT: Validation of a Material Decomposition Algorithm |
Bussod, Suzanne | Univ. Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM Sai |
Perez Juste Abascal, Juan Felipe | Univ. Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Sai |
Ducros, Nicolas | INSA Lyon, CREATIS |
Olivier, Cécile | CREATIS |
Si-Mohamed, Salim | Univ. Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM |
Douek, Philippe | CREATIS-LRMN, Hospices Civils De Lyon |
Chappard, Christine | Inserm |
Peyrin, Francoise | Université De Lyon, CNRS UMR 5220, INSERM U1206, INSA Lyon |
Keywords: Computed tomography (CT), Bone, Image acquisition
Abstract: Osteoarthritis is the most common degenerative joint disease. Spectral computed tomography generates energy-resolved data which enable identification of materials within the sample and offer improved soft tissue contrast compared to conventional X-ray CT. In this work, we propose a realistic numerical phantom of a knee to assess the feasibility of spectral CT for osteoarthritis. The phantom is created from experimental synchrotron CT mono-energetic images. After simulating spectral CT data, we perform material decomposition using a Gauss-Newton method for different noise levels. Then, we reconstruct virtual mono-energetic images. We compare the decompositions and mono-energetic images with the phantom using the mean-squared error. When performing material decomposition and tomographic reconstruction, we obtain less than 1% error for both, using noisy data. Moreover, cartilage is visible to the naked eye in the virtual mono-energetic images. This phantom has great potential to assess the feasibility and current limitations of spectral CT for characterizing knee osteoarthritis.
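The Gauss-Newton iteration invoked above can be sketched on a one-parameter toy problem (a single exponential-attenuation model with hypothetical values, far simpler than the paper's spectral forward model):

```python
import numpy as np

def gauss_newton(t, y, a0, n_iter=30):
    """Fit y ~ exp(-a*t) by minimizing ||r(a)||^2 with r(a) = y - exp(-a*t)."""
    a = a0
    for _ in range(n_iter):
        f = np.exp(-a * t)
        r = y - f                   # residual vector
        J = t * f                   # Jacobian dr/da (single column)
        a = a - (J @ r) / (J @ J)   # Gauss-Newton update: (J^T J)^-1 J^T r
    return a

t = np.linspace(0.1, 5.0, 50)
y = np.exp(-0.7 * t)                # noiseless toy measurements, true a = 0.7
a_hat = gauss_newton(t, y, a0=0.5)
```

In the multi-material case the scalar update generalizes to solving the normal equations with a full Jacobian per pixel.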
|
|
10:30-11:30, Paper WeP4O-04.6 | Add to My Program |
Detection and Identification of Lower-Limb Bones in Biplanar X-Ray Images with Arbitrary Field of View and Various Patient Orientations |
Olory Agomma, Roseline | École De Technologie Supérieure, Laboratoire De Recherche En Ima |
Vazquez, Carlos | École De Technologie Supérieure |
Cresson, Thierry | Ecole De Technologie Superieure |
de Guise, Jacques A. | École De Technologie Supérieure |
Keywords: Image segmentation, Bone, X-ray imaging
Abstract: Correctly detecting and identifying bones in radiographic images is the first stage of every orthopaedic procedure. This apparently simple task is mainly performed by human operators before more complex operations are carried out. Automatic detection and identification of bones in radiographic images, especially when the field of view and the position of the patient are not known a priori, remains a challenging task. In this paper, lower-limb bones are automatically detected and identified in biplanar X-ray images with varying fields of view and two main orientations of the patient with respect to the imaging device. The proposed method uses data augmentation to improve the training of a deep learning method to identify the lower-limb bones. We used 30 biplanar radiographs with varying fields of view to validate the proposed method. We obtained a global accuracy (mean±std) of 96.75±0.01% and a Dice coefficient of 93.85±0.02%, proving the usefulness of the proposed method.
|
|
10:30-11:30, Paper WeP4O-04.7 | Add to My Program |
End-To-End Vertebra Localization and Level Detection in Weakly Labelled 3D Spinal MR Using Cascaded Neural Networks
van Sonsbeek, Tom | University of British Columbia |
Danaei, Pardiss | University of British Columbia |
Behnami, Delaram | University of British Columbia |
Jafari, Mohammad Hossein | University of British Columbia |
Asgharzadeh, Parisa | UBC |
Rohling, Robert | University of British Columbia |
Abolmaesumi, Purang | UBC |
Keywords: Spine, Computer-aided detection and diagnosis (CAD), Machine learning
Abstract: Localization and identification of vertebrae in 3D MR volumes is a crucial first step for diagnosis and management of spinal conditions. Automating this process can save radiologists significant time and clicks. In this paper, we propose a novel learning-based approach consisting of two cascaded networks that perform simultaneous identification and localization of vertebrae. The first network performs slice-based level detection on full 3D sagittal volumes using an adaptive loss function that adjusts the weights of its loss terms during training, and outputs estimated center slices for each vertebra. The sagittal volume is then divided into sub-volumes, each containing a single vertebra. These sub-volumes are fed into the second network for binary classification and localization of the vertebrae. Our method only requires manually performed centroid annotation; a statistical model then provides an approximation of the volumetric segmentation for ground truth data. With this method, a vertebra identification rate of 82% was achieved.
|
|
10:30-11:30, Paper WeP4O-04.8 | Add to My Program |
Automatic Radiographic Quantification of Joint Space Narrowing Progression in Rheumatoid Arthritis Using POC
Ou, Yafei | Hokkaido University |
Ambalathankandy, Prasoon | Hokkaido University |
Shimada, Takeshi | Hokkaido University |
Kamishima, Tamotsu | Hokkaido University |
Ikebe, Masayuki | Hokkaido University |
Keywords: X-ray imaging, Bone, Image segmentation
Abstract: This paper is an application of image processing techniques for computer-aided diagnosis of rheumatoid arthritis (RA). Accurately measuring the progression of joint space narrowing (JSN) is crucial during medical treatment and for imaging biomarkers in clinical trials. In this paper, we analyze sequential radiographic images of the hands of patients who have rheumatoid arthritis using image processing techniques. Phase-only correlation (POC) is used to detect the progression of JSN between images. A new image processing algorithm is proposed to segment joint images so as to eliminate the mutual interference when measuring the movement of the upper and lower bones by POC. We found that the texture features on bones greatly affect the accuracy of POC. A median filter is used to eliminate the effect of texture, and excellent results are obtained in practice. Additionally, our method measures the progression of JSN accurately, which can be beneficial for doctors in identifying disease stages.
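Phase-only correlation estimates displacement from the phase of the normalized cross-power spectrum; a minimal 1D numpy sketch of the principle (integer circular shifts only, much simpler than the sub-pixel joint measurements in the paper):

```python
import numpy as np

def poc_shift(f, g):
    """Estimate the circular shift between two 1D signals via phase-only correlation."""
    F, G = np.fft.fft(f), np.fft.fft(g)
    R = np.conj(F) * G
    R /= np.abs(R) + 1e-12            # discard magnitude, keep phase only
    poc = np.real(np.fft.ifft(R))     # sharp delta-like peak at the shift
    return int(np.argmax(poc))

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
shifted = np.roll(x, 5)               # known shift of 5 samples
```

For radiographs the same idea applies with 2D FFTs, and sub-pixel accuracy is obtained by fitting the shape of the correlation peak.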
|
|
10:30-11:30, Paper WeP4O-04.9 | Add to My Program |
Masseter Muscle Segmentation from Cone-Beam CT Images Using Generative Adversarial Network |
Zhang, Yungeng | Peking University |
Pei, Yuru | Peking University |
Qin, Haifang | Peking University |
Guo, Yuke | Luoyang Institute of Science and Technology |
Ma, Gengyu | USens Inc |
Xu, Tianmin | Peking University |
Zha, Hongbin | Peking University |
Keywords: Computed tomography (CT), Muscle, Image segmentation
Abstract: Masseter segmentation from noisy and blurry cone-beam CT (CBCT) images is a challenging issue considering the device-specific image artefacts. In this paper, we propose a novel approach for noise reduction and masseter muscle segmentation from CBCT images using a generative adversarial network (GAN)-based framework. We adapt the regression model of muscle segmentation from traditional CT (TCT) images to the domain of CBCT images without using prior paired images. The proposed framework is built upon the unsupervised CycleGAN. We mainly address the shape distortion problem in the unsupervised domain adaptation framework. A structure-aware constraint is introduced to guarantee shape preservation in the feature embedding and image generation processes. We explicitly define a joint embedding space of both the TCT and CBCT images to exploit the intrinsic semantic representation, which is key to the intra- and cross-domain image generation and muscle segmentation. The proposed approach is applied to clinically captured CBCT images. We demonstrate both the effectiveness and efficiency of the proposed approach in noise reduction and muscle segmentation tasks compared with the state of the art.
|
|
WeP4O-05 Poster Session, Foyer |
Add to My Program |
Lung Image Analysis - Poster |
|
|
|
10:30-11:30, Paper WeP4O-05.1 | Add to My Program |
Region Proposal Networks with Contextual Selective Attention for Real-Time Organ Detection |
Mansoor, Awais | Children's National Health System |
Porras, Antonio R. | Children's National Medical Center |
Linguraru, Marius George | Children's National Health System |
Keywords: Machine learning, Lung, X-ray imaging
Abstract: State-of-the-art methods for object detection use region proposal networks (RPNs) to hypothesize object locations. An RPN is a fully convolutional network that simultaneously predicts object bounding boxes and objectness scores at each location in the image. These networks provide a full-image convolutional feature map with a set of object bounding box proposals, which are used by subsequent detection architectures. Unlike natural images, for which RPN algorithms were originally designed, most medical images are acquired following standard protocols, so organs in the image are typically at a similar location and possess similar geometrical characteristics (e.g., scale, aspect ratio). Therefore, medical image acquisition protocols hold critical localization and geometric information that can be incorporated into the convolutional feature map of the RPN for faster and more accurate organ detection. This paper presents a novel attention mechanism for proposal networks for the detection of organs in medical images by incorporating imaging protocol information into the RPN. Our novel selective attention approach (i) effectively shrinks the search space inside the feature map, (ii) appends useful localization information to the hypothesized proposal so that the detection architecture learns where to look for each organ, and (iii) modifies the pyramid of regression references in the RPN by incorporating organ- and modality-specific information, which results in additional time reduction. We evaluated our proposed framework on a dataset of 668 chest X-ray images obtained from a diverse set of sources. Our results demonstrate superior performance for the detection of the lung field compared to the state of the art, both in terms of detection accuracy, with an improvement of >7% in Dice score, and a 27.53% reduction in processing time due to fewer hypotheses.
|
|
10:30-11:30, Paper WeP4O-05.2 | Add to My Program |
Image-Based Survival Prediction for Lung Cancer Patients Using CNNs |
Haarburger, Christoph | RWTH Aachen University |
Weitz, Philippe | RWTH Aachen University |
Oliver, Rippel | RWTH Aachen University |
Merhof, Dorit | RWTH Aachen University |
Keywords: Lung, ROC analysis, Computed tomography (CT)
Abstract: Traditional survival models such as the Cox proportional hazards model are typically based on scalar or categorical clinical features. With the advent of increasingly large image datasets, it has become feasible to incorporate quantitative image features into survival prediction. So far, this kind of analysis is mostly based on radiomics features, i.e. a fixed set of features that is mathematically defined a priori. To capture highly abstract information, it is desirable to learn the feature extraction using convolutional neural networks. However, for tomographic medical images, model training is difficult because, on the one hand, only a few samples of 3D image data fit into one batch at once, and on the other hand, survival loss functions are essentially ordering measures that require large batch sizes. In this work, we show that by simplifying survival analysis to median survival classification, convolutional neural networks can be trained with small batch sizes and learn features that predict survival as well as end-to-end hazard prediction networks. Our approach outperforms (mean c-index = 0.623) the previous state of the art (mean c-index = 0.609) on a publicly available lung cancer dataset.
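The c-index reported above can be computed with Harrell's pairwise definition; a minimal sketch with hypothetical data (O(n²) loop, handling only risk ties, not the refinements of production survival libraries):

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index: fraction of comparable pairs ranked correctly.
    A pair with time[i] < time[j] is comparable only if subject i had an observed event."""
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:       # higher predicted risk should fail earlier
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# hypothetical cohort: earlier failures carry higher predicted risk
times, events, risks = [1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ordering, which is why values such as 0.623 vs. 0.609 are compared on this scale.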
|
|
10:30-11:30, Paper WeP4O-05.3 | Add to My Program |
Learning to Segment the Lung Volume from CT Scans Based on Semi-Automatic Ground-Truth
Sousa, Patrick | INESC TEC |
Galdran, Adrian | INESC TEC Porto |
Costa, Pedro | INESC TEC |
Campilho, Aurélio | Universidade Do Porto, Instituto De Engenharia Biomédica |
Keywords: Lung, Image segmentation, Computed tomography (CT)
Abstract: Lung volume segmentation is a key step in the design of Computer-Aided Diagnosis systems for automated lung pathology analysis. However, isolating the lung from CT volumes can be a challenging process due to considerable deformations and the potential presence of pathologies. Convolutional Neural Networks (CNN) are effective tools for modeling the spatial relationship between lung voxels. Unfortunately, they typically require large quantities of annotated data, and manually delineating the lung from volumetric CT scans can be a cumbersome process. We propose to train a 3D CNN to solve this task based on semi-automatically generated annotations. For this, we introduce an extension of the well-known V-Net architecture that can handle higher-dimensional input data. Even if the training set labels are noisy and contain errors, our experiments show that it is possible to learn to accurately segment the lung relying on them. Numerical comparisons on an external test set containing lung segmentations provided by a medical expert demonstrate that the proposed model generalizes well to new data, reaching an average 98.7% Dice coefficient. The proposed approach results in a superior performance with respect to the standard V-Net model, particularly on the lung boundary.
|
|
10:30-11:30, Paper WeP4O-05.4 | Add to My Program |
Pulmonary Lobe Segmentation Using a Sequence of Convolutional Neural Networks for Marginal Learning |
Gerard, Sarah | The University of Iowa |
Reinhardt, Joseph M. | The University of Iowa |
Keywords: Computed tomography (CT), Lung, Image segmentation
Abstract: Segmentation of the pulmonary lobes in computed tomography images is an important precursor for characterizing and quantifying disease patterns, regional functional analysis, and determining treatment interventions. With the increasing resolution and quantity of scans produced in the clinic, automatic and reliable lobar segmentation methods are essential for efficient workflows. In this work, a deep learning framework is proposed that utilizes convolutional neural networks for segmentation of fissures and lobes in computed tomography images. A novel pipeline is proposed that consists of a series of 3D convolutional neural networks to marginally learn the lobe segmentation. The method was evaluated extensively on a dataset of 1076 CT images from the COPDGene clinical trial, consisting of scans acquired at multiple institutions using various scanners. Overall, the method achieved a median Dice coefficient of 0.993 and a median average symmetric surface distance of 0.138 mm across all lobes. The results show the method is robust to different inspiration levels, pathologies, and image quality.
|
|
10:30-11:30, Paper WeP4O-05.5 | Add to My Program |
Enhanced Generative Model for Unsupervised Discovery of Spatially-Informed Macroscopic Emphysema: The MESA COPD Study
Gan, Yu | The University of Alabama |
Yang, Jie | Columbia University |
Smith, Benjamin M. | Department of Medicine, College of Physicians and Surgeons, Colu |
Balte, Pallavi | Columbia University Medical Center |
Hoffman, Eric | University of Iowa |
Hendon, Christine | Columbia University |
Barr, R. Graham | Columbia University Medical Center |
Laine, Andrew | Columbia University |
Angelini, Elsa | Imperial NIHR BRC, Imperial College London |
Keywords: Computed tomography (CT), Lung, Machine learning
Abstract: Pulmonary emphysema, overlapping with Chronic Obstructive Pulmonary Disease (COPD), contributes to a significant amount of morbidity and mortality annually. Computed tomography is used for in vivo quantification of emphysema and labeling into three standard subtypes at a macroscopic level. Unsupervised learning of texture patterns has great potential to discover more radiological emphysema subtypes. In this work, we improve a probabilistic Latent Dirichlet Allocation (LDA) model to discover spatially-informed lung macroscopic patterns (sLMPs) from previously learned spatially-informed lung texture patterns (sLTPs). We exploit a specific reproducibility metric to empirically tune the number of sLMPs and the size of patches. Experimental results on the MESA COPD cohort show that our algorithm can discover highly reproducible sLMPs, which are able to capture relationships between sLTPs and preferred localizations within the lung. The discovered sLMPs also achieve higher prediction accuracy for the three standard emphysema subtypes than our previous implementation.
|
|
10:30-11:30, Paper WeP4O-05.6 | Add to My Program |
Data Augmentation for Chest Pathologies Classification |
Sirazitdinov, Ilyas | Innopolis University |
Kholiavchenko, Maksym | Innopolis University |
Kuleev, Ramil | Innopolis University |
Ibragimov, Bulat | Stanford University |
Keywords: Machine learning, X-ray imaging, Lung
Abstract: Diagnosis of lung pathologies from chest X-rays is one of the main tasks in modern image-based diagnosis. Automation of lung pathology diagnosis is greatly facilitated by recent developments in deep learning-based clinical decision making. The performance of deep learning solutions has the tendency to improve with the growing number of training X-rays, which can be artificially increased by augmentation of training X-rays. Commonly, different augmentation approaches are greedily applied to the available training data without investigating the necessity and actual contribution of individual augmentation. Our work aims to fill this gap in computerized lung pathology diagnosis and evaluate the contribution of different data augmentation approaches by leveraging the publicly available ChestX-ray14 dataset.
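As a generic illustration of the kind of individual augmentations whose contribution such a study measures (the transforms and parameters here are hypothetical examples, not the set evaluated in the paper):

```python
import numpy as np

def augment(img, rng):
    """Return a randomly augmented copy of a square 2D image."""
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                     # horizontal flip
    out = np.rot90(out, k=int(rng.integers(4)))  # random 90-degree rotation
    return out + rng.normal(0.0, 0.05)           # small global intensity shift

rng = np.random.default_rng(1)
aug = augment(np.arange(16.0).reshape(4, 4), rng)
```

Evaluating each transform's necessity then amounts to retraining with it switched on or off and comparing validation metrics, rather than greedily applying all of them.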
|
|
10:30-11:30, Paper WeP4O-05.7 | Add to My Program |
Multiple Instance Learning for Malignant vs. Benign Classification of Lung Nodules in Thoracic Screening CT Data
Safta, Wiem | University of Louisville |
Farhangi, Mehdi | University of Louisville |
Veasey, Ben | University of Louisville |
Amini, Amir | University of Louisville |
Frigui, Hichem | University of Louisville |
Keywords: Machine learning, Pattern recognition and classification, ROC analysis
Abstract: Multiple Instance Learning (MIL) is proposed for computer-aided diagnosis (CADx) without predefined regions of interest (ROIs) from lung cancer screening thoracic CT scans. The method was used to classify nodules as malignant or benign on 225 malignant and 210 benign samples from the publicly available Lung Image Database Consortium image collection (LIDC-IDRI). Following feature selection based on the Gray Level Co-occurrence Matrix (GLCM), 5-fold cross-validation was carried out, with training on 4 folds and testing on the 5th fold in a round-robin fashion. Classification was performed with the Support Vector Machine for Multiple-Instance Learning (MI-SVM) classifier. The proposed method was compared to the Single Instance Learning (SIL) paradigm based on ground truth regions provided by radiologists, as well as to other state-of-the-art methodologies, and outperformed them with an average Receiver Operating Characteristic area under the curve (AUC), specificity, sensitivity, and accuracy of 0.9767, 0.9524, 0.9111, and 0.9310, respectively.
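The GLCM underlying the feature selection above counts co-occurrences of quantized gray levels at a fixed pixel offset; a minimal brute-force sketch (tiny toy image, single offset; libraries such as scikit-image provide optimized, multi-offset versions):

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray Level Co-occurrence Matrix for one (row, col) pixel offset."""
    m = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1  # count the gray-level pair
    return m

# 2-level toy image; texture features (contrast, energy, ...) derive from m
m = glcm(np.array([[0, 0], [1, 1]]), levels=2)
```

Classical GLCM features (contrast, homogeneity, energy, entropy) are then scalar statistics of this matrix, typically averaged over several offsets and directions.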
|
|
10:30-11:30, Paper WeP4O-05.8 | Add to My Program |
Automatic Pulmonary Lobe Segmentation Using Deep Learning |
Tang, Hao | University of California, Irvine |
Zhang, Chupeng | University of California, Irvine |
Xie, Xiaohui | University of California, Irvine |
Keywords: Computed tomography (CT), Lung, Machine learning
Abstract: Pulmonary lobe segmentation is an important task for pulmonary disease related Computer Aided Diagnosis systems (CADs). Classical methods for lobe segmentation rely on successful detection of fissures and other anatomical information such as the location of blood vessels and airways. With the success of deep learning in recent years, Deep Convolutional Neural Networks (DCNNs) have been widely applied to analyze medical images like Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), which, however, requires a large number of ground truth annotations. In this work, we release our 50 manually labeled CT scans, randomly chosen from the LUNA16 dataset, and explore the use of deep learning for this task. We propose pre-processing the CT image by cropping the region covered by the convex hull of the lungs in order to mitigate the influence of noise from outside the lungs. Moreover, we design a hybrid loss function with a Dice loss to tackle the extreme class imbalance and a focal loss to force the model to focus on pixels that are hard to discriminate. To validate the robustness and performance of our proposed framework trained with a small number of training examples, we further tested our model on CT scans from an independent dataset. Experimental results show the robustness of the proposed approach, which consistently improves performance across different datasets by up to 5.87% compared to a baseline model.
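A hybrid of Dice loss and focal loss, as described above, can be sketched in a few lines of numpy (binary case with an equal weighting, a simplification; the paper's multi-class weighting and implementation details may differ):

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    """Soft Dice loss: penalizes poor overlap, robust to class imbalance."""
    inter = (p * y).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + y.sum() + eps)

def focal_loss(p, y, gamma=2.0, eps=1e-6):
    """Focal loss: down-weights easy pixels by (1 - p_t)^gamma."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)   # probability assigned to the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(p, y, w=0.5):
    return w * dice_loss(p, y) + (1.0 - w) * focal_loss(p, y)

y = np.array([0.0, 1.0, 1.0, 0.0])      # toy ground-truth mask
```

The Dice term keeps the overlap objective insensitive to the background/foreground ratio, while the focal term concentrates gradient on ambiguous boundary pixels.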
|
|
10:30-11:30, Paper WeP4O-05.9 | Add to My Program |
Regression of the Navier-Stokes Equation Solutions for Pulmonary Airway Flow Using Neural Networks |
de los Ojos Araúzo, Diego | Universidad Politécnica De Madrid |
Nardelli, Pietro | Brigham and Women's Hospital, Harvard Medical School |
San Jose Estepar, Raul | Brigham Women's Hospital and Harvard Medical School |
Keywords: Lung, Machine learning, Other-modality
Abstract: Computational fluid dynamics (CFD) models of particle deposition in the human airways are used to characterize deposition patterns that enable the study of lung diseases like asthma and chronic obstructive pulmonary disease (COPD). Despite this, the influence of patient-specific geometry on deposition efficiency and patterns is not well documented or modeled. In part, this is due to the complexity of simulating the full CFD solution in patient-specific airway geometries, a factor that becomes a major hurdle for patient-specific studies given the complexity of the geometry of human lungs and their related airflow. In this paper, we present a neural-network-based approximation to the Navier-Stokes equations that govern airway flow in a Physiologically Realistic Bifurcation (PRB) model of the conducting region of a single-generation human airway branch. The flow distribution and deposition of tobacco particles have been simulated for the inspiratory regime using ANSYS Fluent, and a neural network has been trained to regress the mean velocity and mass flow components. Our results show that the approximation works well under the modeled assumptions, and the serial application of this model to a two-generation airway geometry provides reasonable approximations.
|
|
WeP4O-06 Poster Session, Foyer |
Add to My Program |
Pattern Recognition and Classification - Poster |
|
|
|
10:30-11:30, Paper WeP4O-06.1 | Add to My Program |
Intelligent Glaucoma Diagnosis Via Active Learning and Adversarial Data Augmentation |
Wang, Zhanyu | Tsinghua University |
Wang, Zhe | Sensetime Group Limited |
Qu, Guoxiang | Guangdong Key Lab of Computer Vision & Virtual Reality, Shenzhen |
Li, Fei | Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmolo |
Yuan, Ye | C-MER Dennis Lam Eye Hospital, Shenzhen, China |
Lam, Dennis SC | C-MER Dennis Lam Eye Hospital, Shenzhen, China |
Zhang, Xiulan | Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmolo |
Qiao, Yu | Guangdong Key Lab of Computer Vision & Virtual Reality, Shenzhen |
Keywords: Classification, Machine learning, Image synthesis
Abstract: Glaucoma is one of the leading causes of blindness in the world. We develop a convolutional neural network for glaucoma diagnosis based on visual fields (VF), the gold standard for showing functional damage of the optic nerve. However, we have to deal with two major problems common in medical imaging domains: 1) it is difficult and expensive to label a large amount of data, while most modern deep learning methods require it; and 2) severe data imbalance makes the classifier prone to over-fitting. In this work, for the first problem, we train an autoencoder with all the data (labeled and unlabeled) to obtain good features and introduce an active learning (AL) scheme to select and annotate a few of the most valuable samples from the unlabeled data set for model training. Then, we address the second problem by augmenting negative samples generated by a deep convolutional generative adversarial network (DCGAN). Experiments on our dataset (738 samples) suggest the effectiveness of the proposed approach.
|
|
10:30-11:30, Paper WeP4O-06.2 | Add to My Program |
Lesion Classification of Wireless Capsule Endoscopy Images |
Yang, Wenming | Tsinghua University |
Cao, Yaxing | Tsinghua University |
Zhao, Qian | Tsinghua University |
Ren, Yong | Tsinghua University |
Liao, Qingmin | Tsinghua University |
Keywords: Pattern recognition and classification, Classification, Endoscopy
Abstract: In this paper, we propose a scheme to classify Wireless Capsule Endoscopy (WCE) lesion images for diagnosis. The main contribution is to quantify multi-scale pooled channel-wise information and merge multi-level features by explicitly modeling interdependencies between the feature maps of different convolution layers. First, feature maps are resized to multiple scales with bicubic interpolation; then a down-sampling convolution is applied to obtain pooled feature maps of the same resolution; finally, 1x1 convolution kernels fuse the feature maps after a quantization operation based on a channel-wise attention mechanism, enhancing the feature extraction of the proposed architecture. Preliminary experimental results show that our scheme, with fewer model parameters, achieves competitive results compared to state-of-the-art methods on the WCE image classification task.
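The channel-wise attention mechanism mentioned above can be sketched, under squeeze-and-excitation-style assumptions that the paper may or may not follow exactly, as gating each feature map by a weight derived from pooled channel statistics (the MLP weights here are random placeholders):

```python
import numpy as np

def channel_attention(fmaps, w1, w2):
    """Reweight feature maps of shape (C, H, W) by channel attention:
    global average pool -> 2-layer MLP -> sigmoid gate per channel."""
    pooled = fmaps.mean(axis=(1, 2))              # squeeze: (C,)
    hidden = np.maximum(0.0, w1 @ pooled)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid, (C,)
    return fmaps * gate[:, None, None]            # excite: rescale maps

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
fmaps = rng.random((C, H, W))
w1 = rng.standard_normal((4, C))   # squeeze C channels to 4
w2 = rng.standard_normal((C, 4))   # expand back to C gates
out = channel_attention(fmaps, w1, w2)
```

Because the gates lie in (0, 1), the operation can only attenuate channels, never amplify them, which is the usual attention-as-soft-selection behaviour.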
|
|
10:30-11:30, Paper WeP4O-06.3 | Add to My Program |
End-To-End Discriminative Deep Network for Liver Lesion Classification |
Perdigon Romero, Francisco | Federal University of Amazonas |
Diler, André | Polytechnique Montreal |
Bisson-Gregoire, Gabriel | Polytechnique Montreal |
Turcotte, Simon | Centre Hospitalier Université De Montreal |
Lapointe, Real | Centre Hospitalier Université De Montreal |
Vandenbroucke-Menu, Franck | Centre Hospitalier Université De Montreal |
Tang, An | Radiology, Hopital Saint-Luc, Universite De Montreal |
Kadoury, Samuel | Polytechnique Montreal |
Keywords: Classification, Liver, Computer-aided detection and diagnosis (CAD)
Abstract: Colorectal liver metastasis is one of the most aggressive liver malignancies. While the lesion type determined from CT images drives the diagnosis and therapeutic strategy, discriminating between cancerous and non-cancerous lesions is critical and requires highly skilled expertise, experience and time. In the present work we introduce an end-to-end deep learning approach to assist in discriminating liver metastases from colorectal cancer and benign cysts in abdominal CT images of the liver. Our approach incorporates the efficient feature extraction of InceptionV3 combined with residual connections and pre-trained weights from ImageNet. The architecture also includes fully connected classification layers to generate a probabilistic output of lesion type. We use an in-house clinical biobank with 230 liver lesions originating from 63 patients. With an accuracy of 0.96 and an F1-score of 0.92, the results obtained with the proposed approach surpass state-of-the-art methods. Our work provides the basis for incorporating machine learning tools in specialized radiology software to assist physicians in the early detection and treatment of liver lesions.
|
|
10:30-11:30, Paper WeP4O-06.4 | Add to My Program |
Dual Adversarial Autoencoder for Dermoscopic Image Generative Modeling |
Yang, Hao-Yu | Yale University |
Staib, Lawrence H. | Yale University |
Keywords: Modeling - Image formation, Skin, Classification
Abstract: Skin cancer is a severe public health issue in the United States and worldwide. While Computer Aided Diagnosis (CAD) of dermoscopic images shows potential in accelerating diagnosis and improving accuracy, numerous issues remain that may be addressed by generative modeling. Major challenges in automated skin lesion classification include manual efforts required to label new training data and a relatively limited amount of data compared to more generalized computer vision tasks. We propose a novel generative model based on a dual discrimination training algorithm for autoencoders. At each training iteration, the encoder and decoder undergo two stages of adversarial training by two individual discriminator networks. The algorithm is end-to-end trainable with standard back-propagation. In contrast with traditional autoencoders, our method incorporates extra constraints via adversarial training, which results in visually realistic synthetic data. We demonstrate the versatility of the proposed method and applications on numerous tasks including latent space visualization, data augmentation, and image denoising.
|
|
10:30-11:30, Paper WeP4O-06.5 | Add to My Program |
Surrogate Supervision for Medical Image Analysis: Effective Deep Learning from Limited Quantities of Labeled Data |
Tajbakhsh, Nima | ASU |
Hu, Yufei | VoxelCloud Inc |
Cao, Junli | Voxelcloud Inc |
Yan, Xingjian | Voxelcloud Inc |
Xiao, Yi | Department of Radiology, Changzheng Hospital, Second Military M |
Lu, Yong | Radiology of Department, Ruijin Hospital, Shanghai Jiaotong Univ |
Liang, Jianming | Arizona State University |
Terzopoulos, Demetri | University of California, Los Angeles |
Ding, Xiaowei | VOXELCLOUD INC |
|
|
10:30-11:30, Paper WeP4O-06.6 | Add to My Program |
A Dual Stream Network for Tumor Detection in Hyperspectral Images |
Weijtmans, Pim Jan Christiaan | Eindhoven University of Technology, Philips Research Eindhoven |
Shan, Caifeng | Philips Research |
Tan, Tao | Eindhoven University of Technology |
Brouwer de Koning, Susan G | Netherlands Cancer Institute |
Ruers, T.J.M. | Netherlands Cancer Institute |
Keywords: Multi- and Hyper-spectral imaging, Machine learning, Classification
Abstract: Hyperspectral imaging is an emerging imaging modality for medical applications. In this work, we propose to combine the spectral and structural information in the hyperspectral data cube for tumor detection in tongue tissue. A dual stream network is designed, with a spectral and a structural branch. Hyperspectral data (480 to 920 nm) were collected from 7 patients with tongue squamous cell carcinoma, with ground truth labels provided by histopathological analysis. The proposed dual stream model outperforms the purely spectral and purely structural approaches, with areas under the ROC curve of 0.90, 0.87 and 0.85, respectively.
|
|
10:30-11:30, Paper WeP4O-06.7 | Add to My Program |
Recurrent Attention Mechanism Networks for Enhanced Classification of Biomedical Images |
Shaikh, Mazhar | Indian Insitute of Technology Madras |
Kollerathu, Varghese Alex | Department of Engineering Design, Indian Institute of Technology |
Krishnamurthi, Ganapathy | Indian Institute of Technology Madras |
Keywords: Machine learning, Classification, Brain
Abstract: Convolutional neural networks achieve state-of-the-art results for a variety of tasks. However, this improved performance comes at the cost of performing convolutional operations throughout the entire image. Resizing images to manageable sizes is an often-used technique to reduce this computational overhead, but on medical images lesions occupy a small proportion of pixels and resizing may lead to loss of information. Recurrent attention mechanism (RAM) based networks reduce computational overhead while performing convolutional operations on high-resolution images. We utilize a RAM-based network for the classification of biomedical images. The proposed technique was tested on two distinct classification tasks: classifying brain tumors from magnetic resonance images and predicting the severity of diabetic macular edema from fundus images. On the former task (n=300), the technique achieved a state-of-the-art accuracy of 97%, while on the latter (n=89), the proposed model achieved an accuracy of 93.37%.
|
|
10:30-11:30, Paper WeP4O-06.8 | Add to My Program |
ADHD Classification within and Cross Cohort Using an Ensembled Feature Selection Framework
Yao, Dongren | Institute of Automation, Chinese Academy of Sciences |
Sun, Hailun | University of Chinese Academy of Sciences |
Guo, Xiaojie | Peking University |
Calhoun, Vince | The Mind Research Network/University of New Mexico |
Sun, Li | Peking University |
Sui, Jing | Institute of Automation, Chinese Academy of Science |
Keywords: Dimensionality reduction, Machine learning, Pattern recognition and classification
Abstract: Attention-deficit/hyperactivity disorder (ADHD) is a childhood-onset neurodevelopmental disorder that often persists into adulthood. However, for lack of objective measures, several studies have questioned the stability of ADHD diagnoses from childhood to adulthood. In this study, we propose a novel feature selection framework based on functional connectivity (FC) patterns, called 'FS_RIWEL', which can classify ADHD from age-matched healthy controls (HCs) with ~80% accuracy, for both children and adults. More importantly, the feature space learned from the child ADHD dataset is able to discriminate adult ADHD from HCs at ~70% accuracy. To the best of our knowledge, this is the first attempt at cross-cohort prediction between adult and child ADHD using FC features. In addition, the most frequently selected FCs indicate that ADHD exhibits widely impaired FC patterns in the frontoparietal, basal ganglia and cerebellum networks, among others, suggesting that FCs may serve as potential biomarkers for ADHD diagnosis.
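Ensembled feature selection of the kind described (the FS_RIWEL algorithm itself is not given here) is often approximated by stability selection: rank functional connections by their label correlation across bootstrap resamples and keep those selected most often. A hedged numpy sketch on synthetic data, with `stable_features` as a hypothetical stand-in:

```python
import numpy as np

def stable_features(X, y, n_rounds=50, top=5, seed=0):
    """Rank features by how often they land among the `top`
    absolute correlations with the label across bootstrap
    resamples; return per-feature selection frequencies."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    votes = np.zeros(d)
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        Xb, yb = X[idx], y[idx]
        yc = yb - yb.mean()
        xc = Xb - Xb.mean(axis=0)
        denom = np.linalg.norm(xc, axis=0) * np.linalg.norm(yc) + 1e-12
        corr = np.abs(xc.T @ yc) / denom          # |corr| per feature
        votes[np.argsort(corr)[::-1][:top]] += 1
    return votes / n_rounds

rng = np.random.default_rng(1)
n, d = 200, 30
y = rng.integers(0, 2, size=n).astype(float)      # diagnosis labels
X = rng.standard_normal((n, d))                   # FC feature matrix
X[:, 0] += 2.0 * y                                # feature 0 carries signal
freq = stable_features(X, y)
```

A truly informative connection is selected in essentially every resample, while noise features are picked only sporadically, which is what makes the frequency a usable stability score.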
|
|
10:30-11:30, Paper WeP4O-06.9 | Add to My Program |
Exploiting Visual and Report-Based Information for Chest X-Ray Analysis by Jointly Learning Visual Classifiers and Topic Models |
Daniels, Zachary | Rutgers University |
Metaxas, Dimitris | Rutgers University |
Keywords: Pattern recognition and classification, Machine learning, X-ray imaging
Abstract: Manual examination of chest x-rays is a time-consuming process that involves significant effort by expert radiologists. Recent work attempts to alleviate this problem by developing learning-based automated chest x-ray analysis systems that map images to multi-label diagnoses using deep neural networks. These methods are often treated as black boxes, or they output attention maps but do not explain why the attended areas are important. Given data consisting of a frontal-view x-ray, a set of natural language findings, and one or more diagnostic impressions, we propose a deep neural network model that during training simultaneously 1) constructs a topic model which clusters key terms from the findings into meaningful groups, 2) predicts the presence of each topic for a given input image based on learned visual features, and 3) uses an image's predicted topic encoding as features to predict one or more diagnoses. Since the net learns the topic model jointly with the classifier, it gives us a powerful tool for understanding which semantic concepts the net might be exploiting when making diagnoses, and since we constrain the net to predict topics based on expert-annotated reports, the net automatically encodes some higher-level expert knowledge about how to make diagnoses.
|
|
10:30-11:30, Paper WeP4O-06.10 | Add to My Program |
Producing Radiologist Quality Reports for Interpretable Deep Learning |
Gale, William | University of Adelaide |
Oakden-Rayner, Luke | University of Adelaide |
Carneiro, Gustavo | University of Adelaide |
Palmer, Lyle | University of Adelaide |
Bradley, Andrew Peter | Queensland University of Technology |
Keywords: Pattern recognition and classification, Bone, X-ray imaging
Abstract: Current approaches to explaining the decisions of deep learning systems for medical tasks have focused on visualising the elements that have contributed to each decision. We argue that such approaches are not enough to "open the black box" of medical decision making systems because they are missing a key component that has been used as a standard communication tool between doctors for centuries: language. We propose a model-agnostic interpretability method that involves training a simple recurrent neural network model to produce descriptive sentences to clarify the decision of deep learning classifiers. We test our method on the task of detecting hip fractures from frontal pelvic x-rays. This process requires minimal additional labelling despite producing text containing elements that the original deep learning classification model was not specifically trained to detect. The experimental results show that: 1) the sentences produced by our method consistently contain the desired information, 2) the generated sentences are preferred by doctors compared to current tools that create saliency maps, and 3) the combination of visualisations and generated text is better than either alone.
|
|
10:30-11:30, Paper WeP4O-06.11 | Add to My Program |
Robust Learning at Noisy Labeled Medical Images: Applied to Skin Lesion Classification
Xue, Cheng | Chinese University of Hong Kong |
Dou, Qi | The Chinese University of Hong Kong |
Shi, Xueying | CUHK |
Chen, Hao | The Chinese University of Hong Kong |
Heng, Pheng Ann | The Chinese University of Hong Kong |
Keywords: Skin, Classification, Machine learning
Abstract: Deep neural networks (DNNs) have achieved great success in a wide variety of medical image analysis tasks. However, these achievements rely indispensably on accurately annotated datasets. With noisy-labeled images, the training procedure immediately encounters difficulties, leading to a suboptimal classifier. This problem is even more crucial in the medical field, given that annotation quality requires great expertise. In this paper, we propose an effective iterative learning framework for noisy-labeled medical image classification, to combat the lack of high-quality annotated medical data. Specifically, an online uncertainty sample mining method is proposed to eliminate the disturbance from noisy-labeled images. Next, we design a sample re-weighting strategy to preserve the usefulness of correctly labeled hard samples. Our proposed method is validated on a skin lesion classification task and achieves very promising results.
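The online uncertainty sample mining idea, dropping the samples most likely to carry noisy labels, can be sketched as loss-based re-weighting within a mini-batch (a simplification; the paper's actual criterion is more elaborate):

```python
import numpy as np

def reweight_by_loss(losses, keep_frac=0.8):
    """Small-loss mining sketch: zero out the highest-loss fraction
    of samples (treated as likely noisy-labeled) and keep unit
    weight for the rest of the batch."""
    n = len(losses)
    k = int(np.ceil(keep_frac * n))
    order = np.argsort(losses)        # ascending per-sample loss
    weights = np.zeros(n)
    weights[order[:k]] = 1.0          # trust the small-loss samples
    return weights

# Per-sample losses for a batch of 10; three look suspiciously large.
losses = np.array([0.2, 0.1, 3.5, 0.4, 4.1, 0.3, 0.25, 0.15, 0.5, 3.9])
w = reweight_by_loss(losses, keep_frac=0.7)
```

The weighted loss `(w * losses).mean()` then drives the gradient step, so the probable label-noise outliers contribute nothing to that iteration.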
|
|
10:30-11:30, Paper WeP4O-06.12 | Add to My Program |
Saliency-Driven System with Deep Learning for Cell Image Classification |
Ferreira, Daniel | Federal University of Ceara |
Ramalho, Geraldo | Federal Institute of Ceara |
Medeiros, Fátima N.S. | Federal University of Ceara |
Bianchi, Andrea G. Campos | Federal University of Ouro Preto |
Carneiro, Claudia | Federal University of Ouro Preto |
Ushizima, Daniela | Lawrence Berkeley National Laboratory |
Keywords: Microscopy - Light, Confocal, Fluorescence, Cervix, Computer-aided detection and diagnosis (CAD)
Abstract: This paper describes an automatic cell image classification algorithm that combines experts' eye-tracking data with convolutional neural networks. Our framework selects regions of interest that attract cytologists' attention, then focuses computation on cell classification of these specific sub-images. Our contribution is to fuse deep learning with saliency maps from eye-tracking into an approach that bypasses segmentation to detect abnormal cells from Pap smear microscopy under real noisy conditions, artifacts and occlusion. Preliminary results show a high classification accuracy of ~90% in locating and identifying critical cells at three levels: normal, low-risk disease and high-risk disease. We validate our results on 111 images containing 3,183 cells, with an average runtime of 4.5 seconds per image.
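Selecting regions of interest from eye-tracking data can be as simple as cropping patches around fixation points before classification; a minimal sketch (the fixation coordinates and patch size here are hypothetical, not from the paper):

```python
import numpy as np

def crop_fixations(image, fixations, size=32):
    """Extract square patches centred on eye-tracking fixation
    points (row, col), clamped so every patch stays in bounds."""
    h, w = image.shape[:2]
    half = size // 2
    patches = []
    for (r, c) in fixations:
        r0 = min(max(r - half, 0), h - size)
        c0 = min(max(c - half, 0), w - size)
        patches.append(image[r0:r0 + size, c0:c0 + size])
    return patches

image = np.arange(128 * 128).reshape(128, 128)     # stand-in microscopy field
fixations = [(10, 10), (64, 64), (127, 127)]       # hypothetical gaze hits
patches = crop_fixations(image, fixations, size=32)
```

Each patch, rather than the full field, would then be fed to the CNN, which is how eye-tracking saliency lets the pipeline skip explicit cell segmentation.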
|
|
WeP4O-07 Poster Session, Foyer |
Add to My Program |
Cancer Imaging and Analysis |
|
|
|
10:30-11:30, Paper WeP4O-07.1 | Add to My Program |
Towards an Interpretable Radiomics Model for Classifying Renal Cell Carcinomas Subtypes: A Radiogenomics Assessment |
Li, Zhi-Cheng | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
Wu, Guang-yu | Renji Hospital, Shanghai Jiao Tong University |
Zhang, Jinheng | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
Wang, Zhongqiu | The Affiliated Hospital of Nanjing University of Chinese Medicin |
Liu, Guiqin | Renji Hospital, Shanghai Jiao Tong University |
Liang, Dong | Shenzhen Institutes of Advanced Technology |
Keywords: Computed tomography (CT), Kidney, Computer-aided detection and diagnosis (CAD)
Abstract: Differentiating clear cell renal cell carcinomas (ccRCC) from non-ccRCC subtypes is of essential importance, as they have substantially different prognoses and therapeutic pathways. Radiomics is an imaging-based approach successfully applied to many cancer-subtype classification tasks. Despite its strong performance, it is challenging to understand why a radiomics model makes a particular prediction. This paper presents an interpretable radiomics model that extracts all-relevant features from multiphasic CT for differentiating ccRCC from non-ccRCC. The biological meaning of the radiomics was investigated by assessing a possible radiogenomic link between the imaging features and a key ccRCC driver gene mutation, von Hippel-Lindau (VHL). The model, with eight all-relevant features, achieved an AUC of 0.949 and an accuracy of 92.9%. Five features were significantly associated with VHL mutation (FDR p<.05). This implies that a radiomics model can be both accurate and interpretable when its imaging features reflect the underlying molecular basis of the cancer.
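The FDR correction cited above (p<.05) is typically the Benjamini-Hochberg procedure; a self-contained sketch with hypothetical feature-vs-mutation p-values:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of p-values declared significant under the
    Benjamini-Hochberg false discovery rate procedure at level q:
    sort ascending, find the largest rank k with p_(k) <= q*k/m,
    and reject everything up to that rank."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            cutoff = rank
    return sorted(order[:cutoff]) if cutoff > 0 else []

# Five hypothetical association p-values between imaging features and VHL.
pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
significant = benjamini_hochberg(pvals, q=0.05)
```

Note that 0.039 survives a raw p<0.05 threshold but not the FDR threshold 0.05*3/5 = 0.03, which is exactly the multiplicity control the abstract's "FDR p<.05" refers to.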
|
|
10:30-11:30, Paper WeP4O-07.2 | Add to My Program |
Radiomic-Based Framework for Early Diagnosis of Lung Cancer |
Shaffie, Ahmed | University of Louisville |
Soliman, Ahmed | University of Louisville |
Abu Khalifeh, Hadil | Abu Dhabi University |
Ghazal, Mohammed | Abu Dhabi University |
Taher, Fatma | Electrical and Computer Engineering Department, Khalifa Universi |
Elmaghraby, Adel | University of Louisville |
Keynton, Robert | Bioengineering Department, University of Louisville |
El-baz, Ayman | University of Louisville |
Keywords: Computer-aided detection and diagnosis (CAD), Lung, Computed tomography (CT)
Abstract: This paper proposes a new framework for pulmonary nodule diagnosis using radiomic features extracted from a single computed tomography (CT) scan. The proposed framework integrates appearance and shape features to obtain a precise diagnosis for the extracted lung nodules. The appearance features are modeled using a 3D Histogram of Oriented Gradients (HOG) and a higher-order Markov-Gibbs random field (MGRF) model because of their ability to describe the spatial non-uniformity of the nodule's texture regardless of its size. The shape features are modeled using a spherical harmonic expansion and some basic geometric features in order to fully describe the shape complexity of the nodules. Finally, all the modeled features are fused and fed to a stacked autoencoder to differentiate between malignant and benign nodules. Our framework is evaluated on 727 nodules selected from the Lung Image Database Consortium (LIDC) dataset, and achieved a classification accuracy, sensitivity, and specificity of 93.12%, 92.47%, and 93.60%, respectively.
|
|
10:30-11:30, Paper WeP4O-07.3 | Add to My Program |
Deep Metamemory - a Generic Framework for Stabilized One-Shot Confidence Estimation in Deep Neural Networks and Its Application on Colorectal Cancer Liver Metastases Growth Prediction |
Katzmann, Alexander | Siemens Healthcare GmbH |
Mühlberg, Alexander | Siemens Healthcare GmbH |
Suehling, Michael | Siemens AG |
Noerenberg, Dominik | University Hospital Großhadern, Ludwig-Maximilians-University Mu |
Gross, Horst-Michael | University of Technology Ilmenau |
Keywords: Machine learning, Liver, Computed tomography (CT)
Abstract: With the rise of deep learning in medical applications, questions about classification confidence become of major interest, as misclassifications may have a serious impact on human health. While multiple ways of confidence estimation have been proposed, most of them suffer from computational inefficiency or low statistical accuracy. We utilize a modified version of the method introduced by DeVries et al. for one-shot confidence estimation and show its application to colorectal cancer liver metastasis growth prediction. Furthermore, we propose a psychologically motivated generalized training framework called "deep metamemory", comparable to the idea of curriculum learning, which utilizes confidence estimation for efficient training augmentation with improved classification performance on unseen data.
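The one-shot confidence estimation of DeVries et al. trains a confidence output c alongside the class probabilities by interpolating the prediction toward the true label by c's complement and penalizing low confidence. A numpy sketch of that loss (a simplified reading of the idea, not the authors' exact code):

```python
import numpy as np

def confidence_loss(probs, conf, onehot, lam=0.1):
    """Learning-confidence sketch (after DeVries & Taylor):
    interpolate the prediction toward the label according to the
    predicted confidence c, then add a -log(c) penalty so the
    network cannot get free hints by always outputting c ~ 0."""
    eps = 1e-12
    adjusted = conf[:, None] * probs + (1 - conf[:, None]) * onehot
    nll = -np.log((adjusted * onehot).sum(axis=1) + eps)   # task loss
    penalty = -np.log(conf + eps)                          # confidence cost
    return (nll + lam * penalty).mean()

probs = np.array([[0.7, 0.3], [0.4, 0.6]])   # second sample is wrong-ish
onehot = np.array([[1.0, 0.0], [1.0, 0.0]])
loss_confident = confidence_loss(probs, np.array([0.9, 0.9]), onehot)
loss_hedging = confidence_loss(probs, np.array([0.1, 0.1]), onehot)
```

On hard samples the network is rewarded for hedging (low c buys label hints at a small -log(c) cost), and at test time c itself serves as the one-shot confidence estimate.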
|
|
10:30-11:30, Paper WeP4O-07.4 | Add to My Program |
Collaborative Clustering of Subjects and Radiomic Features for Predicting Clinical Outcomes of Rectal Cancer Patients |
Liu, Hangfan | University of Pennsylvania |
Li, Hongming | University of Pennsylvania |
Boimel, Pamela | University of Pennsylvania |
Janopaul-Naylor, James | University of Pennsylvania |
Zhong, Haoyu | University of Pennsylvania |
Xiao, Ying | University of Pennsylvania |
Ben-Josef, Edgar | University of Pennsylvania |
Fan, Yong | University of Pennsylvania |
Keywords: Radiation therapy, planing and treatment, Machine learning, Image-guided treatment
Abstract: Most machine learning approaches in radiomics studies ignore the underlying differences among radiomic features computed from heterogeneous groups of patients, and the intrinsic correlations of the features are not yet fully exploited. In order to better predict the clinical outcomes of cancer patients, we adopt an unsupervised machine learning method that simultaneously stratifies cancer patients into distinct risk groups based on their radiomic features and learns low-dimensional representations of those features for robust outcome prediction. Based on nonnegative matrix tri-factorization, the proposed method applies collaborative clustering to the radiomic features of cancer patients to obtain clusters of both the patients and their radiomic features, so that patients with distinct imaging patterns are stratified into different risk groups and highly correlated radiomic features are grouped in the same feature clusters. Experiments on an FDG-PET/CT dataset of rectal cancer patients demonstrate that the proposed method facilitates better stratification of patients with distinct survival patterns and learns more effective low-dimensional feature representations, ultimately leading to accurate prediction of patient survival and outperforming conventional methods under comparison.
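Nonnegative matrix tri-factorization X ~ F S Gᵀ, the core of the collaborative clustering described above, can be sketched with the standard multiplicative updates (illustrative only; the paper's algorithm adds constraints not shown here):

```python
import numpy as np

def nmtf(X, k1, k2, iters=200, seed=0):
    """Nonnegative matrix tri-factorization X ~ F @ S @ G.T via
    multiplicative updates. Rows of F soft-cluster patients, rows
    of G soft-cluster radiomic features, and S couples the two."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k1)) + 0.1
    S = rng.random((k1, k2)) + 0.1
    G = rng.random((m, k2)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G

rng = np.random.default_rng(3)
# Block-structured toy "patients x radiomic features" matrix plus noise.
X = np.kron(np.eye(2), np.ones((10, 6))) + 0.05 * rng.random((20, 12))
F, S, G = nmtf(X, k1=2, k2=2)
err = np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X)
```

Reading off `F.argmax(axis=1)` and `G.argmax(axis=1)` recovers the patient and feature co-clusters, which is the "collaborative" part of the clustering.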
|
|
10:30-11:30, Paper WeP4O-07.5 | Add to My Program |
Tumor Burden Assessment in Lymphoma Patients: Hierarchical Analysis of Whole Body CT |
Bolluyt, Elijah | Stevens Institute of Technology |
Comaniciu, Alexandra | The Lawrenceville School |
Georgescu, Bogdan | Siemens Medical Solutions |
|
|
10:30-11:30, Paper WeP4O-07.6 | Add to My Program |
Novel Radiomic Features Based on Graph Theory for PET Image Analysis
Zhou, Zhiling | Massachusetts General Hospital and Harvard Medical School |
Guo, Ning | Massachusetts General Hospital/Harvard Medical School |
Cui, Jianan | Department of Radiology, Massachusetts General Hospital and Harv |
Meng, Xiaxia | Department of Radiology, Massachusetts General Hospital and Harv |
Hu, Yiwei | Yale University |
Bao, Han | Massachusetts General Hospital |
Li, Xiang | Harvard Medical School, Massachusetts General Hospital |
Li, Quanzheng | Harvard Medical School, Massachusetts General Hospital |
Keywords: Nuclear imaging (e.g. PET, SPECT), Quantification and estimation, Computer-aided detection and diagnosis (CAD)
Abstract: We propose a series of new radiomic features for PET image analysis based on graph theory and network analysis. Current PET radiomic features are mostly developed for, or transferred from, CT image analysis and mainly focus on texture information, whereas PET images usually contain functional information at lower resolution; current radiomic features therefore lack interpretability and specificity for PET image quantification. Meanwhile, a large number of texture features have similar definitions, which causes severe redundancy in analysis and classification tasks. We propose novel radiomic features based on graph theory that specifically represent PET image characteristics. Using a set of graph-analysis tools, a new series of PET radiomic features that reveal different attributes of the tumor, particularly intratumoral heterogeneity, is extracted. We applied the proposed method to lung cancer diagnosis and prognosis to evaluate the performance of the new features. Using an ANN as the classifier, our graph-based features outperformed traditional PET radiomic features, and combining our features with traditional features achieved an even better performance. This indicates that our graph-based features reveal significant and unique information about the tumor in PET images.
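One way graph theory can expose intratumoral heterogeneity, purely as an illustration and not the paper's actual feature definitions, is to connect neighbouring voxels with similar uptake and summarize node degrees: internal uptake boundaries break edges and pull the mean degree down.

```python
import numpy as np

def intensity_graph_degree(img, sim_thresh=0.2):
    """Build a graph over the pixels of a 2D uptake patch: connect
    4-neighbours whose intensities differ by less than sim_thresh,
    and return each node's degree. A homogeneous region yields a
    near-uniform high degree map; heterogeneity shows up as low
    degrees along internal boundaries."""
    h, w = img.shape
    deg = np.zeros((h, w), dtype=int)
    for dr, dc in ((0, 1), (1, 0)):           # right and down neighbours
        a = img[:h - dr, :w - dc]
        b = img[dr:, dc:]
        edge = np.abs(a - b) < sim_thresh     # similar -> edge exists
        deg[:h - dr, :w - dc] += edge
        deg[dr:, dc:] += edge
    return deg

flat = np.ones((8, 8))                         # homogeneous uptake
split = np.ones((8, 8)); split[:, 4:] = 2.0    # two uptake compartments
mean_deg_flat = intensity_graph_degree(flat).mean()
mean_deg_split = intensity_graph_degree(split).mean()
```

The heterogeneous patch loses the eight edges crossing the compartment boundary, so its mean degree is strictly lower, giving a scalar heterogeneity cue of the broad kind the abstract describes.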
|
|
10:30-11:30, Paper WeP4O-07.7 | Add to My Program |
CT-NNBI: Method to Impute Gene Expression Data Using DCT Based Sparsity and Nuclear Norm Constraint with Split Bregman Iteration
Gehlot, Shiv | Indraprastha Institute of Information Technology (IIIT-Delhi) |
Farswan, Akanksha | Indraprastha Institute of Information Technology, New Delhi |
Gupta, Anubha | IIIT Delhi |
Gupta, Ritu | AIIMS Delhi |
Keywords: Microarrays, Genes, Optimization method
Abstract: High-dimensional genomics data such as microarray gene expression and RNA sequencing generally suffer from missing values. Incomplete data can adversely affect downstream analysis for diagnostics and treatment. Several methods to impute missing values in gene expression data have been developed, but most of these work only at high levels of observability. In this paper, we propose a novel 2-stage method for imputing incomplete gene expression matrices using discrete cosine transform domain sparsity and a nuclear norm constraint with Split Bregman iteration (CT-NNBI), which consistently yields smaller imputation errors at all levels of observability. The proposed method has been compared with state-of-the-art matrix completion methods on three different cancer datasets and is observed to perform better. The imputed data is validated on a classification application.
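In a Split Bregman scheme, the nuclear norm constraint is enforced through its proximal operator, singular value thresholding; a minimal sketch of that single sub-step (not the full CT-NNBI pipeline, whose other blocks handle the DCT sparsity term):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm. Shrinks every singular value by tau and zeroes
    the ones that fall below it, producing a low-rank estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(7)
# Low-rank "expression" matrix (rank 3) plus small noise.
L = rng.random((30, 3)) @ rng.random((3, 40))
M = L + 0.01 * rng.standard_normal((30, 40))
D = svt(M, tau=0.5)
rank_after = np.linalg.matrix_rank(D, tol=1e-6)
```

The noise lifts M to full rank, but its singular values sit far below tau, so one SVT application collapses the estimate back to the underlying rank — the mechanism that lets nuclear-norm iterations fill missing entries from low-rank structure.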
|
|
10:30-11:30, Paper WeP4O-07.8 | Add to My Program |
Classification of Prostate Cancer: High Grade versus Low Grade Using a Radiomics Approach |
Castillo, Jose Manuel | Erasmus Medical Center |
Starmans, Martijn | Erasmus Medical Center |
Niessen, Wiro | Erasmus MC, University Medical Center Rotterdam |
Schoots, Ivo | Erasmus Medical Center |
Klein, Stefan | Erasmus MC |
Veenland, Jifke F. | Erasmus MC - University Medical Center Rotterdam |
Keywords: Prostate, Classification, MRI
Abstract: Prostate cancer (PCa) is currently the second leading cause of cancer-related death in men. Systematic biopsies are the standard of care for PCa diagnosis. However, biopsies are invasive and prone to sampling errors. With magnetic resonance imaging (MRI), the whole prostate tissue can be visualized non-invasively. In this study we evaluate a radiomics approach to classify suspected lesions into high-grade and low-grade PCa. The data comprised MRI, histology of radical prostatectomy specimens and pathology reports of 40 patients. Histology and MRI were correlated, yielding 72 lesions. Features were extracted to train a Support Vector Machine classifier. Our experiments were performed in a fully automated framework, using 100x random-split cross-validation with extensive algorithm selection and hyperparameter optimization on the training set in each cross-validation. Our method achieved an AUC of 0.77 [0.66-0.87], a sensitivity of 0.74 [0.57-0.91] and a specificity of 0.66 [0.50-0.82], demonstrating the potential of radiomics to classify PCa lesions based on MRI.
|
|
WeP4O-08 Poster Session, Foyer |
Add to My Program |
Reconstruction and Image Quality II (Abstracts) |
|
|
|
10:30-11:30, Paper WeP4O-08.1 | Add to My Program |
Journal Paper: Deep Learning Based Reconstruction Method for Sparse-View CT |
Zhang, Zhicheng | Virginia Polytechnic Institute and State University |
Dong, Xu | Virginia Polytechnic Institute and State University |
Vekhande, Swapnil | Virginia Tech |
Cao, Guohua | Virginia Polytechnic Institute and State University |
|
|
10:30-11:30, Paper WeP4O-08.2 | Add to My Program |
Target-Based CBCT Reconstruction from Optimized Projections Obtained Over Arbitrary Orientation for C-Arm
Hatamikia, Sepideh | Austrian Center for Medical Innovation and Technology, Wiener Neu |
Biguri, Ander | Institute of Sound and Vibration Research, University of Southam |
Furtado, Hugo | University Clinic for Radiotherapy and Radiation Biology, Medica |
Kronreif, Gernot | Austrian Center for Medical Innovation and Technology, Wiener Neu |
Kettenbach, Joachim | Department of Diagnostic and Interventional Radiology and Nuclea |
Birkfellner, Wolfgang | Center for Medical Physics and Biomedical Engineering, Medical U |
Keywords: Image-guided treatment, Image reconstruction - analytical & iterative methods, Computed tomography (CT)
Abstract: Three-dimensional cone beam CT (CBCT) has become a widespread routine clinical imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT during intra-operative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern, and it is desirable for both health care providers and patients to decrease the radiation dose required for these interventional images. It is therefore desirable to find optimized source-detector trajectories with a reduced number of projections, which could lead to dose reduction. To this end, we developed a box phantom containing several small polytetrafluoroethylene target spheres at regular distances throughout the phantom; each sphere serves as a target inside a particular region of interest. We investigate source-detector trajectories with optimal arbitrary orientations so as to maximize the performance of the reconstructed image in particular regions of interest. We use the 3D point spread function (PSF) as a measure of reconstructed image performance, and measure the spatial variance in terms of the full width at half maximum (FWHM) of the local PSFs, each related to a particular target. We use a CT scan of the phantom as prior knowledge and as the digital phantom in our simulations to find the optimal trajectory for specific targets; based on this simulation phase, we obtain the optimal trajectories, which can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results on both simulated and real data show that our proposed optimization scheme has the capacity to find an optimized trajectory with a minimal number of projections in order to localize the targets. We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets, may minimize radiation dose, and has potential for low-dose CBCT-based interventions.
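The FWHM of a sampled local PSF, the image-quality measure used above, can be computed by locating the half-maximum crossings with linear interpolation; a small self-contained sketch on a synthetic Gaussian PSF:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a sampled 1D point spread
    function, interpolating linearly at the half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i_a, i_b):
        # `profile` crosses `half` somewhere between samples i_a and i_b
        ya, yb = profile[i_a], profile[i_b]
        return i_a + (half - ya) / (yb - ya)

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return (hi - lo) * spacing

# Gaussian PSF sampled every 0.05 (arbitrary units);
# analytic FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma.
x = np.linspace(-10, 10, 401)
sigma = 1.5
width = fwhm(np.exp(-x**2 / (2 * sigma**2)), spacing=0.05)
```

Applied at each target sphere, the set of local FWHM values gives exactly the kind of per-region spatial-resolution score a trajectory optimizer can minimize.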
|
|
10:30-11:30, Paper WeP4O-08.3 | Add to My Program |
Accelerated Spiral Chemical Shift Imaging for Proton Density and T2* Fat-Water Quantification |
Karkouri, Jabrane | Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, F69621 |
Millioz, Fabien | Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, F69621 |
Troalen, Thomas | Siemens Healthineers |
Prost, Remy | CREATIS-LRMN |
Viallon, Magalie | Université de Lyon, CREATIS ; CNRS UMR5220 ; Inserm U1044 ; INSA-Lyon ; Université Lyon 1, France |
Ratiney, Helene | Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, F69621 |
|
|
10:30-11:30, Paper WeP4O-08.4 | Add to My Program |
Camera-Based In-Bore Actigraphy and Visual Feedback for Motion Artifact Reduction in MRI
Krueger, Sascha | Philips Research |
Mazurkewitz, Peter | Philips Research |
Stehning, Christian | Philips Healthcare |
Sénégas, Julien | Philips Research |
|
|
10:30-11:30, Paper WeP4O-08.5 | Add to My Program |
Generalising Deep Learning MRI Reconstruction across Different Domains |
Ouyang, Cheng | Imperial College London |
Schlemper, Jo | Imperial College London |
Biffi, Carlo | Imperial College London |
Seegoolam, Gavin | Imperial College London |
Caballero, Jose | Imperial College London |
Price, Anthony N. | King's College London |
Hajnal, Joseph V. | King's College London |
Rueckert, Daniel | Imperial College London |
Keywords: Magnetic resonance imaging (MRI), Image reconstruction - analytical & iterative methods, Machine learning
Abstract: We investigate the robustness of deep-learning-based MRI reconstruction when tested on unseen contrasts and organs. We then propose to generalise the network by training on large, publicly available natural-image datasets with synthesised phase information, achieving cross-domain reconstruction performance competitive with domain-specific training. To explain the generalisation mechanism, we also analyse patch sets for the different training datasets.
|
|
10:30-11:30, Paper WeP4O-08.6 | Add to My Program |
An Iterative Delay-And-Sum-Based Reconstruction Algorithm for Breast Microwave Radar Imaging |
Reimer, Tyson | University of Manitoba |
Solis-Nepote, Mario | Research Institute in Oncology and Hematology, Winnipeg, MB. |
Pistorius, Stephen | University of Manitoba |
|
|
10:30-11:30, Paper WeP4O-08.7 | Add to My Program |
Advancing Analysis Techniques for Plantar Pressure Videos Via the CAD WALK Open-Access Database |
Booth, Brian G. | University of Antwerp |
Keijsers, Noel | Sint Maartenskliniek |
Huysmans, Toon | University of Antwerp |
Sijbers, Jan | University of Antwerp |
Keywords: Image acquisition, Validation, Other-modality
Abstract: While dynamic plantar pressure measurements are commonly used for clinical evaluation of gait-related problems, computational analysis techniques for these datasets are few and far between. To address this issue, we introduce an open-access database of plantar pressure videos for researchers to develop algorithms around.
|
|
10:30-11:30, Paper WeP4O-08.8 | Add to My Program |
Generation of Quasi-Linear-Array Ultrasound Images Using Cycle Generative Adversarial Networks |
Sun, Xiaofei | The University of Hong Kong |
Lee, Wei-Ning | The University of Hong Kong |
Keywords: Ultrasound, Machine learning, Image quality assessment
Abstract: In ultrasound imaging, different types of array probes produce B-mode images with different spatial characteristics. A linear-array probe generally produces higher and more spatially uniform image quality than a curved-array probe for the same region of interest (ROI) at the same imaging depth. This study therefore employs cycle generative adversarial networks (CycleGAN) [1] to translate a curved-array image into a quasi-linear-array image. The CycleGAN model can be trained on linear- and curved-array images without paired training data or pixel-wise correspondence. The model was tested on a commercial ultrasound phantom. Our results show that a CycleGAN trained on unpaired linear- and curved-array images could translate curved-array images into images comparable to the linear-array ones.
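The key property of CycleGAN exploited here is that it needs no paired data: the cycle-consistency term only requires that mapping an image to the other domain and back recovers the original. A minimal sketch of that loss (illustrative only; `G`, `F`, and `lam` are stand-ins, not the authors' implementation):

```python
import numpy as np

def cycle_consistency_loss(x_curved, x_linear, G, F, lam=10.0):
    """L1 cycle-consistency term of CycleGAN.
    G: curved -> quasi-linear translator, F: linear -> curved translator.
    F(G(x)) should recover x, and G(F(y)) should recover y, so no
    pixel-wise correspondence between the two domains is needed."""
    loss_fwd = np.abs(F(G(x_curved)) - x_curved).mean()
    loss_bwd = np.abs(G(F(x_linear)) - x_linear).mean()
    return lam * (loss_fwd + loss_bwd)

# Toy check: identity "generators" incur zero cycle loss
rng = np.random.default_rng(0)
x_c = rng.random((8, 8))
x_l = rng.random((8, 8))
```

In the full model this term is added to the two adversarial losses; the weight `lam` balances fidelity to the source image against realism in the target domain.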
|
|
10:30-11:30, Paper WeP4O-08.9 | Add to My Program |
Segmentation of Brachial Plexus in Ultrasound Images Based on Modified U-Net |
Mnacko, Tomas | Faculty of Informatics and Information Technologies, Slovak Tech |
Tamajka, Martin | Faculty of Informatics and Information Technologies, Slovak Univ |
Keywords: Ultrasound, Machine learning, Image segmentation
Abstract: Segmentation in two-dimensional ultrasound images is no easy task for human radiologists, mostly due to the presence of noise. Among various neural network architectures, U-Net combined with an inception module provides the best results for ultrasound nerve segmentation. In our work we propose a simplified Inception U-Net architecture, achieving a Dice score of 0.77 on a set of images that contain the brachial plexus.
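The Dice score reported above is the standard overlap metric for binary segmentation masks. A minimal reference computation (not the authors' code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2*|P intersect T| / (|P| + |T|), with eps guarding empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap; 0.77, as reported here, indicates substantial but imperfect agreement with the expert mask.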
|
|
WeP4O-10 Poster Session, Foyer |
Add to My Program |
Bioimaging II (Abstracts) |
|
|
|
10:30-11:30, Paper WeP4O-10.1 | Add to My Program |
Semi-Supervised Deep Learning for Super-Resolution Microscopy |
Kim, Jeongsol | Korea Advanced Institute of Science and Technology |
Lim, Sungjun | Texas A&M University |
Ye, Jong Chul | Korea Advanced Inst of Science & Tech |
Keywords: Microscopy - Super-resolution, Cells & molecules, Machine learning
Abstract: We propose an accurate and fast deep learning approach for super-resolution microscopy. To deal with the lack of ground truth, we use a convolutional neural network (CNN) architecture to learn an end-to-end mapping between raw fluorescence microscopy images and the corresponding super-resolution data generated by FALCON. The method achieves state-of-the-art reconstruction results compared with existing super-resolution microscopy algorithms.
|
|
10:30-11:30, Paper WeP4O-10.2 | Add to My Program |
Data Augmentation Using Image Analogies for Adipocyte Image Segmentation |
Akazawa, Hideki | Osaka University |
Watanabe, Seiryo | Osaka University |
Shigeta, Hironori | Osaka University |
Mashita, Tomohiro | Osaka University |
Goto, Tsuyoshi | Kyoto University |
Kawada, Teruo | Kyoto University |
Seno, Shigeto | Osaka University |
Matsuda, Hideo | Osaka University |
Keywords: Image segmentation, Machine learning, Quantification and estimation
Abstract: Cell segmentation is a fundamental task for quantitative phenotyping, and supervised learning methods based on convolutional neural networks are widely used for it. However, it is often difficult to prepare a sufficient amount of correctly labeled data. In this research, we propose a data augmentation method using Image Analogies. The effectiveness of the proposed method is shown by experiments comparing it with general data augmentation methods.
|
|
10:30-11:30, Paper WeP4O-10.3 | Add to My Program |
Neuronal Structure Segmentation in Serial Electron Microscopy Images Using Semi-Supervised Learning Framework |
Takaya, Eichi | Keio University |
Takeichi, Yusuke | Kobe University |
Ozaki, Mamiko | Kobe University |
Kurihara, Satoshi | Keio University |
Keywords: Image segmentation, Machine learning
Abstract: The research field of connectomics aims to investigate the structure and connections of the neural system in the brain and sensory organs of living things. Earlier studies have proposed methods to help experts with the labeling of electron microscopy (EM) images for three-dimensional reconstruction, an important process for observing tiny neuronal structures in detail. However, most existing methods are based on supervised learning, which needs a large amount of labeled data, whereas the number of labeled EM images is limited. To tackle this problem, we propose a semi-supervised learning method that performs pseudo-labeling, making it possible to automatically segment neuronal regions using only a small amount of labeled data. We experimented with the dataset of the ISBI 2012 EM Segmentation Challenge and showed that our method outperforms ordinary supervised learning when few labeled samples are available, although the accuracy is not yet sufficient. We also applied our method to another dataset, demonstrating the efficiency of the proposed method.
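The pseudo-labeling step at the heart of such semi-supervised schemes is simple: predictions of the current model on unlabeled data are kept as labels when they are confident enough, then added to the training pool. A minimal sketch (the threshold and function name are illustrative, not taken from the paper):

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Select confident predictions on unlabeled samples as pseudo-labels.
    probs: (N, C) class probabilities from the current model.
    Returns indices of samples whose top class probability meets the
    threshold, and the corresponding hard labels; these samples are then
    added to the labeled pool for the next training round."""
    probs = np.asarray(probs, dtype=float)
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# Toy example: only the 1st and 3rd predictions are confident enough
probs = np.array([[0.99, 0.01],
                  [0.60, 0.40],
                  [0.02, 0.98]])
keep, labels = pseudo_label(probs)
```

The threshold trades off pseudo-label quantity against quality; too low a value propagates model errors into the training set.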
|
|
10:30-11:30, Paper WeP4O-10.4 | Add to My Program |
Deep-Shift Phase Contrast Cell Detection and Tracking |
Debeir, Olivier | Université Libre De Bruxelles |
Almasri, Feras | Université Libre De Bruxelles |
Decaestecker, Christine | Université Libre De Bruxelles |
Keywords: Machine learning, In-vivo cellular and molecular imaging, Tracking (time series analysis)
Abstract: Deep regression can provide a robust kernel for tracking cells in phase contrast images. Current tracking approaches can be divided into tracking by model (e.g. mean shift) and tracking by combining segmentation and assignment. The mean shift algorithm is a kernel-based iterative method that detects the local mode of a distribution. When used on 2D images it provides simple blob tracking, which is used for in vitro fluorescence cell tracking. The mean shift algorithm has been extended to phase contrast imaging with more complex kernels. We propose using deep regression to build an optimal 'deep-shift' kernel for cell detection and tracking in phase contrast images. A total of 261 cell centroids distributed over 5 frames (extracted from a 540-frame sequence) were tagged by a human expert. Data augmentation produces 160k samples that train a deep regression network to predict the optimal kernel. For any 64x64 image patch, the kernel predicts the best (x, y) shift vector toward the closest cell centroid. We show that the resulting model efficiently detects cells in unseen phase contrast images, with a detection rate of 98.75%. The same model can be used for tracking cells in image sequences.
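The mean-shift-style iteration implied by the abstract can be sketched as follows: starting from a position, repeatedly apply the predicted shift toward the closest centroid until the update becomes small. The `predict_shift` callable below is a toy stand-in for the trained 64x64-patch regression network; the convergence tolerance is an assumption.

```python
import numpy as np

def track(position, predict_shift, max_iter=20, tol=0.5):
    """Iterate the predicted (x, y) shift, mean-shift style, until the
    update falls below tol or max_iter is reached."""
    p = np.asarray(position, dtype=float)
    for _ in range(max_iter):
        shift = np.asarray(predict_shift(p), dtype=float)
        p = p + shift
        if np.hypot(shift[0], shift[1]) < tol:
            break
    return p

# Toy "kernel": move halfway toward a fixed centroid at each step
centroid = np.array([30.0, 40.0])
p_final = track([0.0, 0.0], lambda p: 0.5 * (centroid - p))
```

With a learned kernel, the same loop follows each cell from frame to frame by re-running the iteration from the previous position.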
|
|
WeP4O-11 Poster Session, Foyer |
Add to My Program |
Optical Image Analysis (Abstracts) |
|
|
|
10:30-11:30, Paper WeP4O-11.1 | Add to My Program |
The Design of an Ingredient-Based Food Calorie Estimation System |
Turmchokkasam, Sirichai | Bangkok University |
Chamnongthai, Kosin | King Mongkut's University of Technology Thonburi |
Keywords: Thermal imaging, Computational Imaging, Pattern recognition and classification
Abstract: This paper proposes a method of ingredient-based food calorie estimation using nutrition knowledge and thermal information. In this method, an image of the food is first recognized as a type of food, and the ingredients of the recognized food are retrieved from a database together with their nutrition information and their patterns in brightness and thermal images. Simultaneously, the image is segmented into boundaries of ingredient candidates, and all boundaries are then classified into ingredients using fuzzy logic based on their heat patterns and intensities. Total calories are finally calculated from the classified ingredients of all boundaries based on area ratio and nutrition knowledge.
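The final step, combining classified ingredient regions into a calorie total weighted by area ratio, can be sketched as follows. All names and the calorie table are illustrative assumptions, not values from the paper:

```python
def total_calories(ingredients, areas, calorie_table):
    """Estimate total calories: each classified ingredient contributes
    its per-serving calories weighted by the fraction of the food area
    its region occupies (areas in pixels, calorie_table in kcal)."""
    total_area = float(sum(areas.values()))
    return sum(calorie_table[name] * areas[name] / total_area
               for name in ingredients)

# Hypothetical example: a dish segmented into two ingredient regions
estimate = total_calories(
    ingredients=["rice", "chicken"],
    areas={"rice": 60, "chicken": 40},          # pixel counts
    calorie_table={"rice": 200, "chicken": 300}  # kcal per serving
)
```

In the actual system the per-ingredient values come from the nutrition database and the areas from the fuzzy-logic boundary classification.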
|
|
10:30-11:30, Paper WeP4O-11.2 | Add to My Program |
Video-Based Discomfort Monitoring for Premature Infants in the NICU
Sun, Yue | Eindhoven University of Technology |
Kommers, Deedee | Maxima Medical Center, Veldhoven; Eindhoven University of Techno |
Wang, Wenjin | Philips Research |
Joshi, Rohan | Philips Research |
Shan, Caifeng | Philips Research |
Tan, Tao | Eindhoven University of Technology |
Aarts, Ronald M. | Philips |
van Pul, Carola | Maxima Medical Center |
Andriessen, Peter | Maxima Medical Center |
de With, Peter | Eindhoven University of Technology |
Keywords: Classification, Computer-aided detection and diagnosis (CAD), Tracking (time series analysis)
Abstract: Frequent pain and discomfort in premature infants can lead to abnormal development, yielding long-term adverse neurodevelopmental outcomes. Video-based monitoring is considered a promising contactless method for identifying discomfort moments. In this study, we propose a video-based method for automated detection of infant discomfort based on analyzing infant motion behavior. Motion trajectories are estimated from frame to frame using optical flow. For each video segment, we further calculate the motion acceleration rate and extract 18 time- and frequency-domain features characterizing motion patterns. A support vector machine (SVM) classifier is then applied to the video sequences to recognize whether the infant is comfortable or in discomfort. The method is evaluated using 183 video segments from 11 infants during 17 heel prick events. Experimental results show an AUC of 0.94 for discomfort detection when combining all proposed features, which is promising for clinical use.
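The feature-extraction step can be illustrated with a small subset of time- and frequency-domain descriptors of a motion acceleration signal (the paper extracts 18 such features; the ones below are generic examples, not the authors' exact set):

```python
import numpy as np

def motion_features(accel, fs=30.0):
    """A few time- and frequency-domain descriptors of a per-segment
    motion acceleration signal, sampled at fs Hz."""
    accel = np.asarray(accel, dtype=float)
    spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return {
        "mean": float(accel.mean()),            # time domain
        "std": float(accel.std()),
        "energy": float((accel ** 2).sum()),
        "dominant_freq": float(freqs[spectrum.argmax()]),  # frequency domain
    }

# Example: a pure 2 Hz oscillation sampled at 30 Hz for 3 seconds
t = np.arange(0, 3, 1.0 / 30)
feats = motion_features(np.sin(2 * np.pi * 2 * t), fs=30.0)
```

Vectors of such features, one per video segment, are what the SVM classifier consumes.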
|
|
10:30-11:30, Paper WeP4O-11.3 | Add to My Program |
Polar Transformer Network for Glaucoma Screening |
Rosario, Sean | Petuum, Inc |
Dong, Nanqing | Petuum Inc |
Wang, Zeya | Petuum, Inc |
Liu, Zewei | Petuum, Inc |
Xing, Eric | Petuum, Inc |
Keywords: Computational Imaging, Eye, Retinal imaging
Abstract: Glaucoma is a disease that causes irrecoverable blindness, and early screening enables better treatment options for patients. In this work, we utilize the polar transformer network (PTN), an end-to-end learning framework, for glaucoma screening and achieve promising results for clinical diagnosis. The work can be further extended to other biomedical image analyses where the object of interest is circular-like.
|
|
10:30-11:30, Paper WeP4O-11.4 | Add to My Program |
Ensemble of Convolutional Neural Networks for Glaucoma Detection from Color Fundus Images Using Transfer Learning |
Gómez Valverde, Juan José | Universidad Politécnica De Madrid |
Anton, Alfonso | Universitat Internacional De Catalunya |
Santos, Andres | Universidad Politecnica Madrid |
Ledesma-Carbayo, Maria J. | Universidad Politécnica De Madrid |
Keywords: Computer-aided detection and diagnosis (CAD), Eye, Retinal imaging
Abstract: Glaucoma detection in color fundus images is a challenging task that requires expertise and years of practice. In this study we propose a convolutional neural network (CNN) scheme that achieved an AUC of 0.9761 on a dataset of 2313 images, indicating that this solution can be a valuable option for the design of a computer-aided system for the detection of glaucoma in large-scale screening programs.
|
|
10:30-11:30, Paper WeP4O-11.5 | Add to My Program |
Longitudinal Registration of Infrared Thermal Images Based on 3D Scanning Surfaces for Skin and Soft Tissue Infection
Shen, I-Ting | National Taiwan University |
Pan, Sung-Ching | National Taiwan University Hospital |
Wang, Hong-Siang | National Taiwan University |
Chen, Chung-Ming | National Taiwan University |
Keywords: Infrared imaging, Image registration
Abstract: Longitudinal follow-up of the treatment outcome of skin and soft tissue infection (SSTI) is usually difficult and subjective due to the lack of a quantitative index. Recently, we have shown that infrared (IR) thermal imaging has great potential to longitudinally quantify the treatment response of SSTI. Nevertheless, longitudinal quantification of the same region of interest is intrinsically a hard problem because, in general, no anatomical or external landmarks are available. To make IR imaging feasible for assessing SSTI treatment outcomes, this paper proposes a longitudinal registration approach that uses 3D scanning surfaces as the transformation media.
|
|
10:30-11:30, Paper WeP4O-11.6 | Add to My Program |
Retinal Lesions Segmentation Using CNNs and Adversarial Training |
Gullon, Natalia | Universitat Politecnica De Catalunya |
Vilaplana, Veronica | Universitat Politecnica De Catalunya |
Keywords: Retinal imaging, Image segmentation, Computer-aided detection and diagnosis (CAD)
Abstract: Diabetic retinopathy (DR) is an eye disease associated with diabetes mellitus that affects retinal blood vessels. Early detection is crucial to prevent vision loss. The most common method for detecting the disease is the analysis of digital fundus images, which show lesions of small vessels and functional abnormalities. Manual detection and segmentation of lesions is a time-consuming task requiring proficient skills. Automatic methods for retinal image analysis could help ophthalmologists in large-scale screening programs of population with diabetes mellitus allowing cost-effective and accurate diagnosis. In this work we propose a fully convolutional neural network with adversarial training to automatically segment DR lesions in funduscopy images.
|
|
10:30-11:30, Paper WeP4O-11.7 | Add to My Program |
Quantitative Imaging Enables At-Home Assessment of Infants’ Flat Head Syndrome from Head Photographs |
Aalamifar, Fereshteh | PediaMetrix |
Hezaveh, Seyed Hossein | PediaMetrix |
Keating, Robert | Children's National Health System |
Linguraru, Marius George | Children's National Health System |
|
|
10:30-11:30, Paper WeP4O-11.9 | Add to My Program |
An Evaluation of Segmentation and Classification Strategies on Melanomas |
Velez Nunez, Paulina | University of Seville |
Serrano, Carmen | Universidad De Sevilla |
Acha, Begoña | Universidad De Sevilla |
Keywords: Computer-aided detection and diagnosis (CAD)
Abstract: According to the AIM at Melanoma Foundation, melanoma represents 1% of all cancers but has the highest mortality rate among skin cancers. Early melanoma detection is therefore crucial for increasing the rate of survival. To this end, a classification method is proposed. First, a pre-segmentation of the lesion is performed to improve the classification. Then two typical strategies based on convolutional neural networks (CNN) are analyzed and compared for classifying melanoma versus non-melanoma. The segmentation, based on SegNet, attained Dice and Jaccard coefficients of 0.7530 and 0.6350. The best classification result was an AUC of 0.7978, achieved with a VGG-16 with 4096-100-2 neurons in the last three layers. The databases used for training were ISIC 2017 and ISIC 2018.
|
|
WeS41 Special Session, Venetian Ballroom A |
Add to My Program |
Pediatric Brain Imaging |
|
|
Chair: Coulon, Olivier | Aix-Marseille University |
Co-Chair: Rajagopalan, Vidya | UCSF |
Organizer: Coulon, Olivier | Aix-Marseille University |
Organizer: Auzias, Guillaume | Aix Marseille Univ, CNRS |
Organizer: Rajagopalan, Vidya | UCSF |
|
11:30-11:45, Paper WeS41.1 | Add to My Program |
Unraveling the Preterm Infants Brain Structure and Function (I) |
Hüppi, Petra | Geneva University Hospitals |
Keywords: fMRI analysis, Connectivity analysis, Computational Imaging
Abstract: The foundations of our human-specific skills (cognition, language, social and emotional competence) are established during early brain development. How early life events such as prematurity shape the brain and its ultimate functioning has remained a topic of high interest, and the development of the MRI scanner as well as computational image processing tools have revolutionized the way the brains of newborns and children are studied [1]. This opens the question of whether structural maturational delay has functional consequences and whether it might be influenced by specific interventions, e.g. by multisensory stimuli such as music. Primary sensory cortical responses have been defined by task-based fMRI in newborns for visual, auditory and, recently, olfactory stimuli [2,3]. Complex auditory function, examined using psychophysiological interaction (PPI) analysis of fMRI data after a music intervention in preterm infants, showed full functional cortical processing of original and tempo-modified music [4]. Development of neural networks in the perinatal period is highly dependent on the multisensory activity driving maturation of neuronal circuits. In a recent resting-state fMRI study, we characterized a circuitry of interest consisting of three network modules interconnected by the salience network that displays reduced network coupling in preterm compared to full-term newborns. Interestingly, preterm infants exposed to music have significantly increased coupling between these brain networks. MRI's different modalities have, in an unprecedented way, allowed pediatricians and scientists interested in early development a window into the developing brain, revealing its early structural complexity as well as its early functional competence.
References: 1. Gui L, Loukas S, Lazeyras F, Huppi PS, Meskaldji DE, Borradori Tolsa C. Longitudinal study of neonatal brain tissue volumes in preterm infants and their ability to predict neurodevelopmental outcome. Neuroimage 2019; 185: 728-41. 2. Seghier ML, Huppi PS. The role of functional magnetic resonance imaging in the study of brain development, injury, and recovery in the newborn. Semin Perinatol 2010; 34(1): 79-86. 3. Adam-Darque A, Grouiller F, Vasung L, et al. fMRI-bas
|
|
11:45-12:00, Paper WeS41.2 | Add to My Program |
Automated Neonatal Diffusion Mri Data Processing to Study White Matter Development (I) |
Bastiani, Matteo | University of Nottingham |
Keywords: Diffusion weighted imaging, Brain, Tractography
Abstract: Diffusion MRI is a powerful technique for probing brain connections and microstructure in vivo and non-invasively. However, given the significant structural changes that occur in the neonatal brain, it is challenging to build a standardised atlas of white matter connections. In this work, we present a fully automated processing pipeline and a quality control framework that allow in vivo neonatal data to be analysed efficiently. Extensions to the proposed framework will allow the automatic extraction of reliable structural connectivity fingerprints and neurophenotypes that, when linked to genetics and behaviour, will improve our understanding of how structural development influences each individual.
|
|
12:00-12:15, Paper WeS41.3 | Add to My Program |
Imaging the Early Development of the Brain Cortex in Infants (I) |
Dubois, Jessica | INSERM-CEA I2BM Neurospin |
Lefevre, Julien | Institut De Neurosciences De La Timone |
Leroy, François | INSERM-CEA I2BM Neurospin |
Germanaud, David | APHP, Hôpital Robert Debré |
Lebenberg, Jessica | CEA I2BM Neurospin |
Poupon, Cyril | CEA I2BM NeuroSpin |
Dehaene-Lambertz, Ghislaine | CEA I2BM Neurospin |
Hertz-Pannier, Lucie | Neurospin, CEA |
Benders, Manon J N L | University Medical Center Utrecht |
Hüppi, Petra | Geneva University Hospitals |
Mangin, Jean-François | CEA I2BM NeuroSpin |
Keywords: Brain, Magnetic resonance imaging (MRI), Quantification and estimation
Abstract: Studying how the baby’s brain develops is essential to understand the human cognitive specificities and explore the complexity of neurodevelopmental disorders. Non-invasive neuroimaging approaches such as magnetic resonance imaging (MRI) are used to relate the structural and functional development of the brain in vivo. Nevertheless, imaging infants leads to several constraints in data acquisition and post-processing, because of issues related to motion artefacts, brain size and image contrast. Analyzing MRI data (e.g. anatomical, diffusion or relaxometry images) thus requires implementing dedicated methodologies to provide accurate and relevant information on the developing brain networks. Among the complex and intermingled mechanisms that take place during the pre-term and early post-term periods, this talk will focus on the early development of brain cortex in preterm newborns and infants. Discussing current challenges, we will describe analyses showing the progression of cortical folding and the changes in microstructure with age.
|
|
12:15-12:30, Paper WeS41.4 | Add to My Program |
Modelling Structural and Functional Brain Development in Utero (I) |
Langs, Georg | Medical University Vienna |
Schwartz, Ernst | Medical University Vienna |
Licandro, Roxane | Medical University of Vienna |
Taymourtash, Athena | Medical University of Vienna |
Sobotka, Daniel | Medical University of Vienna |
Kasprian, Gregor | Medical University of Vienna |
Jakab, Andras | University Children's Hospital Zurich |
Prayer, Daniela | Medical University of Vienna |
|
|
WeS42 Oral Session, Venetian Ballroom B |
Add to My Program |
Lung Image Analysis |
|
|
Chair: Angelini, Elsa | Imperial NIHR BRC, Imperial College London |
Co-Chair: Reinhardt, Joseph M. | The University of Iowa |
|
11:30-11:45, Paper WeS42.1 | Add to My Program |
Class-Aware Adversarial Lung Nodule Synthesis in CT Images |
Yang, Jie | Columbia University |
Liu, Siqi | Siemens Healthineers |
Grbic, Sasa | TU Munich/ Siemens Corporate Research |
Setio, Arnaud Arindra Adiyoso | Siemens Healthineers |
Xu, Zhoubing | Vanderbilt University |
Gibson, Eli | University College London |
Chabin, Guillaume | Siemens Healthineers |
Georgescu, Bogdan | Siemens Corporation, Corporate Technology |
Laine, Andrew F. | Columbia University |
Comaniciu, Dorin | Siemens Corporate Research |
Keywords: Lung, Image synthesis, Machine learning
Abstract: Though large-scale datasets are essential for training deep learning systems, it is expensive to scale up the collection of medical imaging datasets. Synthesizing objects of interest, such as lung nodules, in medical images based on the distribution of annotated datasets can help improve supervised learning tasks, especially when the datasets are limited in size and label balance. In this paper, we propose a class-aware adversarial synthesis framework to synthesize lung nodules in CT images. The framework is built with a coarse-to-fine patch in-painter (generator) and two class-aware discriminators. By conditioning on random latent variables and the target nodule labels, the trained networks are able to generate diverse nodules given the same context. Evaluating on the public LIDC-IDRI dataset, we demonstrate an example application of the proposed framework: improving the accuracy of lung nodule malignancy estimation as a binary classification problem, which is important in the lung screening scenario. We show that combining real image patches and synthetic lung nodules in the training set can improve the mean AUC classification score across different network architectures by 2%.
|
|
11:45-12:00, Paper WeS42.2 | Add to My Program |
Automated Segmentation of Pulmonary Lobes Using Coordination-Guided Deep Neural Networks |
Wang, Wenjia | Peking University |
Chen, Junxuan | Alibaba Group |
Zhao, Jie | Peking University |
Chi, Ying | Alibaba Group |
Xie, Xuansong | Alibaba Cloud |
Zhang, Li | Peking University |
Hua, Xian-Sheng | Alibaba Group |
Keywords: Image segmentation, Lung, Computed tomography (CT)
Abstract: The identification of pulmonary lobes is of great importance in disease diagnosis and treatment, as several lung diseases present regional disorders at the lobar level. Thus, an accurate segmentation of pulmonary lobes is necessary. In this work, we propose an automated segmentation of pulmonary lobes from chest CT images using coordination-guided deep neural networks. We first employ an automated lung segmentation to extract the lung area from the CT image, then exploit a volumetric convolutional neural network (V-net) for segmenting the pulmonary lobes. To reduce the misclassification of different lobes, we adopt coordination-guided convolutional layers (CoordConvs) that generate additional feature maps encoding the positional information of the pulmonary lobes. The proposed model is trained and evaluated on several publicly available datasets and achieves state-of-the-art accuracy with a mean Dice coefficient of 0.947 ± 0.044.
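The CoordConv idea the abstract relies on is simply to append normalised coordinate channels to a feature map so that subsequent convolutions can see absolute position, which matters because lobes occupy characteristic locations. A minimal 2D sketch (the V-net version would do the same in 3D; this is illustrative, not the authors' code):

```python
import numpy as np

def add_coord_channels(feat):
    """Append normalised y and x coordinate channels, each in [-1, 1],
    to a (C, H, W) feature map, as in CoordConv."""
    c, h, w = feat.shape
    ys = np.linspace(-1.0, 1.0, h)[:, None].repeat(w, axis=1)  # (H, W)
    xs = np.linspace(-1.0, 1.0, w)[None, :].repeat(h, axis=0)  # (H, W)
    return np.concatenate([feat, ys[None], xs[None]], axis=0)   # (C+2, H, W)

out = add_coord_channels(np.zeros((3, 4, 5)))
```

The two extra channels cost almost nothing but let a translation-invariant convolution learn position-dependent decisions, such as "this voxel is too inferior to belong to the upper lobe."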
|
|
12:00-12:15, Paper WeS42.3 | Add to My Program |
Abnormal Chest X-Ray Identification with Generative Adversarial One-Class Classifier |
Tang, Yuxing | National Institutes of Health |
Tang, Youbao | National Institutes of Health |
Han, Mei | Ping an Technology, US Research Labs |
Xiao, Jing | Ping an Technology Co., Ltd., |
Summers, Ronald | National Institutes of Health Clinical Center |
Keywords: Computer-aided detection and diagnosis (CAD), Classification, Machine learning
Abstract: Being one of the most common diagnostic imaging tests, chest radiography requires timely reporting of potential findings after image acquisition. In this paper, we propose an end-to-end architecture for abnormal chest X-ray identification using generative adversarial one-class learning. Unlike previous approaches, our method takes only normal chest X-ray images as input. The architecture is composed of three deep neural networks, each of which is learned by competing while collaborating with the others to model the underlying content structure of the normal chest X-rays. Given a chest X-ray image in the testing phase, if it is normal, the learned architecture can model and reconstruct its content well; if it is abnormal, since such content is unseen in the training phase, the model performs poorly in its reconstruction. This enables distinguishing abnormal chest X-rays from normal ones. Quantitative and qualitative experiments demonstrate the effectiveness and efficiency of our approach, where an AUC of 0.841 is achieved on the challenging NIH Chest X-ray dataset in a one-class learning setting, with the potential to reduce the workload for radiologists.
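The test-time decision rule described here reduces to scoring each image by reconstruction error under a model trained only on normal images, then thresholding. A minimal sketch of that scoring step (the `reconstruct` callable stands in for the trained generative model; threshold and function names are illustrative):

```python
import numpy as np

def anomaly_scores(images, reconstruct):
    """Per-image mean squared reconstruction error. A model trained only
    on normal chest X-rays reconstructs normal images well, so a high
    error flags a likely abnormal image."""
    return np.array([np.mean((img - reconstruct(img)) ** 2) for img in images])

def flag_abnormal(scores, threshold):
    """One-class decision: abnormal iff the score exceeds the threshold."""
    return scores > threshold

# Toy stand-in model that always reconstructs "zero": a low-intensity
# (well-modeled) image scores low, a high-intensity one scores high.
normal_like = np.full((4, 4), 0.1)
abnormal_like = np.full((4, 4), 1.0)
scores = anomaly_scores([normal_like, abnormal_like],
                        lambda im: np.zeros_like(im))
```

The threshold is chosen on a validation set; sweeping it over the scores is what produces the ROC curve behind the reported AUC of 0.841.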
|
|
12:15-12:30, Paper WeS42.4 | Add to My Program |
When Does Bone Suppression and Lung Field Segmentation Improve Chest X-Ray Disease Classification? |
Baltruschat, Ivo Matteo | University Medical Center Hamburg-Eppendorf |
Steinmeister, Leonhard | University Medical Center Hamburg-Eppendorf |
Ittrich, Harald | University Medical Center Hamburg-Eppendorf |
Adam, Gerhard | University Medical Center Hamburg-Eppendorf |
Nickisch, Hannes | Philips Research, Hamburg, Germany |
Saalbach, Axel | Philips GmbH, Innovative Technologies |
von Berg, Jens | Philips Research Hamburg |
Grass, Michael | Philips Research, Hamburg |
Knopp, Tobias | University Medical Center Hamburg-Eppendorf |
Keywords: X-ray imaging, Lung, Classification
Abstract: Chest radiography is the most common clinical examination type. To improve the quality of patient care and to reduce workload, methods for automatic pathology classification have been developed. In this contribution we investigate the usefulness of two advanced image pre-processing techniques, initially developed for image reading by radiologists, for the performance of deep learning methods. First, we use bone suppression, an algorithm that artificially removes the rib cage. Second, we employ an automatic lung field segmentation to crop the image to the lung area. Furthermore, we consider the combination of both in the context of an ensemble approach. In a five-fold re-sampling scheme, we use Receiver Operating Characteristic (ROC) statistics to evaluate the effect of the pre-processing approaches. Using a Convolutional Neural Network (CNN) optimized for X-ray analysis, we achieve good performance with respect to all pathologies on average. Superior results are obtained for selected pathologies when using pre-processing; e.g., for mass, the area under the ROC curve increased by 9.95%. The ensemble with pre-processed trained models yields the best overall results.
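The evaluation metric used throughout this abstract, the area under the ROC curve, has a compact rank-based definition that is easy to verify: it equals the probability that a randomly chosen positive scores higher than a randomly chosen negative. A minimal reference implementation (not the authors' code; in practice a library routine such as scikit-learn's would be used):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank identity:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counting half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

For the ensemble approach described above, per-model scores would be averaged per image before being passed to such an AUC computation.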
|
|
12:30-12:45, Paper WeS42.5 | Add to My Program |
Biphasic Model of Lung Deformations for Video-Assisted Thoracoscopic Surgery (VATS) |
Alvarez, Pablo | Université De Rennes 1, Laboratoire Traitement Du Signal Et De L |
Narasimhan, Saramati | Vanderbilt University |
Rouzé, Simon | Univ. Rennes 1, CHU, LTSI - UMR 1099 |
Dillenseger, Jean-Louis | Université De Rennes 1 |
Payan, Yohan | Laboratoire TIMC-IMAG |
Miga, Michael | Vanderbilt University |
Chabanas, Matthieu | Univ. Grenoble Alpes, Grenoble Institute of Technology |
Keywords: Image registration, Surgical guidance/navigation, Lung
Abstract: Intraoperative localization of small, low-density or deep lung nodules during Video-Assisted Thoracoscopic Surgery (VATS) is a challenging task. Localization techniques used in current practice require an additional preoperative procedure that adds complexity to the intervention and might lead to clinical complications. Therefore, clinical practice may benefit from alternative, intraoperative localization methods. We propose a nonrigid registration approach for nodule localization. Our method is based on a biomechanical model of the lung, where the lung parenchyma is represented as a biphasic medium. Preliminary results are promising, with median target registration errors reduced from 28.39 mm to 9.86 mm overall, and to 3.68 mm for the nodule in particular.
|
|
WeS43 Oral Session, Venetian Ballroom C |
Add to My Program |
Microscopy Reconstruction and Image Quality |
|
|
Chair: Walter, Thomas | Institut Curie, Mines ParisTech |
Co-Chair: Sheet, Debdoot | Indian Institute of Technology Kharagpur |
|
11:30-11:45, Paper WeS43.1 | Add to My Program |
Robust Super-Resolution GAN, with Manifold-Based and Perception Loss |
Upadhyay, Uddeshya | Indian Institute of Technology (IIT) Bombay |
Awate, Suyash P | Indian Institute of Technology (IIT), Bombay |
Keywords: Microscopy - Super-resolution, Machine learning, Probabilistic and statistical models & methods
Abstract: Super-resolution using deep neural networks typically relies on highly curated training sets that are often unavailable in clinical deployment scenarios. Typical methods use loss functions that assume Gaussian-distributed residuals, making the learning very sensitive to (even a small quantity of) corruptions inherent in clinical training sets. We propose novel loss functions that are robust to corruptions in training sets by modeling heavy-tailed non-Gaussian distributions on the residuals. We also propose a loss based on an autoencoder-based manifold distance between the super-resolved and high-resolution images, to reproduce realistic textural content in super-resolved images. Finally, we propose to learn to super-resolve images to match human perceptions of structure, luminance, and contrast. Results on a large clinical dataset show the advantages of each of our contributions, where our framework outperforms the state of the art quantitatively and qualitatively.
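To see why heavy-tailed losses are less sensitive to corrupted training samples, compare the squared-error loss with a Charbonnier penalty, one common heavy-tailed-robust choice (illustrative only; the paper's actual loss functions may differ):

```python
import numpy as np

def l2_loss(residual):
    # Squared-error loss: assumes Gaussian residuals, so a few large
    # (corrupted) residuals dominate the total.
    return np.sum(residual ** 2)

def charbonnier_loss(residual, eps=1e-3):
    # A heavy-tailed robust penalty: grows like |r| for large residuals,
    # so outliers contribute far less than under L2.
    return np.sum(np.sqrt(residual ** 2 + eps ** 2) - eps)

r = np.array([0.01, -0.02, 5.0])  # one gross corruption among small residuals
# Under L2 the outlier contributes 25.0 to the loss; under the
# Charbonnier penalty it contributes only ~5.0.
```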
|
|
11:45-12:00, Paper WeS43.2 | Add to My Program |
New Methods for L_2-L_0 Minimization and Their Applications to 2D Single-Molecule Localization Microscopy |
Bechensteen, Arne | Université Côte d'Azur, INRIA, Laboratoire I3S UMR 7271 |
Blanc-Feraud, Laure | Université Nice Sophia Antipolis, Laboratoire I3S, CNRS, INRIA |
Aubert, Gilles | Laboratoire J.A Dieudonne, UMR 6621 CNRS/UNSA, |
Keywords: Microscopy - Super-resolution, In-vivo cellular and molecular imaging, Inverse methods
Abstract: We present in this paper a biconvex reformulation of an ℓ2-ℓ0 problem composed of a least-squares data term plus a sparsity term introduced as a constraint or a penalization. Minimization algorithms are derived and compared with the state of the art in ℓ2-ℓ0 minimization by relaxation or deep learning. Application results are shown on Single-Molecule Localization Microscopy.
|
|
12:00-12:15, Paper WeS43.3 | Add to My Program |
Multi-Spectral Widefield Microscopy of the Beating Heart through Post-Acquisition Synchronization and Unmixing |
Jaques, Christian | Idiap Research Institute |
Bapst-Wicht, Linda | Institut De Recherche En Ophtalmologie |
Schorderet, Daniel Francis | Institut De Recherche En Ophtalmologie |
Liebling, Michael | Idiap Research Institute and UC Santa Barbara |
Keywords: Microscopy - Super-resolution, Computational Imaging, Inverse methods
Abstract: Multi-spectral imaging allows distinguishing biological structures. For cardiac microscopy, available devices are either too slow or require illumination intensities that are detrimental to the sample. We present a method for spectral super-resolution imaging of samples whose motion is quasi-periodic, by sequentially acquiring movies in wavelength ranges with filters of overlapping bands. Following an initial calibration procedure, we synchronize and unmix the movies to produce multi-spectral sequences. We characterized our approach by retrieving the transmittance of a colored microscopic target whose motion we controlled, observing measurements within 10% of those of a reference spectrometer. We further illustrate our approach by observing the beating embryonic zebrafish heart, demonstrating new possibilities for studying its development.
|
|
12:15-12:30, Paper WeS43.4 | Add to My Program |
A Deep Learning Approach to Identify mRNA Localization Patterns |
Dubois, Rémi | Mines ParisTech, PSL Research University, CBIO - Centre for Comp |
Imbert, Arthur | Mines ParisTech |
Samacoits, Aubin | (Pierre Et Marie Curie University, Pasteur Institute) |
Peter, Marion | Institut De Génétique Moléculaire De Montpellier, University Of |
Bertrand, Edouard | Igmm - Cnrs Umr5535 |
Mueller, Florian | Institut Pasteur |
Walter, Thomas | Institut Curie, Mines ParisTech |
Keywords: High-content (high-throughput) screening, Classification, Microscopy - Light, Confocal, Fluorescence
Abstract: The localization of messenger RNA (mRNA) molecules inside cells plays an important role in the local control of gene expression. However, the localization patterns of many mRNAs remain unknown and poorly understood. Single Molecule Fluorescence in Situ Hybridization (smFISH) allows for the visualization of individual mRNA molecules in cells. This method is now scalable and can be applied in High Content Screening (HCS) mode. Here, we propose a computational workflow based on deep convolutional neural networks trained on simulated data to identify different localization patterns from large-scale smFISH data.
|
|
12:30-12:45, Paper WeS43.5 | Add to My Program |
Learning a Deep Convolution Network with Turing Test Adversaries for Microscopy Image Super Resolution |
Tom, Francis | Indian Institute of Technology Kharagpur |
Sharma, Himanshu | SigTuple Technologies Private Limited |
Mundhra, Dheeraj | SigTuple Technologies Private Limited |
Rai Dastidar, Tathagato | SigTuple Technologies Pvt Ltd |
Sheet, Debdoot | Indian Institute of Technology Kharagpur |
Keywords: Machine learning, Microscopy - Super-resolution, Histopathology imaging (e.g. whole slide imaging)
Abstract: Adversarially trained deep neural networks have significantly improved the performance of single image super resolution by hallucinating photorealistic local textures, thereby greatly reducing the perceptual difference between a real high resolution image and its super resolved (SR) counterpart. However, application to medical imaging requires preserving diagnostically relevant features while refraining from introducing any diagnostically confusing artifacts. We propose using a deep convolutional super resolution network (SRNet) trained for (i) minimising the reconstruction loss between the real and SR images, and (ii) maximally confusing learned relativistic visual Turing test (rVTT) networks that discriminate between (a) a pair of real and SR images (T1) and (b) a pair of patches selected from regions of interest in the real and SR images (T2). The adversarial losses of T1 and T2, backpropagated through the SRNet, help it learn to reconstruct pathorealism in regions of interest such as white blood cells (WBC) in peripheral blood smears or epithelial cells in histopathology of cancerous biopsy tissues, as demonstrated experimentally here. Experiments measuring signal distortion with peak signal to noise ratio (pSNR) and structural similarity (SSIM) across SR scale factors, the impact of the rVTT adversarial losses, and the impact on reporting when using SR on a commercially available artificial intelligence (AI) digital pathology system substantiate our claims.
|
|
WeS44 Oral Session, Venetian Ballroom DE |
Add to My Program |
Data Integration and Fusion |
|
|
Chair: Vercauteren, Tom | King's College London |
Co-Chair: Serrano, Carmen | Universidad De Sevillla |
|
11:30-11:45, Paper WeS44.1 | Add to My Program |
Off-The-Grid Model Based Deep Learning (O-MoDL) |
Pramanik, Aniket | University of Iowa |
Aggarwal, Hemant Kumar | University of Iowa |
Jacob, Mathews | University of Iowa |
Keywords: Machine learning, Image reconstruction - analytical & iterative methods
Abstract: We introduce a model-based off-the-grid image reconstruction algorithm using deep learned priors. The main difference between the proposed scheme and current deep learning strategies is the learning of non-linear annihilation relations in Fourier space. We rely on a model-based framework, which allows us to use a significantly smaller deep network, compared to direct approaches that also learn how to invert the forward model. Preliminary comparisons against the image-domain MoDL approach demonstrate the potential of the off-the-grid formulation. The main benefit of the proposed scheme compared to structured low-rank methods is the significant reduction in computational complexity.
|
|
11:45-12:00, Paper WeS44.2 | Add to My Program |
The Continuous Registration Challenge: Evaluation-As-A-Service for Medical Image Registration Algorithms |
Marstal, Kasper | Erasmus Medical Center |
Berendsen, Floris F. | Leiden Univerisity Medical Center |
Dekker, Niels | Leiden University Medical Center |
Staring, Marius | LUMC |
Klein, Stefan | Erasmus MC |
Keywords: Image registration, Magnetic resonance imaging (MRI), Computed tomography (CT)
Abstract: We have developed an open source, collaborative platform for researchers to develop, compare, and improve medical image registration algorithms. The platform handles data management, unit testing, and benchmarking of registration methods in a fully automatic fashion. In this paper we describe the platform and present the Continuous Registration Challenge. The challenge focuses on registration of lung CT and brain MR images and includes eight publicly available data sets. The platform is made available to the community as an open source project and can be used for organization of future challenges.
|
|
12:00-12:15, Paper WeS44.3 | Add to My Program |
Network Regularization in Imaging Genetics Improves Prediction Performances and Model Interpretability on Alzheimer's Disease |
Guigui, Nicolas | Neurospin, CEA - Université Paris Saclay |
Philippe, Cathy | CEA, Universite Paris-Saclay |
Gloaguen, Arnaud | CentraleSupélec - Neurospin, CEA - Université Paris-Saclay |
Karkar, Slim | CEA - Universite Paris-Saclay |
Guillemot, Vincent | Institut Pasteur |
Löfstedt, Tommy | Department of Radiation Sciences, Umeå University, Umeå, Sweden |
Frouin, Vincent | UNATI, Neurospin, CEA, Universite Paris-Saclay |
Keywords: Integration of multiscale information, Genes, Graphical models & methods
Abstract: Imaging genetics is an increasingly popular research avenue that aims to find genetic variants associated with quantitative phenotypes that characterize a disease. In this work, we combine structural MRI with genetic data, structured by prior knowledge of interactions, in a Canonical Correlation Analysis (CCA) model with graph regularization. This results in improved prediction performance and yields a more interpretable model.
|
|
12:15-12:30, Paper WeS44.4 | Add to My Program |
A Multi-Stage Framework with Context Information Fusion Structure for Skin Lesion Segmentation |
Tang, Yujiao | Southern Medical University |
Yang, Feng | Southern Medical University |
Yuan, Shaofeng | Southern Medical University |
Zhan, Chang'an | Southern Medical University |
Keywords: Machine learning, Image segmentation, Skin
Abstract: Computer-aided diagnosis (CAD) systems can greatly improve the reliability and efficiency of melanoma recognition. As a crucial step in CAD, skin lesion segmentation achieves unsatisfactory accuracy with existing methods due to the large variability in lesion appearance and artifacts. In this work, we propose a framework employing multi-stage UNets (MS-UNet) in the auto-context scheme to segment skin lesions accurately end-to-end. We apply two approaches to boost the performance of MS-UNet. First, UNet is coupled with a context information fusion structure (CIFS) to integrate low-level and context information in the multi-scale feature space. Second, to alleviate the gradient vanishing problem, we use a deep supervision mechanism, supervising MS-UNet by minimizing a weighted Jaccard distance loss function. Three out of five commonly used performance metrics, including the Jaccard index and Dice coefficient, show that our approach outperforms state-of-the-art deep learning based methods on the ISBI 2016 Skin Lesion Challenge dataset.
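The weighted Jaccard distance loss builds on the soft (differentiable) Jaccard distance used widely in segmentation. A minimal numpy sketch of the unweighted quantity (the per-stage weighting used for deep supervision in the paper is omitted):

```python
import numpy as np

def soft_jaccard_distance(pred, target, eps=1e-7):
    """Soft Jaccard distance 1 - |A∩B| / |A∪B| for probabilistic
    predictions and binary targets; differentiable, so it can serve
    directly as a segmentation loss. eps avoids division by zero."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)
```

A perfect prediction yields a distance near 0, and fully disjoint masks yield a distance near 1.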
|
|
12:30-12:45, Paper WeS44.5 | Add to My Program |
Semi-Supervised Learning with Structured Knowledge for Body Hair Detection in Photoacoustic Image |
Kikkawa, Ryo | Kyushu University |
Sekiguchi, Hiroyuki | Kyoto University |
Tsuge, Itaru | Kyoto University |
Saito, Susumu | Kyoto University |
Bise, Ryoma | Kyushu University |
Keywords: Machine learning, Classification
Abstract: Photoacoustic (PA) imaging is a promising new imaging technology for non-invasively visualizing blood vessels inside biological tissues. In addition to blood vessels, body hairs are also visualized in PA imaging, and the body hair signals degrade the visibility of blood vessels. For learning a body hair classifier, the amount of real training and test data is limited, because PA imaging is a new modality. To address this problem, we propose a novel semi-supervised learning (SSL) method for extracting body hairs. The method effectively learns a discriminative model from a small labeled training set and a small unlabeled test set by introducing prior knowledge of the orientation similarity among adjacent body hairs into SSL. Experimental results on real PA data demonstrate that the proposed approach is effective for extracting body hairs compared with several baseline methods.
|
|
WeS51 Special Session, Venetian Ballroom A |
Add to My Program |
Global Health: Imaging in Developing Countries |
|
|
Chair: Zuluaga, Maria | Universidad Nacional De Colombia |
Co-Chair: Khanal, Bishesh | King's College London |
Organizer: Zuluaga, Maria A. | EURECOM |
Organizer: Khanal, Bishesh | King's College London |
|
14:30-15:00, Paper WeS51.1 | Add to My Program |
Challenges in Diagnostic Radiology Services in Rural Areas: Can Medical Imaging Informatics and Machine Learning Provide Solutions? (I) |
Pant, Bhaskar | HAMS |
Keywords: Computed tomography (CT), X-ray imaging, Machine learning
Abstract: Diagnostic Radiology Services (DRS) are important in primary health care, obstetrics, and surgery, and are slowly expanding in the world's poorest regions. Running DRS in rural areas of LMICs faces several challenges. X-rays and Ultrasound (US) are the predominant services that are practical to install in rural settings. These modalities mostly do not provide 3D imaging. Moreover, US image quality, and consequently proper diagnosis, depends on the availability of an expert operator, who is often scarcer than the US machine itself. We describe the context and challenges involved in running DRS in one of the most rural areas of Nepal and present examples where medical imaging informatics with machine learning (MICML) could have a huge impact.
|
|
15:00-15:30, Paper WeS51.2 | Add to My Program |
Tele-Pathology: A Use Case in Colombia (I) |
Alvarez, Charlems | Universidad Nacional De Colombia |
Corredor, Germán | Universidad Nacional De Colombia |
Giraldo, Diana Lorena | Universidad Nacional De Colombia |
Romero, Eduardo | Universidad Nacional De Colombia |
Keywords: Histopathology imaging (e.g. whole slide imaging), Tissue, Computational Imaging
Abstract: Colombia is a middle-income country with a growing incidence of cancer. There is an acute shortage of trained pathologists in Colombia: only about 500 general pathologists serve a total population of 44 million people. This lack of pathology expertise has resulted in misdiagnosis of the disease. Such misinterpretation can lead patients to undergo unnecessary surgery and other treatments, or to miss out on treatments they do need. There is thus a clear need for more dedicated cancer expertise and technology to alleviate the disease burden caused by misdiagnosis. In this scenario, tele-pathology, the use of information technology and slide digitization (or digital pathology) to render remote diagnoses, may be a solution to improve access to cancer pathology expertise and to address its concentration in large cities. This article presents a tele-pathology project in the context of a developing country and the solutions adapted to overcome the lack of resources.
|
|
15:30-16:00, Paper WeS51.3 | Add to My Program |
Automatic Detection and Diagnosis of Sacroiliitis in CT Scans As Incidental Findings (I) |
Shenkman, Yigal | The Hebrew University of Jerusalem |
Qutteineh, Bilal | Hadassah Hebrew Univ. Medical Center, Jerusalem |
Joskowicz, Leo | The Hebrew University of Jerusalem |
Azraq, Yusef | Hadassah Hebrew Univ. Medical Center, Jerusalem |
Szeskin, Adi | The Hebrew University of Jerusalem |
Mayer, Arnaldo | Sheba Medical Center |
Eshed, Iris | Sheba Medical Center |
Keywords: Computed tomography (CT), Bone, Computer-aided detection and diagnosis (CAD)
Abstract: Early diagnosis of sacroiliitis may lead to preventive treatment which can significantly improve the patient's quality of life in the long run. Oftentimes, a CT scan of the lower back is acquired for suspected back pain. However, since the differences between a healthy and an inflamed sacroiliac joint are subtle in the early stages, the condition may be missed. We have developed a new automatic algorithm for the diagnosis and grading of sacroiliitis in CT scans as an incidental finding, based on supervised machine learning and deep learning techniques. Experimental results on 242 cases yield binary and 3-class case classification accuracies of 92% and 81%, sensitivities of 95% and 82%, and Areas-Under-the-Curve of 0.97 and 0.57, respectively. Automatic computer-based analysis of CT scans has the potential to be a useful method for the diagnosis and grading of sacroiliitis as an incidental finding.
|
|
WeS52 Oral Session, Venetian Ballroom B |
Add to My Program |
Eye Image Analysis |
|
|
Chair: Burlina, Philippe | Johns Hopkins University |
Co-Chair: Paviotti, Anna | University of Padova |
|
14:30-14:45, Paper WeS52.1 | Add to My Program |
Pixel Reconstruction for Speckle Reduction in 3D Optical Coherence Tomography of Retina |
Cheng, Jun | Institute of Biomedical Engineering, Chinese Academy of Sciences |
Zhao, Yitian | Chinese Academy of Sciences |
Hu, Yan | Chinese Academy of Sciences |
Liu, Jiang | Ningbo Institute of Materials Technology and Engineering, CAS |
Keywords: Image enhancement/restoration(noise and artifact reduction), Optical coherence tomography
Abstract: Speckle noise reduction in optical coherence tomography (OCT) is important for better visualization and analysis in retinal imaging. This paper proposes a novel pixel reconstruction based method to reduce speckle noise in 3D OCT of the retina. In the proposed method, each pixel is estimated as the sum of a noise-free part and a noise part, which are solved for by axial-scan based alignment and a low-rank matrix completion algorithm. Evaluated on a data set of 20 volumes, the results show that the proposed method effectively reduces the noise. The technique is beneficial and can be used in OCT machines.
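Low-rank matrix completion solvers typically rely on singular-value soft-thresholding as their core step. The sketch below shows that step only, on synthetic data standing in for the "noise-free plus noise" decomposition; the paper's full algorithm, including axial-scan alignment, is not reproduced here:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: shrink the singular values of M
    by tau, promoting a low-rank (noise-free) estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # shrink, zeroing small (noise) directions
    return (U * s) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 signal
noisy = low_rank + 0.1 * rng.standard_normal((20, 20))                  # speckle-like noise
denoised = svt(noisy, tau=1.0)
```

Each thresholding step strictly reduces the nuclear norm, which is why iterating it drives the estimate toward a low-rank solution.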
|
|
14:45-15:00, Paper WeS52.2 | Add to My Program |
Motion Compensation in Digital Holography for Retinal Imaging |
Rivet, Julie | ESPCI Paris |
Tochon, Guillaume | LRDE |
Meimon, Serge | ONERA |
Pâques, Michel | CIC 503, Centre Hospitalier National des Quinze-Vingts |
Géraud, Thierry | EPITA |
Atlan, Michael | Institut Langevin |
Keywords: Motion compensation and analysis, Holography, Optical coherence tomography
Abstract: The measurement of medical images can be hindered by blur and distortions caused by physiological motion. Retinal imaging in particular is greatly affected by sharp movements of the eye. Stabilization methods have been developed and applied to state-of-the-art retinal imaging modalities; here we intend to adapt them to coherent light detection schemes. In this paper, we demonstrate experimentally cross-correlation-based lateral and axial motion compensation in laser Doppler imaging and in optical coherence tomography by digital holography. Our methods improve lateral and axial image resolution in these innovative instruments and allow better visualization in the presence of motion.
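Cross-correlation-based motion compensation estimates the displacement between frames from the peak of their cross-correlation. A minimal FFT-based sketch for pure integer translations (our own illustration; real retinal data also needs sub-pixel refinement and windowing):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the translation of `moving` relative to `ref` from the
    peak of their circular cross-correlation, computed via FFTs."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, cc.shape))

img = np.zeros((32, 32))
img[10:14, 8:12] = 1.0                       # a bright feature
shifted = np.roll(img, (3, -2), axis=(0, 1))  # simulate eye motion
```

The estimated shift can then be applied in reverse to re-align the frame before averaging, which is the stabilization idea the abstract describes.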
|
|
15:00-15:15, Paper WeS52.3 | Add to My Program |
Learning to Segment Corneal Tissue Interfaces in OCT Images |
Mathai, Tejas Sudharshan | Carnegie Mellon University |
Lathrop, Kira | University of Pittsburgh |
Galeotti, John | Carnegie Mellon University |
Keywords: Optical coherence tomography, Eye, Image segmentation
Abstract: Accurate and repeatable delineation of corneal tissue interfaces is necessary for surgical planning during anterior segment interventions, such as Keratoplasty. Designing an approach to identify interfaces, which generalizes to datasets acquired from different Optical Coherence Tomographic (OCT) scanners, is paramount. In this paper, we present a Convolutional Neural Network (CNN) based framework called CorNet that can accurately segment three corneal interfaces across datasets obtained with different scan settings from different OCT scanners. Extensive validation of the approach was conducted across all imaged datasets. To the best of our knowledge, this is the first deep learning based approach to segment both anterior and posterior corneal tissue interfaces. Our errors are 2x lower than non-proprietary state-of-the-art corneal tissue interface segmentation algorithms, which include image analysis-based and deep learning approaches.
|
|
15:15-15:30, Paper WeS52.4 | Add to My Program |
Topology-Preserving Shape-Based Regression of Retinal Layers in OCT Image Data Using Convolutional Neural Networks |
Kepp, Timo | Universität Zu Lübeck |
Ehrhardt, Jan | Universität Zu Lübeck |
Heinrich, Mattias | University of Lübeck, Germany |
Hüttmann, Gereon | Universität Zu Lübeck |
Handels, Heinz | University of Lübeck |
Keywords: Eye, Optical coherence tomography, Machine learning
Abstract: Optical coherence tomography (OCT) is a non-invasive imaging modality that provides cross-sectional 3D images of biological tissue. In ophthalmology especially, OCT is used for the diagnosis of various eye diseases. Automatic retinal layer segmentation algorithms, which are increasingly based on deep learning techniques, can support diagnostics. However, topological properties, such as the order of the retinal layers, are often not considered. In our work, we present an automatic segmentation approach based on shape regression using convolutional neural networks (CNNs). Here, shapes are represented by signed distance maps (SDMs) that assign to each pixel its distance to the nearest object contour. Thus, spatial regularization is introduced and plausible segmentations can be produced. Our method is evaluated on a public OCT dataset and compared with two classification-based approaches. The results show that our method has fewer outliers with comparable segmentation performance. In addition, it has improved topology preservation, which reduces the need for further post-processing.
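A signed distance map assigns each pixel its distance to the nearest object contour, conventionally negative inside the object and positive outside. A toy 1-D version of that representation (2-D retinal layers would use a Euclidean distance transform; the sketch assumes both regions are non-empty):

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance for a 1-D binary mask: for inside pixels, the
    (negated) distance to the nearest background pixel; for outside
    pixels, the distance to the nearest foreground pixel."""
    idx = np.arange(len(mask))
    fg = np.flatnonzero(mask)
    bg = np.flatnonzero(~mask.astype(bool))
    d_to_fg = np.abs(idx[:, None] - fg[None, :]).min(axis=1)
    d_to_bg = np.abs(idx[:, None] - bg[None, :]).min(axis=1)
    return np.where(mask.astype(bool), -d_to_bg, d_to_fg)
```

Regressing such maps instead of per-pixel class labels is what introduces the spatial regularization the abstract mentions: neighboring values of an SDM can only change gradually.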
|
|
15:30-15:45, Paper WeS52.5 | Add to My Program |
U2-Net: A Bayesian U-Net Model with Epistemic Uncertainty Feedback for Photoreceptor Layer Segmentation in Pathological OCT Scans |
Orlando, José Ignacio | Medical University of Vienna |
Seeböck, Philipp | Medical University of Vienna |
Bogunovic, Hrvoje | Department of Ophthalmology, Medical University of Vienna |
Klimscha, Sophie | Department of Ophthalmology, Medical University of Vienna |
Grechenig, Christoph | Department of Ophthalmology, Medical University of Vienna |
Waldstein, Sebastian | Department of Ophthalmology, Medical University of Vienna |
Gerendas, Bianca S. | Department of Ophthalmology, Medical University of Vienna |
Schmidt-Erfurth, Ursula | Department of Ophthalmology, Medical University of Vienna |
Keywords: Retinal imaging, Eye, Image segmentation
Abstract: In this paper, we introduce a Bayesian deep learning based model for segmenting the photoreceptor layer in pathological OCT scans. Our architecture provides accurate segmentations of the photoreceptor layer and produces pixel-wise epistemic uncertainty maps that highlight potential areas of pathologies or segmentation errors. We empirically evaluated this approach in two sets of pathological OCT scans of patients with age-related macular degeneration, retinal vein occlusion and diabetic macular edema, improving the performance of the baseline U-Net both in terms of the Dice index and the area under the precision/recall curve. We also observed that the uncertainty estimates were inversely correlated with the model performance, underlining their utility for highlighting areas where manual inspection/correction might be needed.
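Pixel-wise epistemic uncertainty maps of this kind are commonly obtained by Monte Carlo sampling, e.g. keeping dropout active at test time and taking the per-pixel variance over several stochastic forward passes. A generic sketch of that recipe (the paper's exact network and sampling scheme are not reproduced; `stochastic_forward` stands in for one dropout-enabled pass):

```python
import numpy as np

def mc_uncertainty(stochastic_forward, x, T=20):
    """Run T stochastic forward passes and return the per-pixel mean
    prediction and variance; the variance serves as an epistemic
    uncertainty map."""
    preds = np.stack([stochastic_forward(x) for _ in range(T)])
    return preds.mean(axis=0), preds.var(axis=0)
```

A deterministic model yields zero variance everywhere; disagreement between sampled passes shows up as high variance, flagging pixels for manual review.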
|
|
WeS53 Oral Session, Venetian Ballroom C |
Add to My Program |
Xray/ CT Imaging and Reconstruction |
|
|
Chair: Fessler, Jeff | Univ. Michigan |
Co-Chair: Goksel, Orcun | ETH Zurich |
|
14:30-14:45, Paper WeS53.1 | Add to My Program |
Sparse-View CT Reconstruction Via Convolutional Sparse Coding |
Bao, Peng | Sichuan University |
Xia, Wenjun | Sichuan University |
Yang, Kang | College of Computer Science, Sichuan University, China |
Zhou, Jiliu | University |
Zhang, Yi | Sichuan University |
Keywords: Computed tomography (CT), Image reconstruction - analytical & iterative methods, Compressive sensing & sampling
Abstract: Traditional dictionary learning based CT reconstruction methods are patch-based, and the features learned with these methods often contain shifted versions of the same features. To deal with these problems, convolutional sparse coding (CSC) has been proposed and introduced into various applications. In this paper, inspired by the successful applications of CSC in signal processing, we propose a novel sparse-view CT reconstruction method based on CSC with gradient regularization on the feature maps. By working directly on the whole image, without dividing it into overlapping patches as dictionary learning based methods do, the proposed method maintains more detail and avoids the artifacts caused by patch aggregation. Experimental results demonstrate that the proposed method has better performance than several existing algorithms in both qualitative and quantitative aspects.
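Sparse coding problems of this family are typically solved by iterative shrinkage-thresholding. The dense ISTA sketch below shows the basic iteration only; the paper's convolutional variant replaces the dictionary product with sums of convolutions over the whole image and adds gradient regularization on the feature maps:

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """ISTA for min_x 0.5*||D x - y||^2 + lam*||x||_1:
    alternate a gradient step on the data term with soft-thresholding
    (the proximal operator of the l1 penalty)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L with L the Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - step * D.T @ (D @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```

With `D` the identity, ISTA reduces to plain soft-thresholding of `y`, which makes the shrinkage behavior easy to check.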
|
|
14:45-15:00, Paper WeS53.2 | Add to My Program |
A New Approach for Microcalcification Enhancement in Digital Breast Tomosynthesis Reconstruction |
Sghaier, Maissa | CVN, CentraleSupélec, Inria, Univ. Paris Saclay, France |
Chouzenoux, Emilie | Ligm - Cnrs |
Palma, Giovanni | GE Healthcare, Buc, France |
Pesquet, Jean-Christophe | CentraleSupélec, INRIA Saclay, University Paris Saclay |
Muller, Serge | GE Healthcare |
Keywords: Image reconstruction - analytical & iterative methods, Inverse methods, Breast
Abstract: We propose a novel approach aimed at improving the detectability of microcalcifications in Digital Breast Tomosynthesis (DBT) volumes. Our contribution is twofold. First, we formulate the clinical task through a detectability function based on an approach inspired by mathematical model observers. Second, we integrate this newly developed clinical-task term into a cost function which is minimized for 3D reconstruction of DBT volumes. Experimental results on both phantom and real clinical data show that the proposed clinical term significantly improves the visibility of microcalcifications, while preserving an overall high quality of the fully reconstructed volume.
|
|
15:00-15:15, Paper WeS53.3 | Add to My Program |
A Convolutional Framework for Forward and Back-Projection in Fan-Beam Geometry |
Zhang, Kai | University of Florida |
Entezari, Alireza | University of Florida |
Keywords: X-ray imaging, Computed tomography (CT), Computational Imaging
Abstract: We present an approach for highly efficient and accurate computation of the forward model for image reconstruction in fan-beam geometry in X-ray computed tomography. The efficiency of the computations makes this approach suitable for large-scale optimization algorithms with on-the-fly, memoryless computations of the forward and back-projection. Our experiments demonstrate the improvements in accuracy as well as efficiency of our model, specifically for first-order box splines (i.e., pixel-basis), compared to recently developed methods for this purpose, namely Look-up Table-based Ray Integration (LTRI) and Separable Footprints (SF) in 2-D.
|
|
15:15-15:30, Paper WeS53.4 | Add to My Program |
Spectral CT Reconstruction Via Self-Similarity in Image-Spectral Tensors |
Xia, Wenjun | Sichuan University |
Wu, Weiwen | Chongqing University |
Liu, Fenglin | Chongqing University |
Yu, Hengyong | University of Massachusetts Lowell |
Zhou, Jiliu | University |
Wang, Ge | Rensselaer Polytechnic Institute |
Zhang, Yi | Sichuan University |
Keywords: Image reconstruction - analytical & iterative methods, Computed tomography (CT), Compressive sensing & sampling
Abstract: Spectral computed tomography (CT) reconstructs multi-energy images from data in different energy bins. These reconstructed images can be contaminated by noise due to the limited numbers of photons in the corresponding energy bins. In this paper, we propose a spectral CT reconstruction method aided by self-similarity in image-spectral tensors (ASSIST), which utilizes the self-similarity of patches in both spatial and spectral domains. Patches with similar structures identified by a joint spatial and spectral searching strategy form a basic tensor unit, and can be utilized to improve image quality. Specifically, each tensor is decomposed into a low-rank component and a sparse component, which respectively represent the stable structures and feature differences across different energy bins. The experimental results demonstrate that the proposed method outperforms several representative state-of-the-art algorithms.
|
|
15:30-15:45, Paper WeS53.5 | Add to My Program |
Joint Bi-Modal Image Reconstruction of DOT and XCT with an Extended Mumford-Shah Functional |
He, Di | Beijing Information Science & Technology University |
Jiang, Ming | School of Mathematics School of Mathematics, Peking |
Louis, Alfred K. | Institute of Applied Mathematics, Saarland University |
Maass, Peter | Center for Industrial Mathematics, University of Bremen |
Page, Thomas | Daimler Trucks |
Keywords: Multi-modality fusion, Image reconstruction - analytical & iterative methods
Abstract: Feature similarity measures are indispensable in multi-modality medical imaging, as they enable joint multi-modal image reconstruction (JmmIR) by communicating feature information among different modalities. In this work, we establish an image similarity measure in terms of image edges, drawing on Tversky's theory of feature similarity in psychology. This image similarity measure will not force nonexistent structures to be reconstructed when applied to joint bi-modal image reconstruction (JbmIR). It is applied to the JbmIR of diffuse optical tomography (DOT) and x-ray computed tomography (XCT). The performance is evaluated on two numerical phantoms. The proposed method improves the reconstructed image quality by more than 10% in terms of SSIM compared to single modality image reconstruction (SmIR).
|
|
WeS54 Oral Session, Venetian Ballroom DE |
Add to My Program |
Connectivity Analysis - DWI/DTI |
|
|
Chair: Dubois, Jessica | INSERM-CEA I2BM Neurospin |
Co-Chair: Guevara, Pamela | Universidad De Concepción |
|
14:30-14:45, Paper WeS54.1 | Add to My Program |
Asymmetric Fiber Trajectory Distribution Estimated Using Streamline Differential Equation |
He, Jianzhong | Zhejiang University of Technology |
Feng, Yuanjing | Zhejiang University of Technology |
Li, Mao | Zhejiang University of Technology |
Keywords: Tractography, Brain, Diffusion weighted imaging
Abstract: Fiber orientation distribution (FOD) estimation with diffusion magnetic resonance imaging (dMRI) is critical in white matter fiber tractography, which is most commonly implemented by tracking the principal direction of the FOD step by step. Ambiguous spatial correspondences between estimated diffusion directions and fiber geometry, such as crossing, fanning or bending, make tractography challenging. As a consequence, many tracts suggest entangled connections in unexpected regions of the white matter or stop prematurely within it. In this work, we propose a novel fiber distribution function (FDF) defined on neighboring voxels, based on the streamline differential equation from fluid kinematics. At a local level, the FDF is a series of curve flows that minimize an energy function characterizing the relations between fibers and the joint fiber fragments within the same bundle. Experiments were performed on phantom and in vivo brain dMRI data for qualitative and quantitative evaluation. The results demonstrate that our approach can reveal continuous fiber geometry details, which is promising for robust tractography.
|
|
14:45-15:00, Paper WeS54.2 | Add to My Program |
Comparison of Different Tensor Encoding Combinations in Microstructural Parameter Estimation |
Afzali, Maryam | Cardiff University Brain Research Imaging Center |
Tax, Chantal | Cardiff University Brain Research Imaging Center |
Chatziantoniou, Cyrano | Cardiff University Brain Research Imaging Center |
Jones, Derek | Cardiff University Brain Research Imaging Center |
Keywords: Diffusion weighted imaging, Brain
Abstract: Diffusion-weighted magnetic resonance imaging is a noninvasive tool to investigate brain white matter microstructure and provides the information needed to estimate compartmental diffusion parameters. Several studies in the literature have shown that there is degeneracy in the parameters estimated using traditional linear diffusion encoding (Stejskal-Tanner pulsed gradient spin echo). Multiple strategies have been proposed to resolve this degeneracy; however, it is not clear whether those methods solve the problem completely. One such approach is b-tensor encoding, where the combination of linear-spherical and linear-planar encodings has been utilized in previous work to stabilize the estimates. In this paper, we compare the results of fitting a two-compartment model using different combinations of b-tensor encoding: linear-spherical, linear-planar, planar-spherical and linear-planar-spherical. The results show that the combination of three tensor encodings, linear-planar-spherical, leads to lower bias and higher precision in the parameter estimation.
|
|
15:00-15:15, Paper WeS54.3 | Add to My Program |
CoBundleMAP: Consistent 2D Parameterization of Fiber Bundles across Subjects and Hemispheres |
Khatami, Mohammad | University of Bonn |
Wehler, Regina | University of Bonn |
Schultz, Thomas | University of Bonn |
Keywords: Diffusion weighted imaging, Brain, Classification
Abstract: We present CoBundleMAP, a manifold-learning-based method for jointly parameterizing streamlines from diffusion MRI tractography. CoBundleMAP significantly improves on the previously proposed BundleMAP approach by establishing anatomical correspondences not only between different subjects but also between the left and right hemispheres, by introducing a two-dimensional parameterization, by focusing the analysis on a reliable core part of the bundle, and via a novel mechanism for feature extraction. We use CoBundleMAP to analyze hemispheric asymmetries, and demonstrate that it improves accuracy in a gender classification task.
|
|
15:15-15:30, Paper WeS54.4 | Add to My Program |
Cortical Surface Parcellation Based on Graph Representation of Short Fiber Bundle Connections |
Silva, Felipe | Universidad De Concepción |
Guevara, Miguel | University of Concepcion |
Poupon, Cyril | CEA I2BM NeuroSpin |
Mangin, Jean-François | CEA I2BM NeuroSpin |
Hernández, Cecilia | Universidad De Concepción |
Guevara, Pamela | Universidad De Concepción |
Keywords: Diffusion weighted imaging, Tractography, Connectivity analysis
Abstract: We propose a new automatic algorithm for the tractography-based parcellation of the cortical surface. The method is based on segmented bundles of the superficial white matter (SWM), calculated for each subject using a multi-subject SWM bundle atlas. The scheme uses a fast intersection algorithm to define the cortical regions connected by each bundle and then builds a graph representation modelling the overlap of these regions to derive the final parcellation. The algorithm was tested on one subject, and the resulting parameters were applied to 4 other subjects. Results show a good correspondence between subjects.
|
|
15:30-15:45, Paper WeS54.5 | Add to My Program |
Detecting State Changes in Community Structure of Functional Brain Networks Using a Markov-Switching Stochastic Block Model |
Samdin, S. Balqis | King Abdullah University of Science and Technology |
Ting, Chee-Ming | Universiti Teknologi Malaysia |
Ombao, Hernando | King Abdullah University of Science and Technology |
Keywords: Connectivity analysis, Probabilistic and statistical models & methods, Functional imaging (e.g. fMRI)
Abstract: Functional brain networks exhibit modular community structure, with highly interconnected nodes within the same module but sparse connections between different modules. Recent neuroimaging studies also suggest dynamic changes in brain connectivity over time. We propose a dynamic stochastic block model (SBM) to characterize changes in the community structure of brain networks inferred from neuroimaging data. We develop a Markov-switching SBM (MS-SBM), a non-stationary extension combining time-varying SBMs with a Markov process to allow for state-driven evolution of the network community structure. The time-varying connectivity parameters within and between communities are estimated from dynamic networks using a sliding-window approach, assuming a constant community membership of nodes recovered by spectral clustering. We then partition the time-evolving community structure into recurring, piecewise-constant regimes or states using a hidden Markov model. Simulations show that the proposed MS-SBM accurately tracks dynamic community regimes. Application to task-evoked fMRI data reveals dynamic reconfiguration of the brain network's modular structure in language processing between alternating blocks of story and math tasks.
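The sliding-window connectivity estimation step described in the abstract can be sketched in a few lines (a minimal illustration with invented toy data; the function name, window settings, and synthetic regimes are our own, and the SBM and HMM stages of the method are not shown):

```python
import numpy as np

def sliding_window_connectivity(ts, win, step):
    """Estimate time-varying connectivity from a multivariate time series.

    ts   : (T, N) array, T time points, N nodes (e.g. brain regions)
    win  : window length in samples
    step : stride between successive windows
    Returns an array of shape (num_windows, N, N) of correlation matrices.
    """
    T, N = ts.shape
    mats = []
    for start in range(0, T - win + 1, step):
        window = ts[start:start + win]           # (win, N) segment
        mats.append(np.corrcoef(window, rowvar=False))
    return np.stack(mats)

# Toy example: two alternating connectivity regimes over 3 nodes.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 3))
ts[:100, 1] += ts[:100, 0]                       # nodes 0-1 coupled early on
dyn = sliding_window_connectivity(ts, win=50, step=25)
print(dyn.shape)  # (7, 3, 3)
```

Early windows show a strong node 0 to node 1 correlation, late windows near zero; a change-point model such as the paper's HMM would then segment these regimes.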
|
|
WeS61 Oral Session, Venetian Ballroom A |
Add to My Program |
Shape Modeling and Analysis |
|
|
Chair: Gerig, Guido | NYU Tandon School of Engineering |
Co-Chair: Syeda-Mahmood, Tanveer | IBM Almaden Research Center |
|
16:30-16:45, Paper WeS61.1 | Add to My Program |
Acceleration Controlled Diffeomorphisms for Nonparametric Image Regression |
Fishbaugh, James | NYU Tandon School of Engineering |
Gerig, Guido | NYU Tandon School of Engineering |
Keywords: Modeling - Anatomical, physiological and pathological, Shape analysis, Image registration
Abstract: The analysis of medical image time-series is becoming increasingly important as longitudinal imaging studies are maturing and large scale open imaging databases are becoming available. Image regression is widely used for several purposes: as a statistical representation for hypothesis testing, to bring clinical scores and images not acquired at the same time into temporal correspondence, or as a consistency filter to enforce temporal correlation. Geodesic image regression is the most prominent method, but the geodesic constraint limits the flexibility and therefore the application of the model, particularly when the observation time window is large or the anatomical changes are non-monotonic. In this paper, we propose to parameterize diffeomorphic flow by acceleration rather than velocity, as in the geodesic model. This results in a nonparametric image regression model which is completely flexible to capture complex change trajectories, while still constrained to be diffeomorphic and with a guarantee of temporal smoothness. We demonstrate the application of our model on synthetic 2D images as well as real 3D images of the cardiac cycle.
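The paper's central change, replacing velocity control with acceleration control, can be written schematically (our notation, not taken from the paper): where geodesic regression constrains the velocity field driving the deformation, the proposed model lets the acceleration be the free control,

```latex
\dot{\phi}_t = v_t \circ \phi_t, \qquad \dot{v}_t = a_t ,
```

so that a smoothness penalty on $a_t$ replaces the geodesic (constant-momentum) constraint, while the flow $\phi_t$ remains diffeomorphic and temporally smooth and can follow non-monotonic trajectories.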
|
|
16:45-17:00, Paper WeS61.2 | Add to My Program |
GEMS - Geometric Median Shapes |
Cunha, Alexandre | California Institute of Technology |
Keywords: Shape analysis, Optimization method, Probabilistic and statistical models & methods
Abstract: We present an algorithm to compute the geometric median of shapes, based on the extension of the median to high dimensions. The median-finding problem is formulated as an optimization over distances and solved directly using the watershed method as an optimizer. We show that the geometric median shape faithfully represents the true central tendency of the data, contaminated or not. It is superior to the mean shape, which can be negatively affected by the presence of outliers. Our approach can be applied to manifold and non-manifold shapes, with single or multiple connected components. The use of the distance transform and the watershed algorithm, two well-established constructs of image processing, leads to an algorithm that can be quickly implemented to generate fast solutions with a linear storage requirement. We demonstrate our method on synthetic and natural shapes and compare median and mean results under increasing outlier contamination.
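The geometric median objective (minimize the sum of Euclidean distances) and its robustness to outliers can be illustrated with the classical Weiszfeld iteration. Note this is a generic point-set sketch of the concept only, not the authors' distance-transform/watershed optimizer for shapes:

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration for the geometric median: the point minimizing
    the sum of Euclidean distances to the inputs (robust to outliers,
    unlike the mean)."""
    x = points.mean(axis=0)                 # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, eps)              # guard against division by zero
        w = 1.0 / d
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [100.0, 100.0]])  # one outlier
print(pts.mean(axis=0))        # the mean is dragged toward the outlier
print(geometric_median(pts))   # the median stays near the three inliers
```

The same contrast between mean and median central tendency is what the abstract reports for shapes under outlier contamination.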
|
|
17:00-17:15, Paper WeS61.3 | Add to My Program |
Sensitivity Analysis of an in Silico Model of Tumor Growth and Radiation Response |
Sosa Marrero, Carlos | Université De Rennes 1 |
Acosta, Oscar | Univ. of Rennes 1 |
Castro, Miguel | Université De Rennes 1 |
Hernández, Alfredo I | Univ. of Rennes 1 and INSERM U1099 |
Rioux-Leclercq, Nathalie | Department of Pathological Anatomy and Cytology, CHU Pontchaillo |
Paris, François | Université De Nantes |
De Crevoisier, Renaud | INSERM, U1099, Rennes, F-35000, France - Université De Rennes 1, |
Keywords: Modeling - Anatomical, physiological and pathological, Radiation therapy, planning and treatment, Prostate
Abstract: Simulating the response to radiotherapy (RT) in cancer patients may help devise new therapeutic strategies. Computational models make it possible to cope with the multi-scale biological mechanisms characterising this group of diseases. We present in this paper an in silico model of tumour growth and radiation response, capable of simulating a whole RT protocol for prostate cancer. Oxygen diffusion, proliferation of tumour cells, angiogenesis based on VEGF diffusion, oxygen-dependent response to irradiation and resorption of dead cells were implemented in a multi-scale framework. A sensitivity analysis using the Morris screening method was performed on 21 computational tissues, initialised from prostate histopathological specimens presenting different tumour and vascular densities. The dose per fraction and the duration of the tumour cell cycle were identified as the most important parameters of the model.
|
|
17:15-17:30, Paper WeS61.4 | Add to My Program |
Hierarchical Representation for CT Prostate Segmentation |
Wang, Shuai | University of North Carolina at Chapel Hill |
He, Kelei | State Key Laboratory for Novel Software Technology, Nanjing Univ |
Nie, Dong | UNC |
Zhou, Sihang | University of North Carolina at Chapel Hill |
Gao, Yaozong | The University of North Carolina at Chapel Hill |
Shen, Dinggang | UNC-Chapel Hill |
Keywords: Prostate, Image segmentation, Computed tomography (CT)
Abstract: Traditional approaches for automatic CT prostate segmentation, a challenging task due to unclear boundaries and large shape variations, often guide feature representation learning directly from the manual delineation, which does not fully exploit the prior information and leads to insufficient discriminability. In this paper, we propose a novel hierarchical representation learning method to segment the prostate in CT images. Specifically, a multi-task model, supervised by a series of morphological masks transformed from the manual delineation, generates hierarchical feature representations for the prostate. Then, leveraging both these rich representations and the intensity images, a fully convolutional network (FCN) carries out the accurate segmentation of the prostate. To evaluate the performance, a large and challenging CT dataset is adopted, and the experimental results show our method achieves significant improvement compared with conventional FCNs.
|
|
17:30-17:45, Paper WeS61.5 | Add to My Program |
Multi-Modal Fusion Learning for Cervical Dysplasia Diagnosis |
Chen, Tingting | Zhejiang University |
Ma, Xinjun | Zhejiang University |
Ying, Xingde | Zhejiang University |
Wang, Wenzhe | Zhejiang University |
Yuan, Chunnv | Zhejiang University |
Lu, Weiguo | Zhejiang University |
Chen, Danny Z. | University of Notre Dame |
Wu, Jian | Zhejiang University |
Keywords: Multi-modality fusion, Cervix, Endoscopy
Abstract: Fusion of multi-modal information from a patient's screening tests can help improve the diagnostic accuracy of cervical dysplasia. In this paper, we present a novel multi-modal deep learning fusion network, called MultiFuseNet, for cervical dysplasia diagnosis, utilizing multi-modal data from cervical screening results. To exploit the relations among different image modalities, we propose an Attention Mutual-Enhance (AME) module to fuse features of each modality at the feature extraction stage. Specifically, we first develop the Fused Faster R-CNN with AME modules for automatic cervix region detection and fused image feature learning, and then incorporate non-image information into the learning model to jointly learn non-linear correlations among all the modalities. To effectively train the Fused Faster R-CNN, we employ an alternating training scheme. Experimental results show the effectiveness of our method, which achieves an average accuracy of 87.4% (88.6% sensitivity and 86.1% specificity) on a large dataset, outperforming the methods using any single modality alone and the known multi-modal methods.
|
|
WeS62 Oral Session, Venetian Ballroom B |
Add to My Program |
Segmentation and Tracking in Microscopy |
|
|
Chair: Kouamé, Denis | Université De Toulouse III, IRIT UMR CNRS 5505 |
Co-Chair: Achim, Alin | University of Bristol |
|
16:30-16:45, Paper WeS62.1 | Add to My Program |
Segmentation and Modelling of Hela Nuclear Envelope |
Karabag, Cefa | School of Mathematics, Computer Science and Engineering, City, U |
Jones, Martin L. | Electron Microscopy Science Technology Platform, the Francis Cri |
Peddie, Christopher J. | Electron Microscopy Science Technology Platform, the Francis Cri |
Weston, Anne E. | Electron Microscopy Science Technology Platform, the Francis Cri |
Collinson, Lucy M. | Electron Microscopy Science Technology Platform, the Francis Cri |
Reyes-Aldasoro, Constantino Carlos | City University London |
Keywords: Image segmentation, Microscopy - Electron, Cells & molecules
Abstract: This paper describes an algorithm to segment the 3D nuclear envelope of HeLa cancer cells from electron microscopy images and to model the volumetric shape of the nuclear envelope against an ellipsoid. The algorithm was trained on a single cell and then tested on six separate cells. To assess the algorithm, the Jaccard similarity index and Hausdorff distance against a manually delineated gold standard were calculated on two cells. For central slices, the segmentation achieved a mean Jaccard value of 97% and a Hausdorff distance of 98 pixels on the first cell, and 93% and 14 pixels on the second cell, outperforming segmentation with active contours. The modelling projects the 3D shape onto a 2D surface that summarises the complexity of the shape in an intuitive result. Measurements extracted from the modelled surface may be useful to correlate shape with biological characteristics. The algorithm is unsupervised, fully automatic, fast, and processes one image in less than 10 seconds. Code and data are freely available at https://github.com/reyesaldasoro/Hela-Cell-Segmentation and http://dx.doi.org/10.6019/EMPIAR-10094.
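The two evaluation metrics quoted above can be computed for binary masks as follows (a minimal 2D sketch; the function names and toy masks are ours, not from the paper's code):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def jaccard(a, b):
    """Jaccard similarity index between two binary masks: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the boundary pixels of two masks."""
    def boundary(m):
        m = m.astype(bool)
        return np.argwhere(m & ~binary_erosion(m))   # pixels lost by erosion
    pa, pb = boundary(a), boundary(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((20, 20), bool); a[5:15, 5:15] = True   # 10x10 square
b = np.zeros((20, 20), bool); b[5:15, 7:17] = True   # same square, shifted 2 px
print(jaccard(a, b))    # 80/120 ≈ 0.667
print(hausdorff(a, b))  # 2.0 (pure horizontal shift)
```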
|
|
16:45-17:00, Paper WeS62.2 | Add to My Program |
Automated Segmentation of Cervical Nuclei in Pap Smear Images Using Deformable Multi-Path Ensemble Model |
Zhao, Jie | Peking University |
Li, Quanzheng | Harvard Medical School, Massachusetts General Hospital |
Li, Hongfeng | Peking University |
Zhang, Li | Peking University |
Keywords: Image segmentation, Cervix, Microscopy - Light, Confocal, Fluorescence
Abstract: Pap smear testing has been widely used for detecting cervical cancers based on the morphological properties of cell nuclei in microscopic images. Accurate nuclei segmentation could thus improve the success rate of cervical cancer screening. In this work, a method for automated cervical nuclei segmentation using a Deformable Multipath Ensemble Model (D-MEM) is proposed. The approach adopts a U-shaped convolutional network as a backbone, in which dense blocks are used to transfer feature information more effectively. To increase the flexibility of the model, we then use deformable convolution to deal with the irregular shapes and sizes of nuclei. To reduce predictive bias, we further construct multiple networks with different settings, which form an ensemble model. The proposed segmentation framework has achieved state-of-the-art accuracy on the Herlev dataset, with a Zijdenbos similarity index (ZSI) of 0.933 ± 0.14, and has the potential to be extended to other medical image segmentation tasks.
|
|
17:00-17:15, Paper WeS62.3 | Add to My Program |
Domain Adaptive Segmentation in Volume Electron Microscopy Imaging |
Roels, Joris | Ghent University |
Hennies, Julian | European Molecular Biology Laboratory |
Saeys, Yvan | VIB - Ghent University |
Philips, Wilfried | Gent University |
Kreshuk, Anna | European Molecular Biology Laboratory (EMBL) |
Keywords: Microscopy - Electron, Image segmentation, Machine learning
Abstract: In recent years, automated segmentation has become a necessary tool for volume electron microscopy (EM) imaging. So far, the best-performing techniques have been largely based on fully supervised encoder-decoder CNNs, which require a substantial amount of annotated images. Domain adaptation (DA) aims to alleviate the annotation burden by 'adapting' networks trained on existing ground-truth data (the source domain) to work on a different (target) domain with as little additional annotation as possible. Most DA research focuses on the classification task, whereas volume EM segmentation remains rather unexplored. In this work, we extend recently proposed classification DA techniques to an encoder-decoder layout and propose a novel method that adds a reconstruction decoder to the classical encoder-decoder segmentation network in order to align source and target encoder features. The method was validated on the task of segmenting mitochondria in EM volumes. We performed DA from brain EM images to HeLa cells and from isotropic FIB/SEM volumes to anisotropic TEM volumes. In all cases, the proposed method outperformed the extended classification DA techniques and the fine-tuning baseline. An implementation of our work can be found at https://github.com/JorisRoels/domain-adaptive-segmentation.
|
|
17:15-17:30, Paper WeS62.4 | Add to My Program |
Facilitating Data Association in Particle Tracking Using Autoencoding and Score Matching |
Smal, Ihor | Erasmus MC - University Medical Center Rotterdam |
Yao, Yao | Erasmus University Medical Center |
Galjart, Niels | Erasmus MC - University Medical Center Rotterdam |
Meijering, Erik | Erasmus University Medical Center |
Keywords: Microscopy - Light, Confocal, Fluorescence, Single cell & molecule detection, Machine learning
Abstract: A crucial aspect of automated particle tracking in time-lapse fluorescence microscopy images is the linking, or association, of detected objects between frames. Recent evaluation studies have shown that the best results are achieved by making use of accurate motion models of the underlying particle dynamics. However, existing approaches often employ rather simple motion models that may be inappropriate for a given application, and even when complex models are used they require careful user-parameter tuning. To alleviate these problems we propose a novel method based on autoencoding and score matching that can learn the dynamics from the data. Results on both synthetic and real data show that the method performs comparably to state-of-the-art linking methods.
|
|
17:30-17:45, Paper WeS62.5 | Add to My Program |
Epithelial Segmentation from in Situ Hybridisation Histological Samples Using a Deep Central Attention Learning Approach |
Song, Tzu-Hsi | University of Birmingham |
Landini, Gabriel | University of Birmingham |
Fouad, Shereen | Department of Computer Science, Birmingham City University, UK, |
Mehanna, Hisham | InHANSE, Institute of Cancer and Genomic Sciences, University Of |
Keywords: Image segmentation, Machine learning, Histopathology imaging (e.g. whole slide imaging)
Abstract: The assessment of pathological samples by molecular techniques, such as in situ hybridization (ISH) and immunohistochemistry (IHC), has revolutionised modern histopathology. Most often it is important to detect ISH/IHC reaction products in certain cells or tissue types. For instance, detecting human papilloma virus (HPV) in oropharyngeal cancer samples from ISH products is difficult and remains a tedious and time-consuming task for experts. Here we propose a framework to segment epithelial regions in oropharyngeal tissue images with ISH staining. First, we use colour deconvolution to obtain a counterstain channel and generate input patches based on superpixels and their neighbouring areas. Then, a novel deep attention residual network is applied to identify the epithelial regions and produce an epithelium segmentation mask. In the experimental results, comparing the proposed network with other state-of-the-art deep learning approaches, our network provides better performance than region-based and pixel-based segmentations.
|
|
WeS63 Oral Session, Venetian Ballroom C |
Add to My Program |
MR Imaging and Reconstruction |
|
|
Chair: Jacob, Mathews | University of Iowa |
Co-Chair: Ciuciu, Philippe | CEA |
|
16:30-16:45, Paper WeS63.1 | Add to My Program |
Calibrationless OSCAR-Based Image Reconstruction in Compressed Sensing Parallel MRI
El Gueddari, Loubna | CEA/NeuroSpin & INRIA-CEA Parietal Team |
Ciuciu, Philippe | CEA |
Chouzenoux, Emilie | LIGM - CNRS |
Vignaud, Alexandre | CEA/NeuroSpin |
Pesquet, Jean-Christophe | CentraleSupélec, INRIA Saclay, University Paris Saclay |
Keywords: Compressive sensing & sampling, Magnetic resonance imaging (MRI), Image reconstruction - analytical & iterative methods
Abstract: Reducing acquisition time is a crucial issue in MRI, especially in the high-resolution context. Compressed sensing (CS) has addressed this problem for a decade. However, to maintain a high signal-to-noise ratio (SNR), CS must be combined with parallel imaging, which leads to harder reconstruction problems that usually require knowledge of the coil sensitivity profiles. In this work, we introduce a calibrationless image reconstruction approach that no longer requires this knowledge. The originality of this work lies in using a group-sparsity structure across channels (called OSCAR) that handles SNR inhomogeneities across receivers. We compare this reconstruction with other calibrationless approaches based on group-LASSO and its sparse variation, as well as with the auto-calibrated method l1-ESPIRiT. We demonstrate that OSCAR outperforms its competitors and provides results similar to l1-ESPIRiT. This suggests that sensitivity maps are no longer required to perform combined CS and parallel imaging reconstruction.
|
|
16:45-17:00, Paper WeS63.2 | Add to My Program |
Magnetic Resonance Fingerprinting Using Recurrent Neural Networks |
Oksuz, Ilkay | King's College London |
Cruz, Gastao Jose Lima | King's College London |
Clough, James | Kings College London |
Bustin, Aurelien | King's College London |
Fuin, Niccolo | King's College London |
Botnar, Rene | King's College London |
Prieto, Claudia | King's College London |
King, Andrew Peter | King's College London |
Schnabel, Julia | King's College London |
Keywords: Magnetic resonance imaging (MRI), Inverse methods
Abstract: Magnetic Resonance Fingerprinting (MRF) is a new approach to quantitative magnetic resonance imaging that allows simultaneous measurement of multiple tissue properties in a single, time-efficient acquisition. Standard MRF reconstructs parametric maps using dictionary matching, which requires high computational time. We propose to perform MRF map reconstruction using a recurrent neural network, which exploits the time-dependent information of the MRF signal evolution. We evaluate our method on multiparametric synthetic signals and compare it to existing MRF map reconstruction approaches, including those based on neural networks. Our method achieves state-of-the-art estimates of T1 and T2 values, and the reconstruction time is reduced compared to the dictionary-matching-based approach.
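The dictionary-matching baseline that this paper replaces works by correlating a measured fingerprint against precomputed signal evolutions. A toy sketch follows; the closed-form signal model here is invented purely for illustration (real MRF dictionaries come from Bloch-equation simulations), but the matching step — pick the atom with the largest inner product — is the standard one:

```python
import numpy as np

# Hypothetical toy fingerprint model over a (T1, T2) grid.
t = np.linspace(0.01, 1, 64)
t1_vals = np.linspace(0.5, 2.0, 40)
t2_vals = np.linspace(0.05, 0.3, 40)
params = [(t1, t2) for t1 in t1_vals for t2 in t2_vals]
D = np.array([np.exp(-t / t2) * (1 - np.exp(-t / t1)) for t1, t2 in params])
D /= np.linalg.norm(D, axis=1, keepdims=True)        # unit-norm atoms

def match(signal):
    """Standard MRF dictionary matching: return the (T1, T2) of the
    dictionary atom with the largest inner product with the fingerprint."""
    return params[int(np.argmax(D @ (signal / np.linalg.norm(signal))))]

true_t1, true_t2 = t1_vals[18], t2_vals[16]          # on-grid ground truth
sig = 3.7 * np.exp(-t / true_t2) * (1 - np.exp(-t / true_t1))  # arbitrary scale
print(match(sig) == (true_t1, true_t2))   # True: matching is scale-invariant
```

The cost of this exhaustive search over every voxel and every atom is what motivates the paper's recurrent-network regression.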
|
|
17:00-17:15, Paper WeS63.3 | Add to My Program |
Multi-Shot Sensitivity-Encoded Diffusion MRI Using Model-Based Deep Learning (MoDL-MUSSELS) |
Aggarwal, Hemant Kumar | University of Iowa |
Mani, Merry | University of Iowa |
Jacob, Mathews | University of Iowa |
Keywords: Machine learning, Image reconstruction - analytical & iterative methods, Diffusion weighted imaging
Abstract: We propose a model-based deep learning architecture for the correction of phase errors in multishot diffusion-weighted echo-planar MRI images. This work is a generalization of MUSSELS, which is a structured low-rank algorithm. We show that an iterative reweighted least-squares implementation of MUSSELS resembles the model-based deep learning (MoDL) framework. We propose to replace the self-learned linear filter bank in MUSSELS with a convolutional neural network, whose parameters are learned from exemplary data. The proposed algorithm reduces the computational complexity of MUSSELS by several orders of magnitude, while providing comparable image quality.
|
|
17:15-17:30, Paper WeS63.4 | Add to My Program |
Structurally-Informed Deconvolution of Functional Magnetic Resonance Imaging Data |
Bolton, Thomas | EPFL |
Farouj, Younes | EPFL |
Inan, Mert | Department of Biological Sciences, Mellon College of Science And |
Van De Ville, Dimitri | EPFL & UniGE |
Keywords: Graphical models & methods, Functional imaging (e.g. fMRI), Deconvolution
Abstract: Neural activity occurs in the form of spatially organized patterns: networks of brain regions activate in synchrony. Many of these functional networks also happen to be strongly structurally connected. We use this information to revisit the fundamental problem of functional magnetic resonance imaging (fMRI) data deconvolution. Using tools from graph signal processing (GSP), we extend total activation, a spatiotemporal deconvolution technique, to data defined on graph domains. The resulting approach simultaneously cancels out the effect of the haemodynamics and promotes spatial patterns that are in harmony with predefined structural wirings. More precisely, we minimize a functional involving one data-fidelity term and two regularization terms. The first regularizer uses the concept of generalized total variation to promote sparsity in the domain of activity transients. The second term controls the overall spatial variation over the graph structure. We demonstrate the relevance of this structurally informed regularization on synthetic and experimental data.
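A minimal sketch of what a graph-domain spatial regularizer looks like (our illustration using the combinatorial Laplacian quadratic form; the paper's actual term uses graph total variation machinery, but the pattern of penalizing variation across structurally wired nodes is the same):

```python
import numpy as np

# For a signal x on a graph with weight matrix W, the quadratic form
# x^T L x = 1/2 * sum_ij W_ij (x_i - x_j)^2 penalizes signals that differ
# across strongly (here: structurally) connected nodes.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)       # toy structural connectome
L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian

smooth = np.array([1.0, 1.0, 1.0, 1.0])   # constant over the graph
rough  = np.array([1.0, -1.0, 1.0, -1.0]) # flips sign across every edge

print(smooth @ L @ smooth)  # 0.0: constant signals incur no cost
print(rough @ L @ rough)    # 12.0: heavily penalized by the regularizer
```

Minimizing a data-fidelity term plus such a penalty pulls the recovered activity toward patterns aligned with the structural wiring, which is the intuition behind the paper's second regularizer.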
|
|
17:30-17:45, Paper WeS63.5 | Add to My Program |
Retrospective Correction of Rigid and Non-Rigid MR Motion Artifacts Using GANs
Armanious, Karim | University of Stuttgart |
Gatidis, Sergios | University of Tübingen |
Nikolaou, Konstantin | Ludwig-Maximilians-University Hospital Munich |
Yang, Bin | Institute of Signal Processing and System Theory, University Of |
Küstner, Thomas | University of Stuttgart, Germany |
Keywords: Motion compensation and analysis, Machine learning, Image enhancement/restoration(noise and artifact reduction)
Abstract: Motion artifacts are a primary source of magnetic resonance (MR) image quality deterioration, with strong repercussions on diagnostic performance. Currently, MR motion correction is carried out either prospectively, with the help of motion tracking systems, or retrospectively, mainly using computationally expensive iterative algorithms. In this paper, we utilize a new adversarial framework, titled MedGAN, for the joint retrospective correction of rigid and non-rigid motion artifacts in different body regions and without the need for a reference image. MedGAN utilizes a unique combination of non-adversarial losses and a new generator architecture to capture the textures and fine-detailed structures of the desired artifact-free MR images. Quantitative and qualitative comparisons with other adversarial techniques illustrate the performance of the proposed model.
|
|
WeS64 Oral Session, Venetian Ballroom DE |
Add to My Program |
Brain Structure Learning |
|
|
Chair: Coulon, Olivier | Aix-Marseille University |
Co-Chair: Barillot, Christian | Irisa (umr Cnrs 6074), Inria, Inserm |
|
16:30-16:45, Paper WeS64.1 | Add to My Program |
Sparse Low-Rank Constrained Adaptive Structure Learning Using Multi-Template for Autism Spectrum Disorder Diagnosis |
Huang, Fanglin | Shenzhen University |
Elazab, Ahmed | Shenzhen University |
Ou-Yang, Le | Shenzhen University |
Wang, Tianfu | Shenzhen University |
Lei, Baiying | Shenzhen University |
Keywords: Machine learning, Classification, Functional imaging (e.g. fMRI)
Abstract: Autism spectrum disorder (ASD) is a developmental disability that causes significant social, communication and behavioral challenges. As early intervention services help children from birth to 3 years old learn important skills, it is crucial to detect and manage this disorder early and effectively. In this paper, we propose a novel sparse low-rank network-based supervised feature selection method for ASD diagnosis, which simultaneously conducts feature selection and adaptive local structure learning using multi-template data. Specifically, we encode the modularity prior while constructing a functional connectivity (FC) brain network. After extracting different sets of features from the FC network, feature selection is applied within each template. In addition, we employ a linear regression model to identify the relationship between features and disease status. Meanwhile, the local structure is learned via an adaptive process. Extensive experiments are conducted on the Autism Brain Imaging Data Exchange (ABIDE) dataset. Experimental results demonstrate that our proposed method enhances ASD diagnosis performance and outperforms commonly used and state-of-the-art methods.
|
|
16:45-17:00, Paper WeS64.2 | Add to My Program |
A Convolutional Autoencoder Approach to Learn Volumetric Shape Representations for Brain Structures |
Yu, Evan | Cornell University |
Sabuncu, Mert | Cornell University |
Keywords: Brain, Machine learning, Shape analysis
Abstract: We propose a novel machine learning strategy for studying neuroanatomical shape variation. Our model works with volumetric binary segmentation images, and requires no pre-processing such as the extraction of surface points or a mesh. The learned shape descriptor is invariant to affine transformations, including shifts, rotations and scaling. Thanks to the adopted autoencoder framework, inter-subject differences are automatically enhanced in the learned representation, while intra-subject variances are minimized. Our experimental results on a shape retrieval task showed that the proposed representation outperforms a state-of-the-art benchmark for brain structures extracted from MRI scans.
|
|
17:00-17:15, Paper WeS64.3 | Add to My Program |
Soft Labeling by Distilling Anatomical Knowledge for Improved MS Lesion Segmentation |
Kats, Eytan | Tel Aviv University |
Goldberger, Jacob | Bar-Ilan University |
Greenspan, Hayit K. | Tel Aviv University |
Keywords: Magnetic resonance imaging (MRI), Brain, Image segmentation
Abstract: This paper explores the use of a soft ground-truth mask ("soft mask") to train a Fully Convolutional Neural Network (FCNN) for segmentation of Multiple Sclerosis (MS) lesions. Detection and segmentation of MS lesions is a complex task, largely due to the extremely unbalanced data, with a very small number of lesion pixels available for training. Utilizing the anatomical knowledge that pixels surrounding a lesion may also carry some lesion-level information, we propose to augment the lesion class with neighboring pixel data, with a reduced confidence weight. A soft mask is constructed by morphological dilation of the binary segmentation mask provided by a given expert, where expert-marked voxels receive label 1 and voxels of the dilated region are assigned a soft label. In the proposed methodology, the FCNN is trained using the soft mask. On the ISBI 2015 challenge dataset, this is shown to provide a better precision-recall tradeoff and to achieve a higher average Dice similarity coefficient. We also show that using this soft mask scheme improves the network's segmentation performance when compared to a second independent expert.
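The soft-mask construction described in the abstract can be sketched directly (the dilation radius and the soft-label value 0.5 are our arbitrary choices for illustration; the paper defines its own weighting):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def soft_mask(binary_mask, iterations=2, soft_label=0.5):
    """Build a soft ground-truth mask: expert-marked voxels keep label 1,
    voxels added by morphological dilation get a reduced-confidence label,
    everything else stays 0."""
    mask = binary_mask.astype(bool)
    dilated = binary_dilation(mask, iterations=iterations)
    soft = np.zeros(mask.shape, dtype=float)
    soft[dilated] = soft_label      # dilated ring: reduced confidence
    soft[mask] = 1.0                # expert-marked lesion voxels
    return soft

lesion = np.zeros((7, 7), bool)
lesion[3, 3] = True                 # single expert-marked voxel
print(soft_mask(lesion, iterations=1))
```

Training against such targets lets the loss treat the ring around a lesion as "partly lesion" rather than pure background, which is the mechanism behind the reported precision-recall improvement.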
|
|
17:15-17:30, Paper WeS64.4 | Add to My Program |
Improved ICH Classification Using Task-Dependent Learning |
Bar, Amir | Zebra Medical Vision |
Mauda, Michal | Tel-Aviv Medical Center |
Turner, Yoni | Shaare Zedek Medical Center |
Sfady, Michal | Zebra Medical Vision |
Elnekave, Eldad | Zebra Medical Vision |
|
|
17:30-17:45, Paper WeS64.5 | Add to My Program |
Quantitative MRI Characterization of Brain Abnormalities in 'de Novo' Parkinsonian Patients |
Munoz Ramirez, Veronica | Université Grenoble-Alpes |
Forbes, Florence | INRIA Jean Kuntzmann Laboratory, Grenoble University |
Arbel, Julyan | INRIA Jean Kuntzmann Laboratory, Grenoble University |
Arnaud, Alexis | INRIA University of Grenoble |
Dojat, Michel | INSERM U1216 |
Keywords: Probabilistic and statistical models & methods, Magnetic resonance imaging (MRI), Brain
Abstract: Currently there is a significant delay between the onset of Parkinson's disease and its diagnosis. Detecting changes in the physical properties of brain structures may help diagnose the disease earlier. In this work, we propose to take advantage of the informative features provided by quantitative MRI to construct statistical models representing healthy brain tissues. This allows us to detect atypical values of these features in the brains of Parkinsonian patients. We introduce mixture models to capture the non-standard shape of the data's multivariate distribution. Promising preliminary results demonstrate the potential of our approach to discriminate patients from controls and to reveal the subcortical structures most affected by the disease.
|
|
|