The Chinese University of Hong Kong Develops Artificial Intelligence for Rapid Diagnosis of Lung Cancer and Breast Cancer

2017-09-08 The Chinese University of Hong Kong SIBCS


  On 6 September 2017, The Chinese University of Hong Kong (CUHK) announced two artificial-intelligence-assisted studies: automated screening for early-stage lung cancer and rapid detection of breast cancer metastasis. Using AI image-recognition technology to read medical images of lung cancer and breast cancer, the systems reach accuracies of 91% and 99% respectively, and the recognition itself takes as little as 30 seconds and 5-10 minutes. The technology can substantially improve the efficiency of clinical diagnosis and reduce misdiagnosis rates.


  For breast cancer detection, physicians usually locate masses with mammography or magnetic resonance imaging. To check for lymph-node metastasis, a small piece of tissue is taken as a specimen and examined under a microscope to determine whether the lymph nodes contain metastases and whether the tumour is benign or malignant. A digital whole-slide image of such a specimen has an extremely high resolution, with a file size of up to 1 GB, roughly the storage of a 90-minute high-definition film, which makes the examination complex and time-consuming. The study developed a new deep convolutional neural network that processes breast cancer slide images in stages: a modified fully convolutional network first produces a fast, relatively coarse but highly sensitive prediction; this is then reconstructed into a more precise prediction, and the image regions containing lymph-node metastases are localised and selected. Compared with manual inspection by experienced pathologists, the automated detection was 2% more accurate, reaching 99%, and took only 5-10 minutes, versus 15-30 minutes by eye.
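
  The staged processing can be pictured with a short, hedged sketch: a fast coarse pass marks suspicious tiles of the gigapixel slide, and only those tiles receive a finer, more expensive prediction. The sketch below is illustrative Python, not the research group's code; `coarse_model`, `fine_model`, the tile sizes and the 0.5 threshold are all assumed placeholders.

```python
# A minimal sketch of the two-stage screening idea described above, written in
# Python/NumPy for illustration only; it is NOT the authors' released code.
# `coarse_model` and `fine_model` stand for trained FCNs (assumed callables that
# map an RGB tile to a per-pixel tumour-probability map); tile sizes and the
# 0.5 threshold are illustrative assumptions.
import numpy as np

def screen_slide(slide, coarse_model, fine_model,
                 coarse_tile=2048, fine_tile=512, threshold=0.5):
    """Screen a whole-slide image (H x W x 3 uint8 array) in two stages.

    Stage 1: a fast, high-sensitivity coarse pass over large tiles.
    Stage 2: a finer pass only over tiles the coarse pass flagged, so most of
    the gigapixel slide never needs full-precision processing.
    """
    h, w, _ = slide.shape
    heatmap = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h, coarse_tile):
        for x in range(0, w, coarse_tile):
            tile = slide[y:y + coarse_tile, x:x + coarse_tile]
            if coarse_model(tile).max() < threshold:   # nothing suspicious here
                continue
            for yy in range(0, tile.shape[0], fine_tile):
                for xx in range(0, tile.shape[1], fine_tile):
                    sub = tile[yy:yy + fine_tile, xx:xx + fine_tile]
                    heatmap[y + yy:y + yy + sub.shape[0],
                            x + xx:x + xx + sub.shape[1]] = fine_model(sub)
    return heatmap  # metastasis regions are where the heatmap exceeds the threshold
```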


  Early-stage lung cancer usually appears as small pulmonary nodules, which physicians look for mainly in chest computed tomography (CT) images. Each examination can produce several hundred slices, so reading them all by eye is laborious and time-consuming. The study applies deep learning to read CT images: a deep neural network automatically detects and localises possible pulmonary nodules in 30 seconds, with an accuracy of up to 91%.


  A deep learning system refers to a computer imitating the human brain: it analyses the collected data according to the instructions and explanations of an operator or physician, then keeps learning from the operator's corrections and refines its own program, on a principle similar to Google's game-playing program AlphaGo, which reasons and analyses from data on its own. Deep learning can help physicians review scan images and notify them when a suspicious image is found, and it can raise the sensitivity of the technique while screening out suspected and false-positive findings. CUHK has entered the work in international academic competitions on several occasions; the lung cancer and breast cancer studies used data from more than 3,500 patients in several countries, the results ranked among the best in those competitions, and they have drawn a positive response from the medical community. CUHK will also work with three hospitals in Beijing to develop related products, so as to refine the technology, identify lung and breast cancer as early as possible, and provide a reliable basis for early diagnosis and treatment.


Med Image Anal. 2017 Oct;41:40-54.


3D deeply supervised network for automated segmentation of volumetric medical images.


Dou Q, Yu L, Chen H, Jin Y, Yang X, Qin J, Heng PA.


Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.


Highlights

  • 3D fully convolutional networks for efficient volume-to-volume learning and inference.

  • Per-voxel error backpropagation, which alleviates the risk of over-fitting on a limited dataset.

  • A 3D deep supervision mechanism that simultaneously accelerates optimization and boosts model performance.

  • State-of-the-art performance on two typical yet challenging medical image segmentation tasks.


While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We have achieved segmentation results competitive with state-of-the-art approaches in both challenges at a much faster speed, corroborating the effectiveness of our proposed 3D DSN.
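
The deep supervision mechanism described above can be illustrated with a small, hedged PyTorch sketch: an auxiliary prediction head is attached to a shallow layer, and its loss is added to the main objective so that gradients flow directly into the lower layers. The toy architecture, the upsampling choice and the auxiliary weight below are assumptions for illustration, not the paper's actual 3D DSN.

```python
# Illustrative toy example of 3D deep supervision; NOT the published 3D DSN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DDSN(nn.Module):
    """A toy 3D FCN with one deeply supervised auxiliary head."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                                    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
        self.main_head = nn.Conv3d(16, n_classes, 1)  # final prediction
        self.aux_head = nn.Conv3d(8, n_classes, 1)    # supervises the shallow layers directly

    def forward(self, x):
        f1 = self.block1(x)                           # full-resolution shallow features
        f2 = self.block2(f1)                          # downsampled deeper features
        main = F.interpolate(self.main_head(f2), size=x.shape[2:],
                             mode='trilinear', align_corners=False)
        aux = self.aux_head(f1)                       # already at input resolution
        return main, aux

def deeply_supervised_loss(main, aux, target, aux_weight=0.3):
    # The auxiliary term injects gradients directly into the lower layers,
    # which is how deep supervision counteracts vanishing/exploding gradients.
    return F.cross_entropy(main, target) + aux_weight * F.cross_entropy(aux, target)

# Toy usage: a 1-channel 32^3 volume with voxel-wise labels.
net = Tiny3DDSN()
volume = torch.randn(1, 1, 32, 32, 32)
labels = torch.randint(0, 2, (1, 32, 32, 32))
main, aux = net(volume)
deeply_supervised_loss(main, aux, labels).backward()
```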


KEYWORDS: 3D deeply supervised networks; 3D fully convolutional networks; Deep learning; Volumetric medical image segmentation


PMID: 28526212


DOI: 10.1016/j.media.2017.05.001




Med Image Anal. 2017 Dec;42:1-13.


Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge.


Setio AAA, Traverso A, de Bel T, Berens MSN, Bogaard CVD, Cerello P, Chen H, Dou Q, Fantacci ME, Geurts B, Gugten RV, Heng PA, Jansen B, de Kaste MMJ, Kotov V, Lin JY, Manders JTMC, Sónora-Mengana A, García-Naranjo JC, Papavasileiou E, Prokop M, Saletta M, Schaefer-Prokop CM, Scholten ET, Scholten L, Snoeren MM, Torres EL, Vandemeulebroucke J, Walasek N, Zuidhof GCA, Ginneken BV, Jacobs C.


Radboud University Medical Center, Nijmegen, The Netherlands; Polytechnic University of Turin, Turin, Italy; Turin Section of Istituto Nazionale di Fisica Nucleare, Turin, Italy; Radboud University, Nijmegen, The Netherlands; The Chinese University of Hong Kong, China; University of Pisa, Pisa, Italy; Pisa Section of Istituto Nazionale di Fisica Nucleare, Pisa, Italy; Vrije Universiteit Brussel, Brussels, Belgium; IMEC, Leuven, Belgium; Universidad de Oriente, Santiago de Cuba, Cuba; Center of Applied Technologies and Nuclear Development, La Habana, Cuba; Meander Medisch Centrum, Amersfoort, The Netherlands; Fraunhofer MEVIS, Bremen, Germany.


Highlights

  • A novel objective evaluation framework for nodule detection algorithms using the largest publicly available LIDC-IDRI data set.

  • The impact of combining individual systems on the detection performance was investigated.

  • The combination of classical candidate detectors and a combination of deep learning architectures generates excellent results, better than any individual system.

  • Our observer study has shown that CAD detects nodules that were missed by expert readers.

  • We released this set of additional nodules for further development of CAD systems.


Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have been only a few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.
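
For context on how such challenge results are summarised, the sketch below computes a CPM-style score: the average sensitivity at seven predefined false-positive rates (1/8 to 8 false positives per scan) read off a system's FROC curve. The linear interpolation and the example numbers are illustrative assumptions, not the official LUNA16 evaluation script.

```python
# Hedged sketch of a competition-performance-metric (CPM) style summary score.
import numpy as np

CPM_FP_RATES = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]  # false positives per scan

def competition_performance_metric(fp_per_scan, sensitivity):
    """fp_per_scan, sensitivity: FROC curve points, sorted by fp_per_scan."""
    fp_per_scan = np.asarray(fp_per_scan, dtype=float)
    sensitivity = np.asarray(sensitivity, dtype=float)
    # Linearly interpolate the FROC curve at each predefined operating point,
    # then average the seven sensitivities.
    return float(np.interp(CPM_FP_RATES, fp_per_scan, sensitivity).mean())

# Example with made-up numbers: a system near 0.92 sensitivity at 1 FP/scan.
print(competition_performance_metric(
    fp_per_scan=[0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0],
    sensitivity=[0.70, 0.80, 0.88, 0.92, 0.94, 0.95, 0.96]))
```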


KEYWORDS: Computed tomography; Computer-aided detection; Convolutional networks; Deep learning; Medical image challenges; Pulmonary nodules


PMID: 28732268


DOI: 10.1016/j.media.2017.06.015




IEEE Trans Biomed Eng. 2017 Jul;64(7):1558-1567.


Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection.


Dou Q, Chen H, Yu L, Qin J, Heng PA.


Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.


OBJECTIVE: False positive reduction is one of the most crucial components in an automated pulmonary nodule detection system, which plays an important role in lung cancer diagnosis and early treatment. The objective of this paper is to effectively address the challenges in this task and therefore to accurately discriminate the true nodules from a large number of candidates.


METHODS: We propose a novel method employing three-dimensional (3-D) convolutional neural networks (CNNs) for false positive reduction in automated pulmonary nodule detection from volumetric computed tomography (CT) scans. Compared with its 2-D counterparts, the 3-D CNNs can encode richer spatial information and extract more representative features via their hierarchical architecture trained with 3-D samples. More importantly, we further propose a simple yet effective strategy to encode multilevel contextual information to meet the challenges coming with the large variations and hard mimics of pulmonary nodules.


RESULTS: The proposed framework has been extensively validated in the LUNA16 challenge held in conjunction with ISBI 2016, where we achieved the highest competition performance metric (CPM) score in the false positive reduction track.


CONCLUSION: Experimental results demonstrated the importance and effectiveness of integrating multilevel contextual information into 3-D CNN framework for automated pulmonary nodule detection in volumetric CT data.


SIGNIFICANCE: While our method is tailored for pulmonary nodule detection, the proposed framework is general and can be easily extended to many other 3-D object detection tasks from volumetric medical images, where the targeting objects have large variations and are accompanied by a number of hard mimics.
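
The multilevel contextual strategy in METHODS can be pictured with a hedged PyTorch sketch: the same candidate is cropped at several cube sizes, each crop is classified by a small 3-D CNN branch, and the per-level probabilities are fused with fixed weights. The branch architecture, crop sizes and fusion weights below are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch of multilevel contextual fusion; NOT the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Small3DNet(nn.Module):
    """One 3D CNN branch that classifies a cube cropped around a candidate."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(32, 2)  # nodule vs. non-nodule

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def multilevel_probability(volume, center, nets, crop_sizes=(20, 30, 40),
                           weights=(1/3, 1/3, 1/3)):
    """Fuse nodule probabilities from crops with increasing surrounding context."""
    z, y, x = center
    prob = 0.0
    for net, size, w in zip(nets, crop_sizes, weights):
        half = size // 2
        cube = volume[z - half:z + half, y - half:y + half, x - half:x + half]
        cube = cube.unsqueeze(0).unsqueeze(0).float()  # -> (1, 1, D, H, W)
        # Each branch sees a different amount of context around the candidate.
        prob += w * F.softmax(net(cube), dim=1)[0, 1].item()
    return prob

# Toy usage: three independent branches, one per contextual level.
nets = [Small3DNet() for _ in range(3)]
volume = torch.randn(64, 64, 64)  # a CT sub-volume around one candidate
print(multilevel_probability(volume, center=(32, 32, 32), nets=nets))
```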


PMID: 28113302


DOI: 10.1109/TBME.2016.2613502