
Enhancing Automated COVID-19 Chest X-ray Diagnosis by Image-to-Image GAN Translation

Document Details

Resource type:
WOS categories:

Indexed in: CPCI (ISTP)

Affiliations: [1] Dept. of Electrical Engineering & Computer Science, York University, Toronto, Canada [2] School of Information Technology, York University, Toronto, Canada [3] Guangdong Provincial Hospital of Chinese Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China [4] Dapasoft Inc., Toronto, Canada
Source:
ISSN:

Keywords: COVID-19; deep learning; GAN; generative adversarial network; image classification

Abstract:
The severe pneumonia induced by SARS-CoV-2 infection has caused massive loss of life in the ongoing COVID-19 pandemic. Early detection of SARS-CoV-2-induced pneumonia relies on the characteristic patterns of chest X-Ray images. Deep learning is a data-hungry approach that achieves high performance only when adequately trained, and a common challenge for machine learning in the medical domain is access to properly annotated data. In this study, we apply a conditional generative adversarial network (cGAN) to perform image-to-image (Pix2Pix) translation from the non-COVID-19 chest X-Ray domain to the COVID-19 chest X-Ray domain. The objective is to learn a mapping from normal chest X-Ray visual patterns to COVID-19 pneumonia chest X-Ray patterns. The original dataset has a typical class-imbalance problem: it contains only 219 COVID-19-positive images but 1,341 normal chest X-Ray images and 1,345 viral pneumonia images. A U-Net-based architecture is applied for the image-to-image translation to generate synthesized COVID-19 chest X-Ray images from normal chest X-Ray images, and a 50-convolutional-layer residual network (ResNet) architecture is applied for the final classification task. After training the GAN model for 100 epochs, we use the GAN generator to translate normal X-Ray images into 1,100 synthetic COVID-19 images, forming a balanced training dataset (3,762 images) for the classification task. The ResNet-based classifier trained on the enhanced dataset achieves a classification accuracy of 97.8%, compared to 96.1% in the transfer-learning mode. When trained on the original imbalanced dataset, the model achieves an accuracy of 96.1%, compared to 95.6% for the train-from-scratch model. In addition, the classifier trained on the enhanced dataset has more stable precision, recall, and F1 scores across the image classes. We conclude that the GAN-based data-enhancement strategy is applicable to most medical image pattern recognition tasks and provides an effective way to address the common expertise-dependence issue in the medical domain. © 2020 IEEE.
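
A minimal, hypothetical PyTorch sketch of the data-balancing strategy described in the abstract: an already-trained Pix2Pix-style (U-Net) generator translates normal chest X-Ray images into synthetic COVID-19 images, and a ResNet-50 classifier with an ImageNet-pretrained backbone is prepared for the three-class task. The generator, data loader, and class count below are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

@torch.no_grad()
def synthesize_covid_images(generator: nn.Module, normal_loader, n_needed: int = 1100):
    # Translate batches of normal chest X-Rays into synthetic COVID-19 images
    # using a trained Pix2Pix-style (U-Net) generator. `generator` and
    # `normal_loader` are assumed to be provided by the caller.
    generator.eval()
    fakes, total = [], 0
    for x, _ in normal_loader:              # x: (B, C, H, W) batch of normal X-Rays
        y = generator(x.to(device)).cpu()
        fakes.append(y)
        total += y.shape[0]
        if total >= n_needed:
            break
    return torch.cat(fakes)[:n_needed]

def build_classifier(num_classes: int = 3) -> nn.Module:
    # 50-layer ResNet with an ImageNet-pretrained backbone (transfer learning)
    # and a new head for {normal, viral pneumonia, COVID-19}.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model.to(device)

The synthetic images produced this way would then be merged with the real COVID-19, normal, and viral pneumonia images to form the balanced training set described in the abstract before the classifier is trained.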

Language:
WOS:
First author:
First author affiliation: [1] Dept. of Electrical Engineering & Computer Science, York University, Toronto, Canada
Recommended citation (GB/T 7714):
APA:
MLA:

