
Record Details

Classification of Breast Cancer Histopathological Images Using Discriminative Patches Screened by Generative Adversarial Networks (Indexed in SCI-EXPANDED and EI)

Document Type: Journal Article

Title: Classification of Breast Cancer Histopathological Images Using Discriminative Patches Screened by Generative Adversarial Networks

Authors: Man, Rui[1]; Yang, Ping[1]; Xu, Bowen[2]

First Author: Man, Rui

Corresponding Author: Yang, P[1]

Affiliations: [1] Beijing Union Univ, Smart City Coll, Beijing 100101, Peoples R China; [2] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China

First Affiliation: College of Continuing Education, Beijing Union University

Corresponding Affiliation: [1] (corresponding author) Beijing Union Univ, Smart City Coll, Beijing 100101, Peoples R China | [1141733] College of Continuing Education, Beijing Union University; [11417] Beijing Union University

Year: 2020

Volume: 8

Pages: 155362-155377

Journal: IEEE ACCESS

Indexed In: EI (Accession No. 20203809200331); Scopus (Accession No. 2-s2.0-85090933337); SCI-EXPANDED (Accession No. WOS:000566116100001)

Language: English

Keywords: Breast cancer histopathological images; densely connected convolutional networks; discriminative patches; generative adversarial networks; image classification

Abstract: Computer-aided diagnosis (CAD) systems for the automated classification of breast cancer histopathological images can help reduce the manual observation workload of pathologists. In this task, the small number and high resolution of the training samples make patch-based classification methods necessary. However, adopting a patch-based classification method is challenging, since patch-level datasets extracted from whole slide images (WSIs) contain many mislabeled patches. Existing patch-based classification methods have paid little attention to handling these mislabeled patches to improve classification performance. To solve this problem, we propose a novel approach, named DenseNet121-AnoGAN, for classifying breast histopathological images into benign and malignant classes. The proposed approach consists of two major parts: unsupervised anomaly detection with generative adversarial networks (AnoGAN) to screen out mislabeled patches, and a densely connected convolutional network (DenseNet) to extract multi-layered features from the discriminative patches. The performance of the proposed approach is evaluated on the publicly available BreaKHis dataset using 5-fold cross-validation. DenseNet121-AnoGAN is better suited to coarse-grained high-resolution images and achieved satisfactory classification performance on 40X and 100X images. The best accuracy of 99.13% and the best F1 score of 99.38% were obtained at the image level for the 40X magnification factor. We also investigated the performance of AnoGAN with other classification networks, including AlexNet, VGG16, VGG19, and ResNet50. Our experiments show that, at both patient-level and image-level accuracy, classification networks with AnoGAN outperform the same networks without AnoGAN.
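The screening step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helper names (`screen_patches`, `anomaly_score`, the 0.5 threshold) are hypothetical placeholders, and the anomaly score stands in for the residual-plus-discriminator score that AnoGAN assigns to each patch before the surviving "discriminative" patches are passed to DenseNet121 for training.

```python
def screen_patches(patches, anomaly_score, threshold):
    """Keep only patches whose anomaly score is below the threshold.

    Per the paper's idea: AnoGAN is trained unsupervised on patches of one
    class, so a patch carrying that WSI-level label but scoring a high
    anomaly value is likely mislabeled and is discarded before the
    DenseNet121 classifier is trained on the remaining patches.
    """
    return [p for p in patches if anomaly_score(p) < threshold]


# Toy usage with a stand-in scoring function (real scores would come from
# the trained AnoGAN model).
patches = ["patch_a", "patch_b", "patch_c"]
fake_scores = {"patch_a": 0.12, "patch_b": 0.91, "patch_c": 0.23}
kept = screen_patches(patches, fake_scores.get, threshold=0.5)
# "patch_b" is screened out as a likely mislabeled patch.
```

The design choice here mirrors the paper's two-stage pipeline: filtering is decoupled from classification, so the same screening function could precede any of the backbones the authors compared (AlexNet, VGG16, VGG19, ResNet50, DenseNet121).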

