Show simple item record

dc.contributor.author  Wakili, Musa Adamu
dc.contributor.author  Shehu, Harisu Abdullahi
dc.contributor.author  Sharif, Md. Haidar
dc.contributor.author  Sharif, Md. Haris Uddin
dc.contributor.author  Umar, Abubakar
dc.contributor.author  Kusetoğulları, Hüseyin
dc.contributor.author  İnce, İbrahim Furkan
dc.contributor.author  Uyaver, Şahin
dc.date.accessioned  2023-02-23T08:43:24Z
dc.date.available  2023-02-23T08:43:24Z
dc.date.issued  2022  en_US
dc.identifier.citation  Wakili, M. A., Shehu, H. A., Sharif, M. H., Sharif, M. H. U., Umar, A., Kusetoğulları, H., İnce, İ. F., & Uyaver, Ş. (2022). Classification of breast cancer histopathological images using DenseNet and transfer learning. Computational Intelligence and Neuroscience, 2022.  en_US
dc.identifier.uri  https://hdl.handle.net/20.500.12846/702
dc.description.abstract  Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreement among experts. Although deep learning methods have achieved excellent performance on classification tasks, including breast cancer histopathological images, the existing state-of-the-art methods are computationally expensive and may overfit because they extract features from in-distribution images. Our contribution in this paper is twofold. First, we conduct a short survey of deep-learning-based models for classifying histopathological images to identify the most popular and best-performing training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%:30%, whereas the best performance (e.g., accuracy) is achieved with a training-testing ratio of 80%:20% on the same dataset. Second, we propose DenTnet, a method for classifying breast cancer histopathological images. DenTnet applies transfer learning, with DenseNet as its backbone model, to address the problem of extracting features from the same distribution. The proposed DenTnet is shown to outperform a number of leading deep learning methods in detection accuracy (up to 99.28% on the BreaKHis dataset with a training-testing ratio of 80%:20%) while offering good generalization ability and computational speed. DenTnet thus mitigates the limitations of existing methods, namely their high computational cost and their reliance on features drawn from the same distribution.  en_US
dc.language.iso  eng  en_US
dc.publisher  Hindawi Publishing Corporation  en_US
dc.relation.isversionof  10.1155/2022/8904768  en_US
dc.rights  info:eu-repo/semantics/openAccess  en_US
dc.subject  Rejective Multiple Test  en_US
dc.subject  Neural-Network  en_US
dc.subject  Gradient Descent  en_US
dc.subject  Mammograms  en_US
dc.subject  Approximations  en_US
dc.subject  Reddedici Çoklu Test  en_US
dc.subject  Sinir Ağı  en_US
dc.subject  Gradyan İniş  en_US
dc.subject  Mamogramlar  en_US
dc.subject  Ablehnender Mehrfachtest  en_US
dc.subject  Neuronale Netze  en_US
dc.subject  Gradientenabstieg  en_US
dc.subject  Mammogramme  en_US
dc.title  Classification of breast cancer histopathological images using DenseNet and transfer learning  en_US
dc.type  article  en_US
dc.relation.journal  Computational Intelligence and Neuroscience  en_US
dc.contributor.authorID  0000-0001-8776-3032  en_US
dc.relation.publicationcategory  Article - International Refereed Journal - Institutional Academic Staff  en_US
dc.contributor.department  TAÜ, Faculty of Science, Department of Energy Science and Technologies  en_US
dc.contributor.institutionauthor  Uyaver, Şahin
dc.identifier.wosquality  N/A  en_US
dc.identifier.scopusquality  N/A  en_US
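
The abstract describes DenTnet as a transfer-learning method that uses DenseNet as its backbone to classify breast cancer histopathological images. The exact DenTnet architecture is given only in the full paper; the sketch below merely illustrates the general transfer-learning setup the abstract refers to: an ImageNet-pretrained DenseNet121 with a frozen backbone and a new binary benign/malignant head, evaluated with the 80%:20% training-testing ratio the survey found best. The layer sizes, dropout rate, optimizer, and dataset-loading step are illustrative assumptions, not the published configuration.

import tensorflow as tf

def build_densenet_transfer_model(input_shape=(224, 224, 3)):
    # ImageNet-pretrained DenseNet121 backbone; the original classifier head is dropped.
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape
    )
    backbone.trainable = False  # freeze the backbone for the transfer-learning phase

    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.densenet.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)  # illustrative regularization, not from the paper
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # benign vs. malignant

    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Hypothetical usage: train_ds and test_ds would hold BreaKHis patches split 80%:20%.
# model = build_densenet_transfer_model()
# model.fit(train_ds, validation_data=test_ds, epochs=10)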

