

Journal of Bioengineering & Biomedical Science

ISSN: 2155-9538

Open Access

Perspective - (2022) Volume 12, Issue 10

The Use of Convolutional Neural Networks in Analyses of Chest X-Rays to Detect COVID-19


Jeongho Yoon1*
*Correspondence: Jeongho Yoon, Department of Electrical Engineering, Soonchunhyang University, Asan, Korea, Email:
1Department of Electrical Engineering, Soonchunhyang University, Asan, Korea

Received: 26-Sep-2022, Manuscript No. jbbs-23-87917; Editor assigned: 28-Sep-2022, Pre QC No. P-87917; Reviewed: 12-Oct-2022, QC No. Q-87917; Revised: 18-Oct-2022, Manuscript No. R-87917; Published: 26-Oct-2022, DOI: 10.37421/2155-9538.2022.12.328
Citation: Yoon, Jeongho. "The Use of Convolutional Neural Networks in Analyses of Chest X-Rays to Detect COVID-19." J Bioengineer & Biomedical Sci 12 (2022): 328.
Copyright: © 2022 Yoon J. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

From late 2019 to the present, one of the most important factors in the fight against the COVID-19 epidemic has been the development of varied screening methods that make the presence of the virus detectable as quickly and accurately as possible. One such method is the use of chest X-rays (CXRs) to identify anomalies associated with COVID-19 infection. CXRs tend to be less accurate than the conventional RT-PCR test, despite producing meaningful results far more quickly. Recognizing this problem, our investigation examines how computer vision can be used to better detect COVID-19 from CXRs. When combined with an extensive image database of CXRs from healthy patients, patients with pneumonia not caused by COVID-19, and patients who tested positive for the virus, convolutional neural networks (CNNs) have demonstrated the ability to determine quickly and accurately, in a matter of seconds, whether a patient is infected with COVID-19 [1].
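
As a concrete illustration of this workflow, the sketch below shows how a trained three-class CNN could be applied to a single CXR. It is a minimal, hypothetical example: the model file name, image path, input size, and class labels are placeholder assumptions rather than details from this study.

```python
# Minimal illustrative sketch: classify one chest X-ray as normal, non-COVID
# pneumonia, or COVID-19 using a previously trained CNN. File names, input size,
# and class labels are placeholder assumptions, not values from this study.
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["normal", "pneumonia_non_covid", "covid_19"]  # assumed label order

def classify_cxr(image_path: str, model_path: str = "cxr_cnn.h5") -> str:
    """Return the predicted class label for a single CXR image."""
    model = tf.keras.models.load_model(model_path)            # trained 3-class CNN (hypothetical file)
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...] / 255.0  # add batch axis, scale to [0, 1]
    probs = model.predict(x, verbose=0)[0]                    # softmax probabilities over the 3 classes
    return CLASS_NAMES[int(np.argmax(probs))]

# Example usage (hypothetical file): print(classify_cxr("patient_001.png"))
```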

Description

We performed transfer learning and trained three of our own models using the architectures of three well-tested CNNs: VGG-16, ResNet50, and MobileNetV2. We then compared and contrasted their respective accuracy, precision, and sensitivity in correctly labeling patients with and without COVID-19. Ultimately, all of our models were able to correctly classify at least 94% of CXRs, with some models performing better than others; these differences in performance were largely attributable to the distinct architectures that each of our models adopted from the three CNNs. As depicted in Figure 1, the COVID-19 pandemic is one of the most lethal infectious-disease outbreaks ever to afflict our planet, with over 180 million confirmed cases and nearly 4 million deaths. A large number of scientists with expertise in infectious disease have been steadily developing COVID-19 detection strategies since the outbreak began [2].
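
The sketch below illustrates, under assumptions of our own choosing, the kind of transfer-learning setup described above: each ImageNet-pretrained backbone is frozen and topped with a small new classification head for the three-class CXR problem. The input size, head layers, optimizer, and other hyperparameters are illustrative placeholders, not the exact configuration used in our experiments.

```python
# Illustrative transfer-learning sketch for the three backbones compared above.
# Hyperparameters, head layers, and input size are assumptions for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

BACKBONES = {
    "VGG16": tf.keras.applications.VGG16,
    "ResNet50": tf.keras.applications.ResNet50,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
}

def build_transfer_model(name: str, num_classes: int = 3) -> tf.keras.Model:
    """Freeze a pretrained backbone and attach a small classification head."""
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=(224, 224, 3))
    base.trainable = False                       # keep the ImageNet features fixed
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                     # regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])          # precision/sensitivity can be derived
                                                 # from the confusion matrix after prediction
    return model

models_by_backbone = {name: build_transfer_model(name) for name in BACKBONES}
```

Training each of these models on the same labeled CXR dataset and comparing their held-out metrics is what lets the base architectures themselves, rather than the training procedure, explain the differences in performance.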

The RT-PCR test, which can take up to two days to produce results, is the most common method for identifying COVID-19-infected patients. Nonetheless, the results of this test may need to be confirmed by a secondary test, and its accuracy varies. Using CXRs to identify anomalies in the chest area that may indicate the presence of COVID-19 is an important alternative. While CXRs are more widely available and faster than the traditional RT-PCR test, their accuracy is generally lower. However, a comprehensive review of numerous earlier studies reveals that, out of a wide variety of tested classification methods, well-constructed CNNs are the best means of bridging the accuracy gap when using CXRs to diagnose COVID-19. The researchers in a recent study of COVID-19 detection through the use of CNNs built an attention-based model on the base architecture of VGG16, a well-established CNN; in other words, their model attends to frequently overlooked regions of interest in the CXRs in order to detect COVID-19 more accurately [3].

Their model achieved a maximum accuracy of 87.49%. In a similar study, a group of researchers fine-tuned various models built on state-of-the-art CNNs such as MobileNetV2 and ResNet50; after training and testing the models on a large CXR dataset, they found that the models reached classification accuracies of 94% and higher. In yet another study, a research team developed COVID-Net, a new CNN design for detecting COVID-19 cases from CXRs, which achieved a maximum accuracy of 93.3%. The many pre-existing studies on applying machine learning to disease detection, and to COVID-19 in particular, not only report the results of integrating CNNs into CXR classification but also show the promise such models hold for medicine and disease diagnosis as a whole. Our primary objective is to investigate the subtle differences between a number of well-established CNNs and to reduce the effect of overfitting on them [4,5].
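
Because reducing overfitting is central to that objective, the sketch below shows two standard counter-measures, mild data augmentation and early stopping, applied to a CXR training pipeline. The directory layout, augmentation strengths, and patience value are placeholder assumptions, not settings taken from the studies cited above.

```python
# Hedged sketch of common overfitting counter-measures for CNNs trained on
# limited CXR data: label-preserving augmentation plus early stopping.
# Directory layout and hyperparameters are placeholder assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr_dataset/train", image_size=(224, 224), batch_size=32,
    label_mode="categorical")                     # hypothetical folder structure
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr_dataset/val", image_size=(224, 224), batch_size=32,
    label_mode="categorical")

augment = tf.keras.Sequential([                   # small, anatomy-preserving transforms
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomTranslation(0.05, 0.05),
])
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))

early_stop = tf.keras.callbacks.EarlyStopping(    # stop once validation loss stalls
    monitor="val_loss", patience=5, restore_best_weights=True)

# model = build_transfer_model("ResNet50")        # from the earlier sketch
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```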

Conclusion

We would like to compare and contrast the computational precision and efficiency of the three distinct CNNs (VGG16, ResNet50, and MobileNetV2) in the context of image classification in order to ultimately determine the most effective CNN architecture. To fit our data, we will source and modify the architectures of VGG16, ResNet50, and MobileNetV2, and evaluate the resulting models against the same dataset to determine which base architecture is most effective at detecting COVID-19 from CXRs. In particular, we will add five new layers to each CNN to create a hierarchical decomposition of our input data and thereby improve the models' accuracy and specificity. By enhancing the CNNs' capacity to distinguish features in the grayscale CXRs, we anticipate that our strategy will also lessen the impact of overfitting the models to particular datasets. In general, we intend to investigate and analyze various CXR-classifying CNNs in order to identify the most efficient, accurate, and readily available alternative to the conventional RT-PCR test, which takes more time and is less accessible.
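
The sketch below illustrates one way such a five-layer head could be attached to each base CNN and then evaluated against a single shared test set. The specific layer types, layer sizes, and dataset path are assumptions made for illustration; they are not the exact layers or data used in this work.

```python
# Illustrative five-layer classification head attached to each base CNN, plus a
# uniform evaluation pattern against one shared test set. Layer choices, sizes,
# and the dataset path are assumptions, not the exact configuration of this study.
import tensorflow as tf
from tensorflow.keras import layers

def add_five_layer_head(base: tf.keras.Model, num_classes: int = 3) -> tf.keras.Model:
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)            # 1: pool spatial features
    x = layers.Dense(256, activation="relu")(x)                 # 2: learn CXR-specific features
    x = layers.Dropout(0.5)(x)                                  # 3: regularize against overfitting
    x = layers.Dense(64, activation="relu")(x)                  # 4: compress the representation
    out = layers.Dense(num_classes, activation="softmax")(x)    # 5: three-class prediction
    return tf.keras.Model(inputs=base.input, outputs=out)

test_ds = tf.keras.utils.image_dataset_from_directory(          # hypothetical shared test set
    "cxr_dataset/test", image_size=(224, 224), batch_size=32, label_mode="categorical")

for name, ctor in {"VGG16": tf.keras.applications.VGG16,
                   "ResNet50": tf.keras.applications.ResNet50,
                   "MobileNetV2": tf.keras.applications.MobileNetV2}.items():
    model = add_five_layer_head(ctor(include_top=False, weights="imagenet",
                                     input_shape=(224, 224, 3)))
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(...)  # training omitted here; without it the accuracy below is meaningless
    loss, acc = model.evaluate(test_ds, verbose=0)
    print(f"{name}: test accuracy {acc:.3f}")
```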

References

  1. Howard, Andrew G., Menglong Zhu, Marco Andreetto and Hartwig Adam, et al. "MobileNets: Efficient convolutional neural networks for mobile vision applications." arXiv preprint (2017).

  2. Islam, Nayaar, Sanam Ebrahimzadeh, Jean-Paul Salameh and Marissa Absi, et al. "Thoracic imaging tests for the diagnosis of COVID-19." Cochrane Database Syst Rev 3 (2021).

  3. Kelly, Christopher J., Alan Karthikesalingam, Greg Corrado and Dominic King, et al. "Key challenges for delivering clinical impact with artificial intelligence." BMC Med 17 (2019): 1-9.

  4. Peng, Jie, Shuai Kang, Yikai Xu and Jing Zhang, et al. "Residual convolutional neural network for predicting response of transarterial chemoembolization in hepatocellular carcinoma from CT imaging." Eur Radiol 30 (2020): 413-424.

  5. Rubin, Geoffrey D., Christopher J. Ryerson, Linda B. Haramati and Neil W. Schluger, et al. "The role of chest imaging in patient management during the COVID-19 pandemic: A multinational consensus statement from the Fleischner Society." Chest 158 (2020): 106-116.
