An Online Automatic COVID-19 Diagnosis System Based on Chest X-ray Images
Abstract. The outbreak of SARS-CoV-2 shocked healthcare systems around the world. It began in December 2019 in Wuhan, China, and spread to over 120 countries in less than three months. Imaging technologies support fast and reliable diagnosis of COVID-19; CT scan and X-ray imaging are popular methods. This study focuses on X-ray imaging, given the limited access to CT scanners in small cities and their cost. Deep learning models help to diagnose precisely and quickly. We aimed to design an online system based on deep learning that reports lung engagement with the disease, patient status, and therapeutic guidelines. Our objective was to relieve pressure on radiologists and minimize the interval between imaging and diagnosis. VGG19, VGG16, InceptionV3, and ResNet50 were evaluated as candidates for the core of the online diagnosis system. VGG16 achieved the best score, with 98.92% accuracy, and VGG19 performed almost identically. VGG19, InceptionV3, and ResNet50 obtained 98.90%, 71.79%, and 28.27% accuracy, respectively.
1. Introduction
SARS-CoV-2 spread quickly all over the world. The virus causes lung infection and pneumonia in most patients. Reverse transcription-polymerase chain reaction (RT-PCR) is a reliable and trusted way to confirm infection with COVID-19, but the method is time-consuming, and some kits lack accuracy. Common symptoms are fever and cough, in addition to dyspnea, headache, and fatigue.
Apostolopoulos and Mpesiana evaluated five convolutional neural networks (CNNs) for classifying three classes: normal, pneumonia, and COVID-19 lungs. They reported MobileNet v2 as an effective model.
Hemdan et al. developed a framework called COVIDX-Net to assist the medical community. In that study, VGG19 and DenseNet outperformed the other models.
In another study, Narin et al. examined three CNN models (ResNet50, InceptionV3, and Inception-ResNetV2), of which ResNet50 provided the highest accuracy, at 98%.
CT scanners are not available in most small cities, so X-ray is a suitable alternative for imaging the lungs. This study aimed to develop a machine learning model based on X-ray images. For this purpose, we created an online system that recognizes COVID-19 by analysing images of the coronal plane. The key feature is that all users can access the latest updated model quickly. The outputs of our system are lung engagement with the disease, patient status, and therapeutic guidelines. VGG16, VGG19, InceptionV3, and ResNet50 were evaluated, and VGG16, with 98.92% accuracy, was selected as the core of our online system.
The procedure for deploying the model is explained in the second section of the manuscript. The third section contains the results and discussion. Finally, the conclusion of the study is presented in section four.
2. Material and methods
Figure 1 shows a flowchart of the online system. The main outcomes are lung engagement with the disease, patient status, and therapeutic guidelines.
First, VGG16, VGG19, InceptionV3, and ResNet50 were trained on confirmed COVID-19 X-ray images and normal lung images. The trained model with the highest efficiency was then deployed on the Google Cloud Platform (GCP) and exposed through a Python-based API. Images and results are transferred as JSON. Specific details are discussed in the following.
2.1 Train Model
The dataset for the machine learning model was created from a public dataset of confirmed COVID-19 lung images provided by Cohen et al., together with images from hospitals of the Ardabil province of Iran. Fine-tuned VGG16, VGG19, InceptionV3, and ResNet50 CNNs were assessed.
VGG16 and VGG19 were proposed by Simonyan and Zisserman. These models stack convolutional layers on top of each other. VGG16 has 138.4 million parameters, while VGG19 has 143.7 million. Given this number of parameters, training these networks from scratch on the small COVID-19 dataset is not efficient, so fine-tuning them helps to develop models sufficient for this study.
He et al. proposed residual networks (ResNet) in 2015; the deepest variant, with 152 layers, is 8x deeper than the VGG networks. ResNet50, the variant used here, has 50 layers and 25.6 million parameters.
InceptionV3 was proposed by Szegedy et al. in 2015. The network has around 23 million parameters and achieved a 5.6% top-5 error for single-frame evaluation on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset.
A fully connected layer was added on top of each pre-trained model. Four hundred X-ray images of confirmed COVID-19 patients, together with 1000 healthy-lung X-ray images from the Radiological Society of North America (RSNA) pneumonia detection challenge, formed the first version of the dataset.
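As an illustration of the fine-tuning described above, the following Keras sketch builds a binary classifier on top of a frozen VGG16 convolutional base. The head architecture (256-unit dense layer, dropout rate, sigmoid output) is an assumption for illustration, not the exact configuration of this study, and `weights=None` is used only to skip the pretrained-weight download; in practice fine-tuning starts from `weights="imagenet"`.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base; in practice use weights="imagenet" so that
# training starts from the pretrained ImageNet filters.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional layers

# Hypothetical fully connected head for the COVID-19 vs. normal decision.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # probability of COVID-19
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Only the new head is trained at first; with the base frozen, the small dataset updates far fewer parameters than the 138 million in the full network.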
2.2 Deploy Model
The backbone of this system is hosted on the Google Cloud Platform (GCP), which provides serverless computing services. This helps to create and execute models efficiently. The connection between back-end and front-end is maintained through JSON. To avoid storage problems, input images are not stored. The structure of the API is easy to understand: when an encoded image is fed into the API, the Python core decodes and preprocesses it. The image is then analysed, and three outputs are shown on the website: lung engagement with the disease, patient status, and therapeutic guidelines. The main core reports the lung engagement with the disease; based on this key outcome, the other analyses are performed and shown to the user.
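A minimal, framework-free sketch of the request/response cycle described above: the client sends a JSON payload carrying a base64-encoded image, the server decodes it, runs the model, and returns the three report fields as JSON. The field names, the 25% engagement threshold, and the `analyze_image` stub are illustrative assumptions, not the production API.

```python
import base64
import json

def analyze_image(image_bytes: bytes) -> float:
    """Stub for the deployed CNN: in the real system the decoded image is
    preprocessed and fed to the fine-tuned VGG16 on GCP. Returns the
    estimated lung engagement with the disease, in percent."""
    return 40.0  # fixed value for illustration

def handle_request(payload: str) -> str:
    """Decode a JSON request carrying a base64 chest X-ray and build the
    JSON response with the three outputs shown on the website."""
    request = json.loads(payload)
    image_bytes = base64.b64decode(request["image"])
    engagement = analyze_image(image_bytes)
    response = {
        "lung_engagement_percent": engagement,
        "patient_status": ("suspected COVID-19" if engagement > 25
                           else "likely normal"),
        "therapeutic_guidelines": ("refer to a physician" if engagement > 25
                                   else "no immediate action needed"),
    }
    # The decoded image is discarded here, never written to storage.
    return json.dumps(response)
```

Because only JSON text crosses the boundary, any front-end that can base64-encode an image can talk to the back-end.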
3. Results and Discussion
The models were implemented in Python 3.7 on a laptop equipped with an Intel Core i7-9750H processor, an Nvidia GTX 1650 2GB GPU, and 24GB of RAM. The four networks were examined with the same number of epochs and the same dataset. The training/testing split was 80:20, giving 1120 training and 280 test images. Input images were 224×224, except for InceptionV3, which takes 299×299 inputs. The results are reported in Table 1 in terms of accuracy, sensitivity (also known as recall), and specificity. These metrics are calculated from the values of the confusion matrix: true positive (TP), true negative (TN), false negative (FN), and false positive (FP). The metrics are expressed in equations 1-3:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)
Sensitivity = TP / (TP + FN)    (2)
Specificity = TN / (TN + FP)    (3)
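The three metrics follow directly from the confusion-matrix counts; a small helper, shown with made-up example counts rather than the values from Table 2:

```python
def metrics_from_confusion(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, sensitivity (recall) and specificity from
    confusion-matrix counts, as in equations 1-3."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 280-image test set (illustration only).
acc, sens, spec = metrics_from_confusion(tp=75, tn=200, fp=2, fn=3)
```

Sensitivity matters most clinically here: it is the fraction of COVID-19 positives the model catches, so a high-accuracy model with low sensitivity would still miss infected patients.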
VGG16 achieved the maximum accuracy of 98.92% among the four networks. It also holds the highest sensitivity and specificity, which indicates its efficiency and applicability.
| Network | Accuracy (%) | Sensitivity (%) | Specificity (%) |
The confusion matrices of these networks are reported in Table 2. Each matrix consists of true positive, false negative, false positive, and true negative values.
| Network | True Positive | False Negative | False Positive | True Negative |
The accuracy/loss versus epoch plots are illustrated in figures 2-5. These plots include training accuracy, validation accuracy, training loss, and validation loss.
The receiver operating characteristic (ROC) curves are illustrated in figure 6. The more closely a curve follows the left-hand border and then the top border of the ROC space, the more accurate the test.
Precision and F1-score are two other metrics used to evaluate the networks. They are expressed in equations 4-5:

Precision = TP / (TP + FP)    (4)
F1-score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity)    (5)
These results are reported in Table 3.
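Equations 4-5 in code form, again using made-up counts rather than the study's confusion matrices:

```python
def precision_f1(tp: int, fp: int, fn: int):
    """Precision and F1-score; F1 is the harmonic mean of
    precision and sensitivity (recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, f1

precision, f1 = precision_f1(tp=75, fp=2, fn=3)  # illustrative counts
```

Because F1 is a harmonic mean, it penalizes a model whose precision and sensitivity are unbalanced, which plain accuracy can hide on an imbalanced dataset like this one (400 positive vs. 1000 normal images).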
4. Conclusion
Machine learning algorithms have grown quickly in recent decades. The combination of image processing and machine learning has produced efficient convolutional neural networks with significant applications, such as self-driving cars. This interdisciplinary field can also be used in medical science. The outbreak of SARS-CoV-2 exposed needs in the medical system: with the increase in the number of people in hospitals demanding CT scans and X-rays, the time spent diagnosing and confirming the disease played a crucial role. This study evaluated four CNNs and selected the best one as the core of an online diagnosis system by deploying a machine learning application on the GCP. The online diagnosis platform is free and will be updated regularly. VGG16, with 98.92% accuracy, outperformed the other models.
Acknowledgements
We thank everyone who is working in hospitals and caring for patients. This work would not have been possible without the help of Ardabil University of Medical Sciences, Ardabil, Iran, and Ardabil Science and Technology Park, Ardabil, Iran.
References
Cohen, J., & Normile, D. (2020). New SARS-like virus in China triggers alarm. Science, 367(6475), 234-235.
 Fang, Y., Zhang, H., Xie, J., Lin, M., Ying, L., Pang, P., & Ji, W. (2020). Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology, 200432.
 Wang, W., Tang, J., & Wei, F. (2020). Updated understanding of the outbreak of 2019 novel coronavirus (2019‐nCoV) in Wuhan, China. Journal of medical virology, 92(4), 441-447.
 Apostolopoulos, I. D., & Mpesiana, T. A. (2020). Covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine, 1.
 Hemdan, E. E. D., Shouman, M. A., & Karar, M. E. (2020). Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in x-ray images. arXiv preprint arXiv:2003.11055
 Narin, A., Kaya, C., & Pamuk, Z. (2020). Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849
 Online COVID-19 Diagnose System, http://cvd.imreza.ir
 Cohen, J. P., Morrison, P., & Dao, L. (2020). COVID-19 image data collection. arXiv preprint arXiv:2003.11597.
 Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
 He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778)
 Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2818-2826)
 Stein, A., (2018) Pneumonia Dataset Annotation Methods. RSNA Pneumonia Detection Challenge Discussion. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/64723