2022 | Excellent | Health | Pakistan | SDG3
AI-based DICOM viewer

Company or Institution

Medical Imaging and Diagnostics lab, NCAI

Sustainable Development Goals (SDGs)

SDG 3: Good Health and Well-being

General description of the AI solution

We have developed an AI-based DICOM viewer that helps radiologists analyse and diagnose breast cancer, brain tumours, and tuberculosis in mammogram, brain MRI, and chest X-ray images, respectively, using deep learning models. The diagnosis takes the form of detection and segmentation on the DICOM images.
For mammograms, the viewer locates the lesion, categorises it as benign, malignant, or normal, and also rates breast density according to the BI-RADS assessment score. For brain tumour detection, the AI model segments the tumour region across four common tumour types and predicts survival days based on the tumour's severity. Lastly, for chest X-rays, the viewer determines whether the patient's lungs are normal, affected by other diseases, or show tuberculosis.
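The mammogram workflow above ends in a three-way classification. As a minimal sketch of that final step, the snippet below maps a model's class probabilities to the viewer's labels; the class names, function name, and thresholding-by-argmax logic are illustrative assumptions, not the lab's actual model interface.

```python
# Hypothetical sketch: turning a classification model's output probabilities
# into the benign/malignant/normal label described above. The class list and
# function signature are assumptions for illustration only.

MAMMO_CLASSES = ["normal", "benign", "malignant"]

def classify_mammogram(probabilities):
    """Return the predicted label and its confidence from class probabilities."""
    if len(probabilities) != len(MAMMO_CLASSES):
        raise ValueError("expected one probability per class")
    # Pick the class with the highest probability (argmax).
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return MAMMO_CLASSES[best], probabilities[best]

label, confidence = classify_mammogram([0.1, 0.2, 0.7])
```

In practice the probabilities would come from the deep learning model's softmax output after the lesion has been localised.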

Why an AI-based DICOM viewer?
According to published research, radiologists' accuracy in evaluating radiology images lies between 50 and 70 percent. Different radiologists have also been observed to report different findings for the same image. In such cases, a highly accurate second opinion is crucial for proper diagnosis and treatment.
Radiology report writing is another time-consuming task. Our DICOM viewer can generate radiology reports automatically without requiring extra effort. The report contains the patient's details, the disease findings, the radiologist's opinion or suggestions, and the patient's radiology image with the diseased regions marked.
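The report contents listed above lend themselves to simple template filling. The sketch below shows one way this could look; the field names, template wording, and function signature are hypothetical, not the viewer's actual report format.

```python
# Hypothetical sketch of automatic report generation: filling a fixed text
# template with patient details and model findings, as described above.
# All field names and wording are illustrative assumptions.

REPORT_TEMPLATE = """Patient: {name} (ID: {patient_id})
Modality: {modality}
Findings: {findings}
Radiologist's remarks: {remarks}
"""

def generate_report(name, patient_id, modality, findings, remarks="None"):
    """Render a plain-text report from structured findings."""
    return REPORT_TEMPLATE.format(
        name=name, patient_id=patient_id, modality=modality,
        findings=findings, remarks=remarks,
    )

report = generate_report("Jane Doe", "PK-0042", "Mammogram",
                         "Malignant lesion, upper-left quadrant; BI-RADS density C")
```

A real system would additionally embed the annotated image and draw the findings directly from the model's detection and segmentation outputs.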
Our DICOM viewer will save radiologists' time and help them analyse images more quickly. A second opinion from the viewer will also support their decisions on each case and make them more confident. Ultimately, far more patients can be diagnosed each day than with manual diagnosis alone.


Usman, M., Zia, T., & Tariq, A. (2022). Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography. Journal of Digital Imaging, 1-18.

Zia, T., Murtaza, S., Bashir, N., Windridge, D., & Nisar, Z. (2022). VANT-GAN: adversarial learning for discrepancy-based visual attribution in medical imaging. Pattern Recognition Letters, 156, 112-118.

Nawaz, M., Al-Obeidat, F., Tubaishat, A., Zia, T., Maqbool, F., & Rocha, A. (2022). MDVA-GAN: multi-domain visual attribution generative adversarial networks. Neural Computing and Applications, 1-16.

Fiaz, K., Madni, T. M., Anwar, F., Janjua, U. I., Rafi, A., Abid, M. M. N., & Sultana, N. (2022). Brain tumor segmentation and multiview multiscale-based radiomic model for patient's overall survival prediction. International Journal of Imaging Systems and Technology, 32(3), 982-999.

Fatima, A., Madni, T. M., Anwar, F., Janjua, U. I., & Sultana, N. (2022). Automated 2D Slice-Based Skull Stripping Multi-View Ensemble Model on NFBS and IBSR Datasets. Journal of Digital Imaging, 35(2), 374-384.


Public Exposure

HPC resources and/or Cloud Computing Services


International Research Centre
on Artificial Intelligence (IRCAI)
under the auspices of UNESCO 

Jožef Stefan Institute
Jamova cesta 39
SI-1000 Ljubljana



