Abstract
VisualQC is a medical imaging software library aimed at enabling and improving certain challenging aspects of neuroimaging quality control (niQC). VisualQC is purpose-built for rigorous niQC and aims to greatly reduce the tedium of manual visual QC. It achieves this by seamlessly (1) presenting relevant composite visualizations while alerting the user to any outliers flagged by advanced machine learning algorithms, (2) offering an easy way to record ratings and notes, and (3) making it easy to navigate quickly through a large number of subjects. VisualQC offers a modular and extensible framework to support a wide diversity of visual niQC tasks along with some assistive automation. We demonstrate this with a few common but diverse QC use-cases targeting visual review and rating of (1) the raw image quality of structural and functional MRI scans, (2) the accuracy of anatomical segmentations, whether from Freesurfer or a generic voxel-based segmentation algorithm, (3) the accuracy of the alignment between two images (registration algorithms), and (4) the accuracy of defacing algorithms to protect patient privacy. We believe this modular and extensible set of APIs and classes will encourage the community to customize VisualQC for their own needs and ideas, and to share their implementations with the community, improving the quality of neuroimaging data and analyses.
Keywords: Quality control; visualization; QC; neuroimaging; niQC; quality assurance; QA; medical imaging; MRIqc; software library; outlier detection;
DOI: 10.52294/e130fcd2-ce83-4222-856d-c82022013a50
STATEMENT OF NEED
Neuroimaging data, be it a raw acquisition (fMRI or T1w MRI) or derived outputs (cortical thickness, subcortical segmentation), are complex, and hence assessing their quality requires manual visual inspection. To ensure the assessment is accurate, this inspection needs to go beyond a few random slices to cover the full scope of the object being inspected, which often requires reviewing multiple views and many slices. Often, looking at the data alone is not sufficient to spot subtle errors; statistical measurements (across space or time) can greatly assist in rating image quality or the severity of any artefacts spotted.
This manual process, even in its simplest form, is cumbersome and time consuming. Without any assistive tool, it requires a long series of slow manual actions: opening an MRI volume along with its anatomical segmentation and/or cortical surface overlays, coloring them appropriately, reviewing one slice at a time, navigating through many slices, and carefully recording the rating in a complex spreadsheet. This process often needs to be repeated for every single subject in a large dataset. Even more demanding tasks (such as assessing the accuracy of cortical thickness estimates, e.g., those generated by Freesurfer, or reviewing an echo-planar imaging (EPI) sequence) may require reviewing multiple types of visualizations (such as surface renderings of the pial surface, or carpet plots with specific temporal statistics for fMRI) in addition to voxel-wise data. Without an assistive tool, this process invites too many human mistakes, as the user flips through hundreds of subjects over many weeks, jumping between multiple visualization programs and spreadsheets. Moreover, careful use of outlier detection techniques on dataset-wide statistics (across all the subjects in a dataset) can help identify subtle errors (such as a small region of interest (ROI) with an unrealistic thickness value) that would otherwise go undetected.
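To illustrate the kind of dataset-wide outlier check described above (a simplified sketch, not VisualQC's actual implementation, which uses more advanced machine learning algorithms), a robust median/MAD rule can flag a subject whose ROI thickness deviates strongly from the rest of the dataset:

```python
from statistics import median

def flag_outliers(values, threshold=3.5):
    """Flag values whose modified z-score (based on the median and the
    median absolute deviation, MAD) exceeds the given threshold.
    Returns the indices of the flagged values."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread: nothing can be flagged robustly
    # 0.6745 scales the MAD to be consistent with the standard deviation
    scores = [0.6745 * (v - med) / mad for v in values]
    return [i for i, s in enumerate(scores) if abs(s) > threshold]

# mean cortical thickness (mm) per subject; subject at index 3 is implausible
thickness = [2.4, 2.5, 2.6, 9.8, 2.5, 2.4, 2.6]
print(flag_outliers(thickness))  # → [3]
```

Such a flag would not replace visual review; it only prompts the rater to scrutinize the flagged subject more closely.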
VisualQC, purpose-built for rigorous neuroimaging quality control (niQC), aims to reduce this laborious process to a single command via the seamless presentation of relevant composite visualizations, alerting the user to any outliers, offering an easy way to record the ratings, and helping users navigate through hundreds of subjects with ease. It has been used and cited in diverse use-cases (1–9).
Neuroimaging researchers have already developed assistive tools for different quality-control (QC) and quality-assurance (QA) tasks and modalities over the years: some visual, some interactive, some automatic, and others in between. We collect and categorize these tools in the Resources section of the International Neuroinformatics Coordinating Facility (INCF) Special Interest Group (SIG) on niQC (10), at https://incf.github.io/niQC/tools. The relevant available citations are Refs. (8,11–23,26).
TARGET AUDIENCE
The VisualQC library is designed to assist in several QC use-cases in the context of medical imaging research wherever a visual review is a key component. Given the diversity of neuroimaging QC tasks and the small variations in how they are applied in different contexts, it is also designed to be modular and easily extensible, letting users customize it to meet their needs and suit their preferences. While enabling accurate and quick visual assessment of the data is the primary goal, VisualQC also performs many related functions to reduce the burden on the rater. These include prompts for additional scrutiny (e.g., via outlier alerts), a customizable rating system, and free-form note taking, all well integrated into the workflow. Such integration not only reduces the burden but also prevents mistakes such as rating the wrong items.
The target audience for this QC library, as the name implies, is the medical imaging research community needing to perform visual review, rating, and/or QC. The size of the data set is not a factor, as the purpose-built integration reduces the burden on the human rater even when reviewing only 10 subjects, although most data sets nowadays are orders of magnitude larger.
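The integrated rating and note recording described above can be sketched as a minimal subject-keyed log. This is a hypothetical format for illustration only, not VisualQC's actual on-disk format; keying every row by subject ID is what prevents a rating from being attached to the wrong item:

```python
import csv
from pathlib import Path

RATINGS_FILE = Path("ratings.csv")  # hypothetical location

def save_rating(subject_id, rating, notes=""):
    """Append one rating row, keyed by subject ID, so a rating can
    never be attached to the wrong subject."""
    is_new = not RATINGS_FILE.exists()
    with RATINGS_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["subject_id", "rating", "notes"])
        writer.writerow([subject_id, rating, notes])

def load_ratings():
    """Read back prior ratings, e.g., to resume an interrupted session."""
    if not RATINGS_FILE.exists():
        return {}
    with RATINGS_FILE.open(newline="") as f:
        return {row["subject_id"]: row["rating"] for row in csv.DictReader(f)}

save_rating("sub-001", "pass", "mild ringing artefact")
save_rating("sub-002", "fail", "severe motion")
print(load_ratings())  # → {'sub-001': 'pass', 'sub-002': 'fail'}
```

Reading the log back at startup is what allows a rater to stop and resume partway through a data set of hundreds of subjects.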
USE-CASES SUPPORTED
VisualQC currently supports the following use-cases:
• Functional MRI scans
  ◦ raw scan quality
  ◦ identification of artefactual frames (e.g., with motion, spikes, etc.)
• Freesurfer QC
  ◦ cortical parcellations (accuracy of pial/white surfaces on T1w MRI)
  ◦ subcortical segmentations (voxel-wise anatomical accuracy)
• Structural T1w MRI scans
  ◦ overall quality rating
  ◦ artefact identification and severity rating
• Volumetric segmentation accuracy (against T1w MRI)
  ◦ subcortical structures
  ◦ tissue segmentation (grey matter, white matter, or cerebrospinal fluid (CSF) masks)
  ◦ any other generic volumetric segmentation
• Registration quality (spatial alignment)
  ◦ within a single modality (multimodal support coming), for example, T1w to T1w, EPI to EPI
• Defacing or skull-stripping algorithm accuracy
The design of the library and its existing classes enables us to support new use-cases relatively easily; hence, we plan to offer the following additional use-cases and/or features in the future, as resources and contributions permit:
• Functional MRI scans
  ◦ quality control of the impact of different pre-processing steps
• Freesurfer QC
  ◦ ability to correct the parcellation errors identified
• Structural T1w MRI scans
  ◦ artefact-specific advanced visualizations
• Volumetric segmentation accuracy (against T1w MRI)
• Cross-modal/multimodal registration quality (accuracy of spatial alignment)
  ◦ for example, alignment between T1w and EPI, PET and T1w, and so on
We also strongly encourage everyone to contribute what they can towards improving the existing modules or creating new ones to fit their needs and preferences. One specific area where we request help is the ability to generate 3D surface visualizations (for example, from the pial and white surfaces of Freesurfer output) natively in Python, without relying on external calls to non-Python dependencies, to keep the workflow seamless and manageable.
SOFTWARE WORKFLOW
The graphical abstract in Figure 1 outlines the VisualQC workflow in broad strokes: Panel (A) captures the key features of this library and their order; Panel (B) shows the different use-cases already supported; and Panel (C) shows an example interface demonstrating how the library can be used to create sophisticated multi-layer data visualizations for easy review of data quality. It is worth noting that this is a generic workflow that can be employed for a diversity of visual QC tasks in any medical imaging modality, including but not limited to neuroimaging.

Fig. 1. Graphical abstract outlining the VisualQC workflow in broad strokes. Panel (A) captures the key features and their order; this generic workflow can be employed for a diversity of visual QC tasks in any medical imaging modality, including but not limited to neuroimaging. Panel (B) shows the different use-cases already supported. Panel (C) shows an example interface demonstrating how the library can be used to create a sophisticated multi-layer data visualization targeting easy review of data quality.
A few different versions of this work are being cited by users, including the original deposition on Zenodo (24), made for the purpose of obtaining a DOI, and the namesake protocol for Freesurfer parcellations (25). We request that users cite this paper when citing the software library specifically.
TESTING AND VALIDATION
As this library is primarily an interactive graphical user interface (GUI), testing of its functionality is performed manually by the developers. Typical testing includes running the different modules on a few example data sets included in the repository and ensuring the various features behave as expected: for example, checking the accuracy of the various data visualization layers, verifying that keyboard and mouse inputs carry out the corresponding actions, and confirming that widgets behave correctly. While this could be automated via sophisticated GUI testing frameworks, we are unable to do so at the moment for lack of the relevant expertise, time, and resources. However, given its extensive usage by various users in different studies, the features work as advertised to the best of our knowledge. We have fixed a number of bugs reported by users over the years, and we will continue to do so.
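As an illustration of how such GUI testing could eventually be automated, keyboard handlers can be exercised programmatically without a display by dispatching simulated events and asserting on the resulting state. The sketch below uses a minimal hypothetical stand-in for a rating interface; none of these class, method, or key names are VisualQC's actual API:

```python
class RatingInterface:
    """Minimal hypothetical stand-in for a GUI rating widget, showing
    how keyboard handlers can be tested without a display."""

    RATING_KEYS = {"1": "pass", "2": "borderline", "3": "fail"}

    def __init__(self, subject_ids):
        self.subjects = list(subject_ids)
        self.current = 0       # index of the subject on screen
        self.ratings = {}      # subject ID -> rating

    def on_key_press(self, key):
        """Handle a (simulated) key press, as a GUI event loop would."""
        if key in self.RATING_KEYS:
            self.ratings[self.subjects[self.current]] = self.RATING_KEYS[key]
        elif key == "right" and self.current < len(self.subjects) - 1:
            self.current += 1  # navigate to the next subject

# smoke test: simulate a rater reviewing two subjects
ui = RatingInterface(["sub-001", "sub-002"])
ui.on_key_press("1")        # rate the first subject as pass
ui.on_key_press("right")    # navigate to the next subject
ui.on_key_press("3")        # rate the second subject as fail
assert ui.ratings == {"sub-001": "pass", "sub-002": "fail"}
print("GUI smoke test passed")
```

A real automated suite would dispatch events through the GUI toolkit's own event machinery (e.g., a testing backend) rather than calling handlers directly, but the assertion pattern is the same.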
RESOURCES AND SUPPORT
The software is maintained and supported via an open-source and collaborative workflow hosted on GitHub at https://github.com/raamana/visualqc, where it is version tracked. It is fully open source, released under the Apache 2.0 licence. Users and developers can open issues at the library’s repository for anything relating to the software, including but not limited to bugs, feature requests, new contributions, and open discussions. The guidelines for contributing to VisualQC are noted in the CONTRIBUTING.rst file in the root folder. We follow prevalent best practices in coding style, including the Python Enhancement Proposal (PEP) 8 style guide for formatting requirements. The developers and maintainers of the software respond to these events as and when they are able to. This library follows the Contributor Covenant Code of Conduct (version 1.4), noted in the CODE_OF_CONDUCT.md file in the root folder.
While this paper is intended to be the official citation for the software library, a few other versions of the software have already been cited, including the early deposition on Zenodo (24) and a protocol comparison study (25).
ACKNOWLEDGEMENTS
We would like to express our gratitude to Prof. Stephen Strother and the members of the Strother Laboratory at the Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada, for their support of the ideas behind this work; the initial design and development of this software library took place there. We also acknowledge the receipt of a scholarship award from the Canadian Open Neuroscience Platform (CONP), whose proposal was based partly on this software.
REFERENCES
1. Bottani S, Burgos N, Maire A, Wild A, Ströer S, Dormont D, et al. Automatic quality control of brain T1-weighted magnetic resonance images for a clinical data warehouse. arXiv:2104.08131 [cs, eess] [Internet]. 2021 Apr 16 [cited 2021 Aug 12]. Available from: http://arxiv.org/abs/2104.08131
2. Chan NK, Gerretsen P, Chakravarty MM, Blumberger DM, Caravaggio F, Brown E, et al. Structural brain differences between cognitively impaired patients with and without apathy. Am J Geriatr Psychiatry. 2021 Apr;29(4):319–32.
3. Dufford AJ, Evans GW, Liberzon I, Swain JE, Kim P. Childhood socioeconomic status is prospectively associated with surface morphometry in adulthood. Dev Psychobiol. 2021 Jul;63:1589–96.
4. Frässle S, Aponte EA, Bollmann S, Brodersen KH, Do CT, Harrison OK, et al. TAPAS: An open-source software package for translational neuromodeling and computational psychiatry. Front Psychiatry. 2021 Jun 2;12:680811.
5. Gadewar S, Zhu AH, Thomopoulos SI, Li Z, Gari IB, Maiti P, et al. Region specific automatic quality assurance for MRI-derived cortical segmentations. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI) [Internet]. Nice, France: IEEE; 2021 [cited 2021 Aug 13]. pp. 1288–91. Available from: https://ieeexplore.ieee.org/document/9433755/
6. Monereo-Sánchez J, de Jong JJA, Drenthen GS, Beran M, Backes WH, Stehouwer CDA, et al. Quality control strategies for brain MRI segmentation and parcellation: Practical approaches and recommendations - insights from the Maastricht study. NeuroImage. 2021 Aug;237:118174.
7. Sampathkumar VR. ADiag: Graph neural network based diagnosis of Alzheimer’s disease. arXiv:2101.02870 [cs, eess] [Internet]. 2021 Jan 8 [cited 2021 Aug 13]. Available from: http://arxiv.org/abs/2101.02870
8. Williams B, Lindner M. pyfMRIqc: A software package for raw fMRI data quality assurance. J Open Res Softw. 2020 Oct 7;8:23.
9. Wiseman SJ, Meijboom R, Valdés Hernández M del C, Pernet C, Sakka E, Job D, et al. Longitudinal multi-centre brain imaging studies: guidelines and practical tips for accurate and reproducible imaging endpoints and data sharing. Trials. 2019 Dec;20(1):21.
10. Raamana PR, Strother SC, International Neuroinformatics Coordinating Facility (INCF). Special Interest Group (SIG) on NeuroImaging Quality Control (niQC). 2018. Available from: https://incf.github.io/niQC/
11. Bastiani M, Cottaar M, Fitzgibbon SP, Suri S, Alfaro-Almagro F, Sotiropoulos SN, et al. Automated quality control for within and between studies diffusion MRI data using a non-parametric framework for movement and distortion correction. NeuroImage. 2019 Jan 1;184:801–12.
12. Bollmann S, Kasper L, Pruessmann K, Barth M, Stephan K. Interactive and flexible MRI data evaluation: The uniQC toolbox [Internet]. Available from: https://github.com/CAIsr/uniQC
13. Connolly A, Halchenko Y. Automated capture of audio-visual stimuli into BIDS datasets [Internet]. Zenodo; 2022 [cited 2022 Mar 14]. Available from: https://zenodo.org/record/6354036
14. Esteban O, Birman D, Schaer M, Koyejo OO, Poldrack RA, Gorgolewski KJ. MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites. PLoS ONE. 2017 Sep 25;12(9):e0184661.
15. Hellrung L, van der Meer J, Bergert S, Sladky R, Pamplona GS, Scharnowski F, et al. rtQC: An open-source collaborative framework for quality control methods in real-time functional magnetic resonance imaging. 2017 Nov 29 [cited 2021 Dec 27]. Available from: https://zenodo.org/record/1311610
16. Ito KL, Kumar A, Zavaliangos-Petropulu A, Cramer SC, Liew SL. Pipeline for Analyzing Lesions After Stroke (PALS). Front Neuroinformatics. 2018;12:63.
17. Keshavan A, Datta E, McDonough IM, Madan CR, Jordan K, Henry RG. Mindcontrol: A web application for brain segmentation quality control. NeuroImage. 2018 Apr;170:365–72.
18. Keshavan A, Yeatman JD, Rokem A. Combining citizen science and deep learning to amplify expertise in neuroimaging. Front Neuroinformatics. 2019;13:29.
19. Klapwijk ET, van de Kamp F, van der Meulen M, Peters S, Wierenga LM. Qoala-T: A supervised-learning tool for quality control of FreeSurfer segmented MRI data. 2018 Aug 6 [cited 2018 Oct 25]. Available from: https://www.sciencedirect.com/science/article/pii/S1053811919300138
20. Leemans A, Jeurissen B, Sijbers J, Jones DK. ExploreDTI: a graphical toolbox for processing, analyzing, and visualizing diffusion MR data. Proceedings of the International Society for Magnetic Resonance in Medicine. 2009;17(1):3537.
21. Mutsaerts HJMM, Petr J, Groot P, Vandemaele P, Ingala S, Robertson AD, et al. ExploreASL: An image processing pipeline for multi-center ASL perfusion MRI studies. NeuroImage. 2020 Oct 1;219:117031.
22. Oguz I, Farzinfar M, Matsui J, Budin F, Liu Z, Gerig G, et al. DTIPrep: Quality control of diffusion-weighted images. Front Neuroinformatics [Internet]. 2014 Jan 30;8. Available from: http://journal.frontiersin.org/article/10.3389/fninf.2014.00004/abstract
23. Urchs S, Armoza J, Benhajali Y, Bellec P. dashqc-fmri: An interactive web dashboard for manual quality control. In: Sixth Biennial Conference on Resting State and Brain Connectivity; 2018; Montreal, Canada.
24. Raamana PR. VisualQC: Assistive tools for easy and rigorous quality control of neuroimaging data. 2018 Apr 2 [cited 2020 Mar 13]. Available from: https://zenodo.org/record/1211365
25. Raamana PR, Theyers A, Selliah T, Bhati P, Arnott SR, Hassel S, et al. Visual QC protocol for FreeSurfer cortical parcellations from anatomical MRI [Internet]. bioRxiv; 2020 Sep [cited 2020 Sep 10]. Available from: http://biorxiv.org/lookup/doi/10.1101/2020.09.07.286807
26. Oguz I, Farzinfar M, Matsui J, Budin F, Liu Z, Gerig G, et al. DTIPrep: Quality control of diffusion-weighted images. Front Neuroinformatics [Internet]. 2014 Jan 30;8. Available from: http://journal.frontiersin.org/article/10.3389/fninf.2014.00004/abstract
This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits authors to copy and redistribute the material in any medium or format, and to remix, transform, and build upon the material, for any purpose, even commercially.