Decision Making in Surveillance of High-Grade Gliomas Using Perfusion MRI as Adjunct to Conventional MRI and Artificial Intelligence.

Copyright © 2019 by American Society of Clinical Oncology
Journal of Clinical Oncology. 2019 May;37(15_suppl). doi: 10.1200/JCO.2019.37.15_suppl.2054

Abstract

BACKGROUND:
Surveillance of High-Grade Gliomas (HGGs) remains a major challenge in clinical neuro-oncology. Histopathological validation is not an option during the course of disease, and imaging surveillance suffers from the ambiguous appearance of both disease progression and treatment-related changes. This study aimed to differentiate between Pseudoprogression (PsP) and Progressive Disease (PD) using an artificial intelligence (support vector machine – SVM) classification algorithm.
METHODS:
Two groups of patients with histologically proven HGGs were analysed: a group with single time point DSC perfusion MRI (45 patients) and a group with multiple time point DSC perfusion MRI (19 patients). Both groups included conventional MRI studies prior to and after each perfusion MRI. This study design aimed to replicate decision making in clinical practice, in which multiple previous studies are available for each patient. SVM training was performed with all available MRI studies for each group, and classification was based on different feature datasets from a single time point or from multiple time points (subtracted features). Classification accuracy comparisons were performed by calculating prediction error rates for the different feature datasets and time point analyses.
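The subtracted-feature construction can be sketched in a few lines. The feature names and values below are hypothetical, chosen purely for illustration; they are not taken from the study.

```python
# Hypothetical perfusion/structural features for one patient at two
# time points (names and values are illustrative only).
baseline = {"rCBV_mean": 2.1, "rCBV_max": 5.4, "enh_volume_ml": 12.3}
final = {"rCBV_mean": 3.0, "rCBV_max": 7.1, "enh_volume_ml": 15.0}

# A "subtracted" feature dataset captures the change between time
# points; vectors like this could then be fed to a classifier.
subtracted = {name: final[name] - baseline[name] for name in baseline}
print(subtracted)
```

Each patient then contributes one change vector per pair of time points, which is one way a multi-time-point series can be reduced to a single feature dataset for training.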
RESULTS:
Our results indicate that adding multiple time point perfusion MRI to structural (conventional plus gadolinium-enhanced sequences) MRI features yields the best classification performance (median error rate: 0.016, lowest value dispersion). Subtracted feature datasets improved classification performance, most prominently when the final and first perfusion studies were included in the analysis. In contrast, in the single time point group, structural feature-based classification performed best (median error rate: 0.012).
CONCLUSIONS:
Validation of our results with a larger patient cohort may have significant clinical importance in optimising imaging surveillance and clinical decision making for patients with HGG.

Radiomics in Clinical Trials – The Rationale, Current Practices, and Future Considerations

Radiomics involves deep quantitative analysis of radiological images for structural and/or functional information.
– It is a phenomic assessment of disease to understand lesion microstructure, microenvironment and molecular/cellular function.
– In oncology, it helps us accurately classify, stratify and prognosticate tumors based on if, how and when they transform, infiltrate, involute or metastasize.
– Utilizing radiomics in clinical trials is exploratory, not an established end-point.
– Integrating radiomics into an imaging-based clinical trial requires a streamlined workflow to handle large datasets, robust platforms to accommodate machine learning computations, and seamless incorporation of derived insights into the outcome metrics.

Augmented Versus Artificial Intelligence for Stratification of Patients with Myositis

With interest we read the recent article by Pinal-Fernandez and Mammen,1 which comments on the paper by Spielmann et al2 and to a lesser extent on the contribution by Mariampillai et al3 4 and raises concerns about the artificial intelligence (AI)-driven approach used to define subgroups of patients with idiopathic inflammatory myopathy (IIM).

To illustrate this, Pinal-Fernandez and Mammen constructed a library of 1000 observations by simulating the four variables from a multivariate normal distribution, and found clustering similar to that in the original paper by Spielmann et al.2 We share some of the concerns about unsupervised learning techniques raised by Pinal-Fernandez and Mammen.1 In this letter, we would like to highlight several aspects related to AI-driven methodologies.

Machine learning (ML) is a subset of AI that enables a computer to make decisions based on large datasets. When applied to clustering, it will always return an ‘optimal’ solution for the number of clusters ‘present’ in a dataset; it remains the human user’s responsibility to judge whether those clusters really exist. A clustering algorithm partitions the dataset into subgroups by optimising two criteria: (1) maximising the separation between clusters and (2) minimising, within each cluster, the distance of every point to its cluster centre. In other words, the goal is tight individual clusters that are clearly distinguishable from one another. On any dataset, the algorithm will present a solution that is optimal by those or similar criteria, but that does not always mean the clusters are truly significant or meaningful.
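This caveat can be demonstrated directly. The minimal sketch below (pure Python, one-dimensional data; all names are ours, not from the letter) runs a basic k-means on uniform random noise that contains no real clusters: the algorithm still returns an ‘optimal’ partition for every choice of k, and the within-cluster distance (inertia) keeps falling as k grows, yet none of the clusters are meaningful.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal 1-D k-means; returns (centres, inertia)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: (p - centres[i]) ** 2)].append(p)
        # Move each centre to the mean of its assigned points.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    # Inertia: total squared distance of points to their nearest centre.
    inertia = sum(min((p - c) ** 2 for c in centres) for p in points)
    return centres, inertia

# Structureless data: uniform noise with no true clusters.
rng = random.Random(42)
data = [rng.uniform(0, 1) for _ in range(200)]

# An "optimal" partition is returned for every k, and inertia always
# decreases with k -- neither fact proves the clusters are real.
for k in (2, 3, 4):
    _, inertia = kmeans(data, k)
    print(k, round(inertia, 3))
```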

Visualising the clusters using dimensionality reduction techniques such as principal component analysis or t-distributed stochastic neighbour embedding is vital for this process, in addition to more quantitative methods such as comparing intracluster variation, intercluster variation and silhouette scoring. That is why researchers using ML should ideally be ‘bilingual’ and understand both the mathematics and algorithms, as well as the science and clinical meaning behind the results.
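As a worked example of one such quantitative check, the sketch below computes the mean silhouette coefficient for one-dimensional points (pure Python, illustrative data of our own): well-separated clusters score near +1, while overlapping or arbitrary partitions score near 0 or below.

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for 1-D points with integer labels.
    Assumes every cluster contains at least two points."""
    score = 0.0
    clusters = set(labels)
    for i, (p, lab) in enumerate(zip(points, labels)):
        # a: mean distance to the other points in the same cluster.
        same = [abs(p - q) for j, (q, m) in enumerate(zip(points, labels))
                if m == lab and j != i]
        a = sum(same) / len(same)
        # b: smallest mean distance to the points of any other cluster.
        b = min(sum(abs(p - q) for q, m in zip(points, labels) if m == other)
                / labels.count(other)
                for other in clusters if other != lab)
        score += (b - a) / max(a, b)
    return score / len(points)

# Two tight, well-separated clusters: the score is close to 1.
tight = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette(tight, labels), 2))
```

A low or negative mean silhouette is one concrete signal that an algorithm’s ‘optimal’ clusters may not correspond to real subgroups.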

To conclude, we emphasise that ML undoubtedly has the potential to improve the stratification of patients with IIM if certain concepts of data science are followed, as also pointed out by a task force of the European League Against Rheumatism on big data and AI.5 ML relies on large, standardised and curated datasets, which in turn require large patient cohorts. Given the rarity of IIM, large collaborative cohorts (such as MyoNet/EuroMyositis)6 are needed to generate quality data. Once larger and curated datasets are available, the ML approach is a powerful complement to human judgement and can improve future classification criteria for IIM.4 7 8 Today, we argue for the use of ML alongside expert judgement, relying on augmented decision making for the final call on patient stratification, especially when building AI-based models. Augmented intelligence has the potential to improve patient stratification in IIM.

Decision Making in Surveillance of High-Grade Gliomas Using Perfusion MRI as Adjunct to Conventional MRI and Artificial Intelligence.

IAG & UCL poster for the 2019 ASCO Annual Meeting


Technical Challenges in the Clinical Application of Radiomics

Radiomics is a quantitative approach to medical image analysis aimed at deciphering the morphologic and functional features of a lesion. Radiomic methods can be applied across various malignant conditions to identify tumor phenotype characteristics in the images that correlate with patients' likelihood of survival and with the underlying tumor biology. Identifying this set of characteristic features, called the tumor signature, holds tremendous value in predicting the behavior and progression of cancer, which in turn has the potential to predict its response to various therapeutic options. We discuss the technical challenges encountered in the application of radiomics, in terms of methodology, workflow integration, and user experience, that need to be addressed to harness its true potential.