ORIGINAL ARTICLE
Year : 2022  |  Volume : 2  |  Issue : 2  |  Page : 113-119

Prostate-specific Membrane Antigen Positron Emission Tomography (PSMA-PET) and Gleason grading system based Artificial Intelligence (AI) model in diagnosis and staging of prostate cancer


1 Department of Healthcare Solutions, Philips India Research, Manyata, Tech Park, Bengaluru, Karnataka, India
2 Department of Radiation Oncology, Healthcare Global Enterprises Ltd, Bengaluru, Karnataka, India

Date of Submission: 14-Sep-2022
Date of Decision: 04-Oct-2022
Date of Acceptance: 04-Oct-2022
Date of Web Publication: 06-Feb-2023

Correspondence Address:
Dr. Dinesh M Siddu
Healthcare Solutions-Research, Philips India Pvt. Ltd, Manyata Tech Park, Nagavara, Bengaluru - 560 045, Karnataka
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpo.jpo_15_22

  Abstract 


Introduction: Prostate cancer is the second-most commonly occurring cancer and the fifth leading cause of cancer death among men worldwide. Early diagnosis and treatment planning are crucial in reducing prostate cancer mortality. Gleason grading has long been the most widely used prostate cancer prognostic tool; it determines the aggressiveness of the cancer to guide treatment options. The process requires highly trained pathologists to examine multiple biopsy samples under the microscope and assign a grade to the cancer based on its severity.
Methods: In this work, we add Gleason grading capability to prostate-specific membrane antigen positron emission tomography/computed tomography (PSMA PET-CT) scans by classifying tumor habitats as aggressive or indolent. Tagging habitats with a Gleason grade supports biopsy planning, so that the right tissue samples are extracted, and helps target aggressive tumors during radiation therapy. We developed a machine learning model that automatically classifies tumor habitat regions of interest from PSMA PET and CT imaging data into indolent and aggressive Gleason grade groups.
Results: 68Ga-PSMA PET-CT scans are very effective in detecting distinct habitats within the tumor, each with a specific volume and a specific combination of flow, cell density, necrosis, and edema. Habitat distribution obtained through tumor heterogeneity analysis can help discriminate between prostate cancers that progress quickly and those that are more indolent.
Conclusion: We developed an AI model that classifies tumor habitats within the gross tumor volume into indolent and aggressive types, using ground truth generated from Gleason grade groups on pathology samples by Healthcare Global Cancer Hospital, Bangalore, India. Habitat analysis helps radiotherapists target active tumor cells within the gross tumor volume and supports selection of the right tissue for biopsy. The current model achieves an overall accuracy of 90% on test data.

Keywords: Aggressive, Gleason grade groups, indolent, prostate cancer, prostate-specific membrane antigen positron emission tomography–computed tomography, tumor habitats, tumor heterogeneity


How to cite this article:
Siddu DM, Pawar A, Lohith G, Sekar K. Prostate-specific Membrane Antigen Positron Emission Tomography (PSMA-PET) and Gleason grading system based Artificial Intelligence (AI) model in diagnosis and staging of prostate cancer. J Precis Oncol 2022;2:113-9





  Introduction


Prostate cancer is the second leading cause of cancer death among men in the United States. It is also the second-most commonly occurring cancer and the fifth leading cause of cancer death among men worldwide.[1],[2],[3],[4] Although prostate cancer is one of the most commonly occurring cancers, patient survival rates are high when it is detected at an early stage, owing to the slow progression of the disease.[3] Therefore, early detection and effective surveillance are essential to improve patient survival.[4]

Over 90% of prostate cancers overexpress prostate-specific membrane antigen (PSMA), and these tumor cells can be accurately tagged for diagnosis with 68Ga-PSMA positron emission tomography/computed tomography (68Ga-PSMA PET/CT) imaging. This novel molecular imaging modality appears clinically to have superseded CT, and appears superior to MR imaging, for the detection of metastatic disease. 68Ga-PSMA PET/CT can identify the stage of prostate cancer with high reliability at presentation and can help in selecting optimal treatment plans.[3],[4]

Different habitats are present within the tumor, each with a distinct volume and a specific combination of flow, cell density, necrosis, and edema.[2] Analyzing habitat distribution in patients with prostate cancer can help discriminate between cancers that progress quickly and those that are more indolent. Classifying habitats based on Gleason grade information from pathology image analysis and mapping them to PSMA PET-CT images with good accuracy will help in treatment planning. This regional analysis deepens the understanding of tumor heterogeneity.[3],[4],[5],[6],[7]

In the current clinical workflow, to detect prostate cancer and assign a severity score, doctors examine whole-slide pathology images (WSIs) generated from biopsies taken at multiple locations in the organ. Reviewing a WSI is a monotonous and tedious task even for expert pathologists because of its huge size. If a biopsy sample is not taken from the correct region, its tissue analysis may return a false negative. In this work, we developed an AI-based model that presents doctors with clinically significant (aggressive) habitat regions and their Gleason grades derived from PSMA PET/CT imaging data, so that biopsy samples can be extracted from the correct region and false negatives reduced. The corresponding Gleason grade maps can support doctors in making the right decisions for biopsy sampling and therapy planning. The proposed model contains a classifier that labels a given tumor region of interest (ROI) as aggressive or indolent. The major challenges addressed in this work are as follows:

  1. Tagging habitats with a Gleason grade to categorize them as aggressive or indolent
  2. Prostate biopsies carry a risk of false negatives, and patient characteristics can affect the cancer detection rate of the biopsy
  3. Multi-parametric magnetic resonance imaging (MRI) analysis is less sensitive than PSMA PET-CT
  4. Standardized uptake values (SUVs) vary across vendors depending on detector type, physiological conditions, and acquisition and reconstruction parameters.



  Solution Approach


Gleason grades, ranging from 1 to 5, indicate the severity of prostate cancer, as shown in [Table 1]; a higher grade indicates higher severity. A Gleason score is generated by summing two Gleason grades. Because high-grade cancer is generally surrounded by regions of lower-grade cancer, and because grades 1 and 2 are rarely assigned nowadays, Gleason scores in practice range from 6 (3 + 3) to 10 (5 + 5). We frame Gleason grade grouping as a binary classification problem (indolent vs. aggressive) with handcrafted features from PSMA PET and CT images as inputs. We consider a Gleason score ≥7 as aggressive and a Gleason score ≤6, including normal cases (i.e., a score of 0), as indolent.
Table 1: Gleason grade group, score, and pattern



International Society of Urological Pathology prostate cancer grade groups

Major parts of the proposed solution are listed below:

  1. Normalize PSMA SUV values with respect to liver tissue to support multiple vendors' machine variants, since detectors with different scintillating crystal materials (such as LSO, BGO, and LYSO) and the physiological condition of the patient cause variation in SUV uptake
  2. Fuse PSMA PET and CT modalities at the feature level
  3. Train a machine learning model to map features to Gleason scores
  4. Classify habitats into aggressive and indolent.


The detailed solution is explained in further sections.
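To make the labeling rule above concrete, the following minimal Python sketch (our illustration, not code from the study) maps a Gleason score to the binary class used for training:

```python
# Minimal sketch of the labeling rule: Gleason score >= 7 -> aggressive,
# score <= 6 (including normal cases with score 0) -> indolent.
def gleason_to_label(gleason_score: int) -> str:
    return "aggressive" if gleason_score >= 7 else "indolent"

assert gleason_to_label(0) == "indolent"    # normal case
assert gleason_to_label(6) == "indolent"    # 3 + 3
assert gleason_to_label(7) == "aggressive"  # 3 + 4 or 4 + 3
```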


  State of the Art


Automated Gleason grading has been an active research problem for a long time, and much work has been done using histopathology and MRI images; in the literature, we found very few articles on lesion grading using PSMA PET-CT data. Recently,[5] the authors worked on cancer versus noncancer classification using pelvis CT images. They first enhanced the images using morphological operations to strengthen structure, a high-pass filter in the frequency domain, and a median filter in the spatial domain. After preprocessing, they extracted radiomic features, particularly texture features based on the gray-level co-occurrence matrix (GLCM), and trained random forest (RF) and logistic regression (LR) models. Comparing the results, they concluded that LR performed better than RF and that classification from CT images is possible with a good accuracy of around 90%. The limitation is that their classifier separates cancer from noncancer but says nothing about the severity of the cancer.

In Zhong et al.'s study,[6] aggressive versus indolent prostate cancer was classified using MRI images. First, bounding-box images were cropped based on doctor-given ROIs. Two ResNet models pretrained on CIFAR10 data were then used to extract features from T2-weighted and ADC MRI images; the output features of both networks, together with peripheral zone information, were concatenated and fed to a fully connected (FC) layer with softmax to produce the output probability. They reported an accuracy of 72% for the deep transfer learning (DTL)-based model.

Similarly, Yuan et al.[7] classified clinically high-grade cancer (Gleason score = 4 + 3, 4 + 4, 3 + 5, 5 + 3) versus low-grade cancer (Gleason score = 3 + 3, 3 + 4) using different MRI sequences. They extracted features from T2 axial, T2 sagittal, and ADC MRI images using the pretrained deep learning-based AlexNet architecture, concatenated the features from the three models, and applied an FC layer at the end with a proposed custom loss function that jointly constrains softmax loss and image similarity loss during fine-tuning. They achieved an accuracy of around 86% for high-grade versus low-grade classification.

In another study,[1] Yoo et al. developed a convolutional neural network (CNN)-based pipeline to classify clinically significant versus nonsignificant cases (including normal cases), achieving an area under the curve of 84% using diffusion-weighted MRI images. ROIs were extracted using automated center cropping instead of manual annotation and then fed to a three-stage pipeline. In the first stage, each slice is classified and scored by five individually trained ResNet networks, yielding five vectors with as many elements as there are slices. In the second stage, first-order statistical features are extracted from these CNN outputs, and feature selection is performed with a decision-tree-based selector. The feature sets from the five CNNs are then combined, and a random forest classifier is trained on the combined set to produce the final patient-level output.

Although more work exists on MR imaging, a recent study[8] concluded that PSMA PET/CT detects prostate cancer lesions with higher sensitivity. The authors performed chart reviews of 50 patients who underwent radical prostatectomy to compare the performance of the two modalities.


  Dataset Description


A cohort of 94 patients from HCG hospital who underwent biopsy was included. Each patient's data consist of a CT volume, a CT ROI mask, and a PET volume. The CT has shape 335 × 512 × 512 and the PET has shape 335 × 192 × 192. Labels, i.e., Gleason score details, were obtained from the provided RTSTRUCT file. For each lesion region of interest (ROI), as shown in [Figure 1], a Gleason score of 6, 7, 8, 9, or 10 was assigned from the biopsy report. [Table 2] shows the distribution of the full dataset across Gleason scores.
Figure 1: Sample data: (a) CT image, (b) corresponding PET image, and (c) given ROI mask. CT: Computed tomography, PET: Positron emission tomography, ROI: Region of interest

Table 2: Counts of different Gleason scores present in the dataset



As mentioned earlier, we consider Gleason score >6 as aggressive or clinically significant and Gleason score ≤6 as indolent or clinically not significant.[1] This yields 21 indolent and 73 aggressive cases. Because this split is very imbalanced, we added 49 additional normal ROI cases to the indolent class, so that the indolent class consists of Gleason 6 and normal cases (70 in total) against 73 aggressive cases. Of these 143 cases, 80% were used for cross-validation and the remaining 20% were held out as a separate test set. [Table 3] describes the class-wise division in the train and test sets.
Table 3: Class-wise data split in train and test sets

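Assuming the features and labels are held in arrays, the 80/20 class-stratified split described above could be produced as in the following sketch (scikit-learn; the placeholder arrays are ours):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the 143 ROI feature vectors and labels.
X = np.random.rand(143, 32)            # 143 cases, 32 hypothetical features
y = np.array([0] * 70 + [1] * 73)      # 0 = indolent (70), 1 = aggressive (73)

# 80% for cross-validation, 20% held out; stratify preserves the class ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42
)
print(X_train.shape, X_test.shape)     # (114, 32) (29, 32)
```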



  Technical Note


To add Gleason grading capability to PSMA PET-CT scans for tumor habitats, the process captured in [Figure 2] was followed.
Figure 2: Method flowchart



The overall process includes the following stages:

  1. Data preprocessing
  2. Habitat ROI mask generation
  3. Multimodal feature extraction
  4. Feature selection and fusion
  5. Machine learning model to classify indolent and aggressive types.



  Data Preprocessing


Mask generation and interpolation

First, all CT ROI masks were generated from the given RTSTRUCT files with proper orientation. Next, because the CT slices are 512 × 512 while the PET slices are 192 × 192, the PET resolution was increased to 512 × 512 so that the same CT mask could be used for both modalities. We tried trilinear, nearest-neighbor, and B-spline interpolation; trilinear gave the best results.
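A sketch of this upsampling step with SciPy (an assumed implementation; the paper does not name its interpolation library):

```python
import numpy as np
from scipy.ndimage import zoom

pet = np.random.rand(335, 192, 192)    # placeholder PET volume
# Keep the slice count, scale the in-plane resolution from 192 to 512.
# order=1 is (tri)linear interpolation, which gave the best results;
# order=0 would be nearest neighbor, order=3 a cubic B-spline.
pet_up = zoom(pet, (1.0, 512 / 192, 512 / 192), order=1)
print(pet_up.shape)                    # (335, 512, 512)
```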

Auto-normalization

While CT has the standardized Hounsfield unit scale, PET imaging has no standardized value scale. PET values depend strongly on the amount of nuclear tracer injected, the scintillation material used in the detector, time-of-flight versus non-time-of-flight acquisition, and the reconstruction parameters used in the scanner; hence, close attention to detail is required in PET scans to avoid errors. To make the model independent of such variations, we propose a novel normalization method: we pick a particular normal organ in the body and normalize the full scan by the median SUV of that organ. In this work, we normalized each scan by the median SUV value of the liver. To obtain liver SUV values, the liver segmentation mask generated on the CT volume is extended to the registered PET volume, as shown in [Figure 3].
Figure 3: PET SUV normalization using CT liver segmentation. CT: Computed tomography, PET: Positron emission tomography, SUV: Standardized uptake value

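The normalization itself reduces to a single division once the liver mask is available; a sketch (our illustration, with placeholder volumes):

```python
import numpy as np

pet_suv = np.random.rand(335, 512, 512) * 10      # placeholder PET volume in SUV units
liver_mask = np.zeros(pet_suv.shape, dtype=bool)  # liver mask from the CT segmentation,
liver_mask[100:150, 200:300, 200:300] = True      # propagated to the registered PET

liver_median = np.median(pet_suv[liver_mask])     # reference uptake of normal liver
pet_norm = pet_suv / liver_median                 # scan now in liver-relative units
```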


For liver segmentation, we used an in-house algorithm based on a deep CNN. Visual Geometry Group-16 (VGG-16), shown in [Figure 4], is used as the base network with the final FC layers removed, so that the network consists of convolutional, ReLU activation, and pooling layers. The base network is pretrained on ImageNet. As the network goes deeper, the information becomes coarser and the learned features more semantic; the shallower feature maps, which work at higher resolution, capture more local information. To exploit features learned at different resolutions, the network uses several supervised side outputs. A side output is a set of convolutional layers connected at the end of a specific convolutional stage of the base network; each side output specializes in different types of features depending on the resolution at its connection point. The feature maps produced by the side outputs are resized and linearly combined to generate the segmented liver mask.
Figure 4: VGG-16 architecture. VGG-16: Visual Geometry Group-16



Faster and more robust training is achieved by initializing with ImageNet-pretrained VGG-16 weights. Since the VGG-16 architecture takes a 3-channel image as input, each slice, together with its previous and next slices, is fed to the network, which is trained with binary cross-entropy. As part of data preprocessing, CT volume intensities are clipped to the 150–250 range (liver and lesion soft tissues lie in this range) and mapped to 0–255.
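A sketch of this preprocessing (our illustration of the clipping and slice-stacking described above):

```python
import numpy as np

def preprocess_ct(ct: np.ndarray, lo: float = 150.0, hi: float = 250.0) -> np.ndarray:
    """Clip CT intensities to [lo, hi] and rescale to [0, 255]."""
    return (np.clip(ct, lo, hi) - lo) / (hi - lo) * 255.0

ct = np.random.randint(-1000, 1500, size=(335, 512, 512)).astype(np.float32)
ct_mapped = preprocess_ct(ct)

# Each slice is stacked with its neighbors to form the 3-channel VGG-16 input.
i = 100
vgg_input = np.stack([ct_mapped[i - 1], ct_mapped[i], ct_mapped[i + 1]], axis=-1)
print(vgg_input.shape)  # (512, 512, 3)
```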

The in-house liver segmentation achieves a Dice score of 0.96 on the liver CT volumes acquired with the PET-CT scanner. The mask generated for one patient is shown in [Figure 5].
Figure 5: Liver segmentation result



Clipping and bounding box extraction

The input CT and normalized PET are clipped to remove negative values, and a bounding box is then extracted from the clipped image of each modality using the given ROIs. These bounding boxes are used for feature extraction.
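A sketch of the clipping and bounding-box cropping (hypothetical helper; the study's own code is not shown in the paper):

```python
import numpy as np

def roi_bounding_box(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop `volume` to the tight bounding box of the nonzero voxels in `mask`."""
    coords = np.argwhere(mask > 0)
    z0, y0, x0 = coords.min(axis=0)
    z1, y1, x1 = coords.max(axis=0) + 1
    return volume[z0:z1, y0:y1, x0:x1]

vol = np.clip(np.random.randn(335, 512, 512), 0, None)  # negatives removed
mask = np.zeros_like(vol)
mask[150:170, 240:280, 240:280] = 1                     # placeholder ROI
patch = roi_bounding_box(vol, mask)                     # -> (20, 40, 40) crop
```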


  Habitat Region of Interest Mask Generation


The habitat ROI mask can be generated either by a doctor or automatically by a tool. Currently, tumor habitats marked by doctors are used for the experiments; in future work, deep neural network segmentation architectures such as UNet, or dynamic thresholding of the PET scan, could be used instead.


  Multi-modal Feature Extraction


After preprocessing, radiomic features are extracted from the ROI bounding boxes of both modalities. Radiomic feature analysis extracts quantitative image features to characterize disease patterns.[3] The features extracted from each ROI include size, shape, location, and texture features. Texture features comprise first-order statistical features, features based on the gray-level co-occurrence matrix, and features from gray-level run-length coding. The first-order features summarize the statistical content of the image, mainly maximum intensity, minimum intensity, average, standard deviation, variance, energy, entropy, sharpness, skewness, and kurtosis, computed from the frequency distribution of gray levels in the given image.[4]
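One way to extract such features is with the open-source PyRadiomics package; the sketch below is an assumed implementation choice, since the paper does not name its feature library (the file names are hypothetical):

```python
import SimpleITK as sitk
from radiomics import featureextractor

# Enable first-order, shape, GLCM, and GLRLM (run-length) feature classes.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
for cls in ("firstorder", "shape", "glcm", "glrlm"):
    extractor.enableFeatureClassByName(cls)

image = sitk.ReadImage("pet_normalized.nii.gz")     # hypothetical file names
mask = sitk.ReadImage("habitat_roi_mask.nii.gz")
result = extractor.execute(image, mask)
features = {k: v for k, v in result.items() if not k.startswith("diagnostics")}
```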


  Feature Fusion, Standardization, and Selection


To understand the value of using both modalities together, three independent experiments were performed: first using features from CT only, second using features from PET only, and third using concatenated features from both modalities. The concatenated features gave the best results; details are captured in [Table 4]. All features are Gaussian standardized to keep them on a common scale, so that features with larger magnitudes do not receive spurious importance.
Table 4: Selected features from computed tomography and positron emission tomography input data

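A sketch of the feature-level fusion and Gaussian standardization (our illustration; the placeholder matrices stand in for the per-modality radiomic features):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

ct_feats = np.random.rand(143, 20)     # placeholder CT radiomic features
pet_feats = np.random.rand(143, 12)    # placeholder PET radiomic features

fused = np.concatenate([ct_feats, pet_feats], axis=1)  # feature-level fusion
fused_std = StandardScaler().fit_transform(fused)      # zero mean, unit variance
```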


After concatenation and standardization, not all radiomic features are used for model training. To reduce overfitting, complexity, and training time, and to obtain a stable and generalized model, feature selection is carried out before training. We used the sequential feature selection (SFS) method; its workflow is shown in [Figure 6].
Figure 6: Sequential feature selection workflow[3]



Among SFS variants, we used sequential forward selection. The method starts with an empty feature set and adds features one at a time; in each iteration, the feature that yields the largest improvement in combination with the already selected features is added. Pseudocode for the method is shown in [Figure 7], and the model's performance during feature selection is shown in [Figure 8].
Figure 7: Sequential forward selection pseudo code[9]

Figure 8: Model accuracy in sequential forward selection method for 10 best features

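scikit-learn provides an equivalent sequential forward selector; the sketch below is an assumed reimplementation of this step, not the authors' code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

X = np.random.rand(143, 32)            # standardized fused features (placeholder)
y = np.array([0] * 70 + [1] * 73)      # 0 = indolent, 1 = aggressive

# Start from an empty set and greedily add the feature that most improves
# 5-fold cross-validated accuracy, stopping at 10 features.
sfs = SequentialFeatureSelector(
    RandomForestClassifier(random_state=42),
    n_features_to_select=10, direction="forward", cv=5,
)
X_selected = sfs.fit_transform(X, y)   # (143, 10)
```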


Using the SFS method, we selected the 10 most important features: six from CT and four from PET data. [Table 4] lists the selected features from both modalities.


  Machine Learning Model Selection and Hyperparameter Tuning


Several machine learning models were tried for classifying indolent versus aggressive prostate cancer, including logistic regression, support vector machine, and ensemble methods such as random forest, XGBoost, and balanced random forest, all with 5-fold cross-validation. Random forest was selected for its interpretability and best performance on the cross-validation set. The grid search method was used to tune the hyperparameters of the random forest model; the tuned parameters are shown in [Table 5].
Table 5: Tuned hyperparameters for random forest model

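A sketch of the grid search over random forest hyperparameters with 5-fold cross-validation (our illustration; the parameter grid shown is hypothetical, and the actual tuned values are in [Table 5]):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = np.random.rand(143, 10)            # the 10 selected features (placeholder)
y = np.array([0] * 70 + [1] * 73)      # 0 = indolent, 1 = aggressive

param_grid = {                         # hypothetical search space
    "n_estimators": [100, 200, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```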



  Conclusion


We proposed a method to minimize the probability of a false-negative biopsy result by labeling the habitats within the gross tumor before the biopsy. The proposed solution aids in identifying and selecting aggressive tumor tissue during the biopsy.

We developed a machine learning model using handcrafted multi-modal radiomic features from normalized PSMA PET-CT to classify tissue in a given tumor ROI as aggressive (Gleason score ≥7) or indolent (Gleason score ≤6). We achieved an overall accuracy of 89% in 5-fold cross-validation and 90% on the held-out test set. Furthermore, we found that normalized, fused PSMA PET-CT features perform better than either modality alone. We also proposed a novel normalization method for PET data to counter SUV uptake variability arising from multiple vendors, different scanner types, different reconstruction parameters, and physiological factors. As the current dataset is small, the model will need to be validated on much larger datasets to further evaluate its clinical utility.[9]

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Yoo S, Gujrathi I, Haider MA, Khalvati F. Prostate cancer detection using deep convolutional neural networks. Sci Rep 2019;9:19518.
2. Adams MC, Turkington TG, Wilson JM, Wong TZ. A systematic review of the factors affecting accuracy of SUV measurements. AJR Am J Roentgenol 2010;195:310-20.
3. Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RG, Granton P, et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur J Cancer 2012;48:441-6.
4. Yao S, Jiang H, Song B. Radiomics in prostate cancer: Basic concepts and current state-of-the-art. Chin J Acad Radiol 2020;2:47-55.
5. Uzair S, Sheikh Abdullah S, Omar K, Adam A, Syazarina S. Prostate cancer classification technique on pelvis CT images. Int J Eng Tech 2019;8:206-13.
6. Zhong X, Cao R, Shakeri S, Scalzo F, Lee Y, Enzmann DR, et al. Deep transfer learning-based prostate cancer classification using 3 Tesla multi-parametric MRI. Abdom Radiol (NY) 2019;44:2030-9.
7. Yuan Y, Qin W, Buyyounouski M, Ibragimov B, Hancock S, Han B, et al. Prostate cancer classification with multiparametric MRI transfer learning model. Med Phys 2019;46:756-65.
8. Berger I, Annabattula C, Lewis J, Shetty DV, Kam J, Maclean F, et al. 68Ga-PSMA PET/CT versus mpMRI for locoregional prostate cancer staging: Correlation with final histopathology. Prostate Cancer Prostatic Dis 2018;21:204-11.
9. Ashley S, Mendoza-Schrock O, Scott K, Matthew D, Arnab S. An end-to-end vehicle classification pipeline using vibrometry data. Proc SPIE 2014;9079:90790O. doi:10.1117/12.2053087.

