PURPOSE: We present a deep learning system that generates "virtual-PET" images from CT images. The system is restricted to the liver region and aims to highlight FDG-avid liver metastases.
MATERIAL AND METHODS: The system combines two deep learning algorithms, fully convolutional networks (FCNs) and conditional generative adversarial networks (cGANs), in three stages: training the FCN and cGAN on PET/CT studies, testing the networks on a separate set of PET/CT studies, and blending the networks' outputs into the final "virtual-PET" images. FCNs produce outputs of the same size as their image inputs; in this study, the Hounsfield-unit value of each CT voxel was used to predict the standardized uptake value (SUV) of the corresponding virtual-PET voxel. cGANs learn image-to-image translation (e.g., generating photographs from sketches); in this study, "virtual-PET" images were generated from CT images after PET/CT training. A radiologist compared the generated "virtual-PET" images with the original PET images. Two measurements were computed: (1) true positive rate (TPR, number of correctly detected metastases / total number of metastases) and (2) false positive rate (FPR, number of false positives per scan).
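The abstract does not specify how the two networks' outputs are blended; the following is a minimal sketch assuming a simple voxelwise convex combination of the two predicted SUV maps. The weight alpha and the array names fcn_suv and cgan_suv are hypothetical, introduced only for illustration.

    import numpy as np

    def blend_virtual_pet(fcn_suv: np.ndarray,
                          cgan_suv: np.ndarray,
                          alpha: float = 0.5) -> np.ndarray:
        # Voxelwise convex combination of the FCN and cGAN SUV predictions.
        # alpha is a hypothetical blending weight; the abstract does not
        # state the actual combination scheme used in the study.
        return alpha * fcn_suv + (1.0 - alpha) * cgan_suv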
RESULTS: Our dataset included 25 PET/CT studies: 17 were used for training and 8 (containing 26 metastases) for testing. The cGAN produced more realistic-looking "virtual-PET" images, whereas the FCN responded better to metastases. The blended images performed best, with a TPR of 92.3% (24/26 metastases detected) and an FPR of 0.25 (2 false positives across 8 studies).
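As an illustrative check, the metric definitions given in the methods reproduce the reported figures from the raw counts (the helper function detection_rates is hypothetical; the counts are taken from the results above):

    def detection_rates(detected: int, total_mets: int,
                        false_pos: int, n_scans: int) -> tuple[float, float]:
        # TPR: correctly detected metastases / total number of metastases.
        # FPR: number of false positives per scan.
        return detected / total_mets, false_pos / n_scans

    tpr, fpr = detection_rates(24, 26, 2, 8)
    print(f"TPR = {tpr:.1%}, FPR = {fpr}")  # TPR = 92.3%, FPR = 0.25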
CONCLUSION: These preliminary data suggest that deep learning algorithms can create "virtual-PET" images from CT. If validated in larger patient cohorts, this technique may enhance CT-only studies and improve radiologists' performance.