Pancreatic cancer ranks fourteenth in global incidence among the 36 most common malignant neoplasms, but it climbs to seventh in mortality, according to the International Agency for Research on Cancer and GLOBOCAN 2020 data. In the United States alone, 60,430 new cases and 48,220 deaths were reported in 2021. Projections suggest pancreatic cancer could become the second leading cause of cancer-related death in the US by 2030. The disease is characterized by elusive early symptoms, high invasiveness, and a strong tendency toward postoperative metastasis and relapse.
Current diagnostic limitations: The standard diagnostic workup relies on imaging modalities such as contrast-enhanced CT, MRI, and PET-CT, supplemented by endoscopic ultrasonography (EUS) and EUS-guided fine-needle biopsy for staging. While these tools are important, their diagnostic precision is influenced by technical variability and operator dependence. Radical surgery remains the primary curative approach, yet most patients are diagnosed at a stage when curative resection is no longer feasible.
AI as a transformative tool: Machine learning and deep learning, the two dominant AI paradigms, have been applied across the pancreatic cancer care continuum, from early screening and diagnosis to treatment planning and prognostic prediction. Deep learning, a branch of machine learning built on multi-layer artificial neural networks, has shown particular strength in image processing through convolutional neural networks (CNNs). These models learn characteristic features directly from medical images and model the statistical relationships between those features and clinical outcomes.
This review from Liaoning Cancer Hospital and China Medical University surveys the current state of AI technology in pancreatic cancer, covering early screening, imaging-based diagnosis, surgical applications (including 3D visualization and augmented reality), postoperative complication prediction, and prognostic modeling. The authors aim to provide both a status report and a forward-looking perspective on future AI applications in this field.
IPMN malignancy prediction: Intraductal papillary mucinous neoplasm (IPMN) is a recognized precursor to pancreatic cancer. A deep learning study analyzed 3,970 endoscopic ultrasound (EUS) images from histopathologically confirmed IPMN patients. The model produced an "AI value" (a continuous variable from 0 to 1) and an "AI malignancy probability" (the mean AI value per patient). Malignant IPMNs showed significantly higher average AI values than benign ones (0.808 vs. 0.104, P < 0.01). The model achieved an AUC of 0.98, with sensitivity of 95.7%, specificity of 92.6%, and accuracy of 94.0%, far exceeding physicians' diagnostic accuracy of 56.0%.
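The two quantities reported above can be sketched concretely: the per-patient "AI malignancy probability" is just the mean of the per-image AI values, and AUC can be computed from the Mann-Whitney rank statistic. The values below are invented for illustration, not the study's data.

```python
import numpy as np

def patient_probability(image_ai_values):
    """'AI malignancy probability': mean of per-image AI values (0 to 1)."""
    return float(np.mean(image_ai_values))

def roc_auc(pos_scores, neg_scores):
    """AUC = P(random positive scores above random negative),
    computed directly from the Mann-Whitney U statistic."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical per-patient probabilities (not the study's data):
malignant = [0.92, 0.81, 0.75, 0.88]
benign = [0.10, 0.22, 0.05, 0.31]
print(roc_auc(malignant, benign))  # 1.0 here: the scores separate perfectly
```

An AUC of 0.98, as reported, means a randomly chosen malignant case would outrank a randomly chosen benign case 98% of the time.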
Electronic health records and diabetes cohorts: Longitudinal electronic health records have been leveraged to build predictive models targeting high-risk subgroups. One study developed a prognostic model from newly diagnosed diabetes patients using age at diabetes onset, body mass index, and glycemic fluctuations as key inputs. A score of 3 or above yielded 80% sensitivity and specificity for pancreatic cancer (AUC = 0.87), and patients above this cutoff had a 4.4-fold higher pancreatic cancer incidence than other diabetic patients. A separate study in Taiwan used logistic regression and artificial neural networks to predict pancreatic cancer risk in type 2 diabetes patients, incorporating age, antidiabetic drug use, and comorbidities. The logistic regression model achieved an AUC of 0.727.
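A point-based risk score of the kind described above can be sketched as follows. The criteria and cutoffs below are invented for illustration; the study's actual scoring rules are not reproduced here.

```python
# Sketch of a point-based clinical risk score. All thresholds are
# hypothetical, chosen only to illustrate the structure of such a score.

def diabetes_risk_score(age_at_onset, bmi_drop, glucose_rise_mmol):
    """Sum one illustrative point per risk criterion."""
    score = 0
    if age_at_onset >= 60:          # older age at new-onset diabetes
        score += 1
    if bmi_drop >= 2.0:             # weight/BMI loss despite new diabetes
        score += 1
    if glucose_rise_mmol >= 2.0:    # marked glycemic deterioration
        score += 1
    return score

def high_risk(score, threshold=3):
    """Patients at or above the cutoff are flagged for further workup."""
    return score >= threshold

print(high_risk(diabetes_risk_score(65, 3.1, 2.5)))  # True: all criteria met
```

The appeal of such scores is that they run on routinely collected data, making population-level screening of diabetic cohorts practical.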
Urinary biomarker-based risk scoring: Blyuss et al. developed PancRisk, a novel risk scoring system based on three urinary biomarkers (LYVE1, REG1B, TFF1). The team measured these markers in 199 pancreatic ductal adenocarcinoma patients and 180 healthy individuals, then applied machine learning algorithms to build and compare models. The logistic regression model achieved an AUC of 0.94. When combined with the established tumor marker CA19-9, the model reached 96% sensitivity and specificity.
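A minimal sketch (not the authors' pipeline) of combining a biomarker panel with CA19-9 in a logistic regression classifier might look like this; all feature values are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100
# Columns stand in for LYVE1, REG1B, TFF1, CA19-9 (synthetic values;
# cases are shifted upward to mimic elevated markers in disease).
controls = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
cases = rng.normal(loc=2.0, scale=1.0, size=(n, 4))
X = np.vstack([controls, cases])
y = np.array([0] * n + [1] * n)

# Logistic regression learns one weight per marker and outputs a
# per-subject risk probability.
model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]
print(f"training accuracy: {model.score(X, y):.2f}")
```

In practice the reported 0.94 AUC would come from held-out or cross-validated data, not training accuracy as in this toy sketch.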
CT-based deep learning models: A team at Zhejiang University developed a deep learning model trained on contrast-enhanced CT images from 319 patients. The model could provide pancreatic tumor diagnosis directly from original abdominal CT images without preprocessing. It achieved an AUC of 0.871 and an F1 score of 88.5%, with an average diagnostic accuracy of 82.7% across all tumor types. Notably, the model reached 100% accuracy for IPMN detection and 87.6% for pancreatic ductal adenocarcinoma.
CNN for plain CT scans: A Chinese study developed a convolutional neural network trained on 7,245 CT images from 222 pathologically confirmed pancreatic cancer patients and 190 normal controls. The model was evaluated on two classification tasks: binary (cancer vs. no cancer) and three-category (no cancer, body/tail tumor, head/neck tumor). The binary model achieved 95.47% accuracy on plain scan images with 91.58% sensitivity and 98.27% specificity. The three-category model was especially effective at diagnosing head/neck tumors using arterial phase images.
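The core operation these CNN studies rely on can be sketched in a few lines: a learned filter slides over a CT slice to produce a feature map, and a ReLU keeps only positive responses. This toy numpy version (not any study's model) shows a single filter detecting an intensity edge.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNNs)."""
    h, w = kernel.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy 6x6 "CT patch" with a bright right half, and an edge-detecting filter
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0
edge_filter = np.array([[-1.0, 1.0]])  # responds to left-to-right increases

feature_map = relu(conv2d(patch, edge_filter))
print(feature_map.shape)  # (6, 5): valid convolution shrinks each dimension
```

Real diagnostic CNNs stack dozens of such layers with learned filters, but each layer performs exactly this slide-multiply-sum operation.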
EUS-based AI for differential diagnosis: Marya et al. developed a CNN model based on EUS images to distinguish autoimmune pancreatitis (AIP) from pancreatic ductal adenocarcinoma, chronic pancreatitis, and normal pancreas. The model achieved high performance across all comparisons: AIP vs. adenocarcinoma (90% sensitivity, 93% specificity), AIP vs. normal pancreas (99% sensitivity, 98% specificity), and AIP vs. chronic pancreatitis (94% sensitivity, 71% specificity). A separate study introduced age-stratified grouping (under 40, 40 to 60, over 60 years) into the AI model, achieving classification accuracy, sensitivity, and specificity ranging from 88.5% to 94.1%, outperforming the ungrouped model.
Pathology and tumor markers: Unsupervised learning methods have also shown efficacy in identifying specific tumor markers linked to pancreatic cancer, offering a novel computational approach to screening clinically relevant biomarkers. Meanwhile, developing models that can precisely identify and autonomously segment pancreatic tumors remains a critical frontier in advancing diagnostic capabilities.
Evaluating neoadjuvant therapy with AI: Radical resection remains the cornerstone of curative treatment for pancreatic cancer. Neoadjuvant therapy has emerged as a promising approach to expand the pool of patients eligible for surgery. A study from the Netherlands used digital processing of HE-stained sections from 64 pancreatic cancer patients who received neoadjuvant chemotherapy, delineating three tissue categories (tumor, normal duct, and residual epithelium) to train a tumor segmentation model. The model achieved an average F1 score of 0.86. A separate US study used machine learning to compare enhanced CT images before and after neoadjuvant therapy, identifying treatment-related image features and building a prediction model with an AUC of 0.94.
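For a segmentation task like the one above, the F1 score is equivalent to the Dice coefficient between the predicted and ground-truth masks. A minimal numpy version, with toy masks rather than any study's data:

```python
import numpy as np

def dice_f1(pred, truth):
    """Dice coefficient = F1 score for binary masks:
    2*TP / (2*TP + FP + FN)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3x3 masks: the model found 2 of the 3 true tumor pixels
truth = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
pred  = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
print(dice_f1(pred, truth))  # 0.8 = 2*2 / (2 + 3)
```

An average F1 of 0.86, as reported, therefore means predicted tumor regions overlapped substantially, though not perfectly, with the pathologist's delineations.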
Tumor tracking and radiotherapy planning: Researchers have leveraged deep neural networks to precisely locate and track pancreatic tumors without internal markers. Automated segmentation of organ-at-risk contours has been developed to guide radiotherapy planning. Deep learning techniques have also been applied to stereotactic radiotherapy for pancreatic cancer, enabling accurate predictions of radiation dose distribution.
3D reconstruction and visualization: AI-driven three-dimensional reconstruction has demonstrated high accuracy, sensitivity, and specificity in assessing pancreatic cancer. In one study, the authors used 3D visualization technology to observe the location, size, and anatomical relationships of pancreatic head tumors prior to pancreaticoduodenectomy, effectively optimizing surgical plans and reducing surgical time, intraoperative blood loss, and postoperative recovery time. A Japanese group used 3D reconstruction preoperatively to determine the size and position of the main pancreatic duct and select the optimal anastomosis technique.
Augmented reality-assisted resection: Augmented reality (AR) navigation merges three-dimensional virtual images with real-time intraoperative conditions, providing surgeons with enhanced spatial awareness. Okamoto et al. evaluated five patients who underwent AR-assisted pancreatic resection surgery and found strong agreement between the positions of various organs in the surface-rendered overlay images and their actual intraoperative positions. This early feasibility study demonstrated the viability of overlaying preoperative 3D models onto the surgical field.
Laparoscopic applications: Volonte et al. applied AR navigation technology to laparoscopic distal pancreatectomy, projecting nodules in the tail of the pancreas onto the patient's body. This projection enhanced the surgeon's anatomical understanding and improved localization during the minimally invasive procedure. The approach represents a step toward integrating real-time digital overlays with standard laparoscopic techniques.
Smartphone-based AR: In a notable innovation, Tang et al. used augmented reality software on smartphones to overlay reconstructed 3D images onto the surgical area displayed on the phone screen. This provided intermittent navigation assistance during pancreaticoduodenectomy, helping identify the boundaries of pancreatic head cancer invasion and facilitating removal of relevant blood vessels. All surgical patients in the study achieved R0 resection (complete removal with negative margins), underscoring the clinical value of even simple, accessible AR tools in complex operations.
Pancreatic fistula risk prediction: Postoperative pancreatic fistula is the most common and potentially life-threatening complication of pancreatic cancer surgery, capable of triggering abdominal infection and bleeding. The widely used fistula risk score considers only four factors, a significant limitation. A Korean team built an AI-driven prediction platform using random forest and neural network algorithms, analyzing 38 variables from 1,769 patients who underwent pancreaticoduodenectomy between 2007 and 2016. By combining neural networks with recursive feature elimination, the platform achieved a maximum AUC of 0.74 and identified 16 risk factors, including pancreatic duct diameter, body mass index, preoperative serum albumin, lipase level, and age.
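Recursive feature elimination, as used on the 38-variable platform above, repeatedly fits a model, ranks features by weight, and discards the weakest until the desired count remains. This sketch uses scikit-learn's RFE on synthetic data (not the study's variables); the outcome depends only on the first three of ten features.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n, n_features = 200, 10
X = rng.normal(size=(n, n_features))
# Outcome depends only on features 0, 1, and 2; the rest are noise.
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

# RFE drops one weakest feature per round until three survive.
selector = RFE(LogisticRegression(), n_features_to_select=3).fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of the retained features
```

The same loop, run with a neural network or random forest as the inner estimator, is what reduces 38 candidate variables to the 16 informative risk factors reported.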
MRI radiomics for fistula prediction: Skawran et al. applied a gradient boosting tree model based on MRI radiomics to predict postoperative pancreatic fistula after pancreaticoduodenectomy and achieved an AUC of 0.90. This represents a substantial improvement over the neural network-based approach, likely owing to the rich quantitative features extractable from MRI data.
ICU admission and bleeding prediction: Zhang et al. developed a support vector machine model for predicting postoperative ICU admission, achieving an AUC of 0.8. A separate study used AI to analyze clinical features of pancreatic ductal adenocarcinoma patients and identified bilirubin, CA19-9, and preoperative albumin as factors associated with postoperative bleeding. These models collectively point toward a future where individualized risk stratification informs surgical decision-making and resource allocation.
Multicenter recurrence prediction: Lee et al. used multicenter registry data to evaluate postoperative recurrence probability and identify major prognostic factors in pancreatic cancer. They employed random forest and Cox proportional hazards models on a cohort of 4,846 patients. Tumor size, tumor grade, TNM stage, T stage, and lymphovascular invasion emerged as the key prognostic factors for disease-free survival based on variable importance rankings. The Cox model outperformed the random forest model, achieving a higher mean C-index (0.7738 vs. 0.6805), indicating superior predictive ability for this task.
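The C-index used to compare these models is Harrell's concordance index: among comparable patient pairs, the fraction in which the patient with the higher predicted risk actually fails earlier. A self-contained sketch on a toy cohort (not the registry data):

```python
def c_index(times, events, risks):
    """Harrell's concordance index.
    times: survival/censoring times; events: 1 if failure observed;
    risks: model risk scores (higher = worse predicted outcome)."""
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if patient i is observed to
            # fail before patient j's time.
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / permissible

# Toy cohort: higher risk scores line up with earlier failures
times  = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risks  = [0.9, 0.7, 0.4, 0.2]
print(c_index(times, events, risks))  # 1.0: ranking is perfectly concordant
```

A C-index of 0.5 corresponds to random ranking, so the Cox model's 0.7738 versus the random forest's 0.6805 is a meaningful gap in discriminative ability.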
Neural networks for unresectable cancer survival: Tong et al. studied 221 patients with unresectable pancreatic cancer and collected 32 clinical parameters. They developed three artificial neural network models using different feature subsets (3, 7, and 32 features) to predict 8-month survival. All three neural network models outperformed the corresponding logistic regression models in AUC values: 0.811 vs. 0.680, 0.844 vs. 0.722, and 0.921 vs. 0.849 (all P < 0.05). The full 32-feature model reached the highest AUC of 0.921, demonstrating that artificial neural networks can effectively leverage complex, multi-dimensional clinical data.
These findings collectively suggest that AI-based prognostic tools, whether using traditional Cox models for recurrence or neural networks for survival in unresectable disease, can provide clinicians with more granular and accurate risk assessments than conventional staging systems alone. The diversity of approaches also indicates flexibility in model selection based on the clinical question at hand.
Interpretability: Deep learning models used in pancreatic cancer screening, diagnosis, surgery, and prognosis frequently lack transparency. Clinicians cannot easily understand how a model arrives at a particular prediction, which breeds skepticism and limits adoption. The authors emphasize that dedicated research into interpretability and explainable AI is essential to create a more transparent decision-making process in clinical settings.
Generalization across datasets: Models developed on single-center datasets often show degraded performance when applied at other institutions. This is a well-documented issue across medical AI, but it is particularly acute for pancreatic cancer because of variations in imaging protocols, patient demographics, and clinical workflows. Ensuring consistent performance across diverse clinical environments requires multi-center validation studies and standardized data pipelines.
Small sample sizes: Pancreatic cancer, while deadly, is relatively rare compared to other malignancies. This limited sample size creates obstacles for effective model training and validation, often producing erratic performance. The authors suggest exploring cross-center collaboration and synthetic sample generation (data augmentation) as strategies to expand training datasets and improve model reliability.
Ethical and regulatory considerations: The use of AI in diagnosis and treatment raises concerns about patient confidentiality and data integrity. Establishing ethical benchmarks, standards for data dissemination, and mechanisms to protect patient rights and privacy is essential. Future work should also focus on integrating multi-omics data analysis (genomics, proteomics, metabolomics) to develop personalized treatment regimens tailored to individual patients. Collaborative efforts between clinicians and AI researchers will be critical in translating these technologies from research into routine clinical practice.