Journal of the American Society of Cytopathology (2024) 13, 97-110

Digital cytology part 2: artificial intelligence in cytology: a concept paper with review and recommendations from the American Society of Cytopathology Digital Cytology Task Force

David Kim, MDa,1, Kaitlin E. Sundling, MD, PhDb,1, Renu Virk, MDc,1, Michael J. Thrall, MDd,1, Susan Alperstein, MS, CT (ASCP)e, Marilyn M. Bui, MD, PhDf, Heather Chen-Yost, MDg, Amber D. Donnelly, PhD, MPH, SCT (ASCP)h, Oscar Lin, MD, PhDa, Xiaoying Liu, MDi, Emilio Madrigal, DOj, Pamela Michelow, MBBch, MIACk,l, Fernando C. Schmitt, MD, PhD, FIACm, Philippe R. Vielh, MD, PhD, FIACn, Maureen F. Zakowski, MDo, Anil V. Parwani, MD, PhDp, Elizabeth Jenkins, MSq, Momin T. Siddiqui, MD, FIACe, Liron Pantanowitz, MD, PhD, MHAr,**, Zaibo Li, MD, PhDp,*

aDepartment of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
bThe Wisconsin State Laboratory of Hygiene and Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison, Madison, Wisconsin
cDepartment of Pathology and Cell Biology, Columbia University, New York, New York
dDepartment of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, Texas
eDepartment of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
fThe Department of Pathology, Moffitt Cancer Center & Research Institute, Tampa, Florida

*Corresponding author: Zaibo Li, MD, PhD; Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, OH; Tel.: (614) 366-4859; Fax: (614) 293-4715.
**Liron Pantanowitz, MD, PhD, MHA; Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA; Tel.: (413) 237-4397; Fax: (724) 786-7696.
E-mail addresses: pantanowitzl2@upmc.edu (L. Pantanowitz), Zaibo.Li@osumc.edu (Z. Li).
1 These authors contributed equally.
2213-2945/$36 © 2023 American Society of Cytopathology. Published by Elsevier Inc. All rights reserved. https://doi.org/10.1016/j.jasc.2023.11.005
gDepartment of Pathology, Michigan Medicine, Ann Arbor, Michigan
hDiagnostic Cytology Education, University of Nebraska Medical Center, College of Allied Health Professions, Omaha, Nebraska
iDepartment of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
jDepartment of Pathology, Massachusetts General Hospital, Boston, Massachusetts
kDivision of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
lDepartment of Pathology, National Health Laboratory Services, Johannesburg, South Africa
mDepartment of Pathology, Medical Faculty of Porto University, Porto, Portugal
nDepartment of Pathology, Medipath and American Hospital of Paris, Paris, France
oDepartment of Pathology, Mount Sinai Medical Center, New York, New York
pDepartment of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
qAmerican Society of Cytopathology, Wilmington, Delaware
rDepartment of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania

Received 6 November 2023; received in revised form 28 November 2023; accepted 29 November 2023

KEYWORDS Cytopathology; Digital cytology; Artificial intelligence; Whole slide image; Recommendation; Validation

Digital cytology and artificial intelligence (AI) are gaining greater adoption in the cytology laboratory. However, peer-reviewed real-world data and literature are lacking in regard to the current clinical landscape. The American Society of Cytopathology, in conjunction with the International Academy of Cytology and the Digital Pathology Association, established a special task force comprising 20 members with expertise and/or interest in digital cytology. The aim of the group was to investigate the feasibility of incorporating digital cytology, specifically cytology whole slide scanning and AI applications, into the workflow of the laboratory. In turn, the impact on cytopathologists, cytologists (cytotechnologists), and cytology departments was also assessed. The task force reviewed existing literature on digital cytology, conducted a worldwide survey, and held a virtual roundtable discussion on digital cytology and AI with multiple industry corporate representatives. This white paper, presented in 2 parts, summarizes the current state of digital cytology and AI in global cytology practice. Part 1 of the white paper is presented as a separate paper which details a review and best practice recommendations for incorporating digital cytology into practice. Part 2 of the white paper presented here provides a comprehensive review of AI in cytology practice along with best practice recommendations and legal considerations. Additionally, the cytology global survey results highlighting current AI practices by various laboratories, as well as current attitudes, are reported.
© 2023 American Society of Cytopathology. Published by Elsevier Inc. All rights reserved.

Contents
Introduction 99
Artificial intelligence applications in GYN cytology 99
  Artificial intelligence systems using glass slides 99
  Artificial intelligence systems using digital whole slide images 100
    CytoProcessor 100
    BestCyte 102
    Hologic Genius 102
    Landing Med 103
    KFBIO 103
    CytoSiA 103
    Techcyte 103
AI in non-GYN cytology 104
  AI for urine cytology 104
  AI for effusion cytology 104
  AI for fine needle aspiration (FNA) cytology 104
  AI for rapid on-site evaluation (ROSE) 105
Survey of current practice in cytology AI 105
Evaluating and validating AI for cytology 106
Reporting recommendation and legal considerations 106
Conclusion and future direction 107
Funding sources 107
Conflict of interest disclosures 107
CRediT authorship contribution statement 107
References 108

Introduction

A major application of digital cytology that has received interest in recent years has been the use of artificial intelligence (AI) algorithms in clinical practice. AI is a broad term that encompasses any machine or deep learning algorithm that emulates human intelligence and decision-making. Multiple different AI algorithms have been developed in medicine that assist in a wide variety of narrow tasks, from workflow optimization to diagnostics and prognostication. Cytology is no stranger to AI integration for clinical practice, as some of the first AI algorithms were developed to assist with cervical cytology screening in the late 1990s. The Hologic ThinPrep Imager and FocalPoint GS Imaging System, which have been in use since the late 1990s and early 2000s, are widely used by many cytology departments for cervical screening. Both systems represent some of the first examples of AI application in pathology. There has been even greater interest recently due to the emergence of deep learning algorithms that can extract morphologic features from whole slide images (WSI). As a result, there have been many studies detailing AI algorithms applied to cytology demonstrating successful cytomorphological analysis for diagnostic use and research discovery. To date, most cytology AI papers have focused on Pap tests (GYN cytology). However, there are also AI-related studies investigating the application of this technology for urine, thyroid, pancreatobiliary, lung, breast, and pleural effusion specimens. In this part, we aimed to comprehensively review AI applications and current AI practice in cytology, provide recommendations for evaluating cytology AI applications and reporting AI-aided cytology results, and discuss legal considerations.

Artificial intelligence applications in GYN cytology

Artificial intelligence systems using glass slides

AI has been applied in GYN cytology practice for a relatively long time. PAPNET was the first commercially available AI-based screening system, which was approved by the United States Food and Drug Administration (FDA) in 1995. PAPNET was designed to identify abnormal cells in Pap tests that were already screened and interpreted as negative by a cytologist. The intent was to reduce the false-negative rate as an added quality control measure. PAPNET selected 128 potentially abnormal cells or clusters for review by a cytologist using a monitor. If any of these findings were concerning to the cytologist, a full manual review was subsequently necessary. Several studies have demonstrated that PAPNET outperformed rapid manual rescreening, a method of secondary manual review in which a cytologist quickly reviewed the slide at low power, as well as full rescreening.1-3 However, additional studies revealed that PAPNET significantly increased the cost of cervical screening, while only yielding relatively small gains in sensitivity.
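PAPNET's internal algorithms were proprietary and are not detailed here; as a minimal, purely illustrative sketch of the general triage idea described above (score candidate cells or clusters with a classifier and present only the highest-ranking objects to a cytologist for review), the following Python example uses a hypothetical score_abnormality function and the 128-object gallery size noted in the text.

```python
# Illustrative sketch of a PAPNET-style triage step: score candidate cell/cluster
# images with a classifier and keep the highest-ranking objects for human review.
# `score_abnormality` is a hypothetical stand-in for any trained model; the value
# 128 mirrors the number of objects PAPNET presented to the cytologist.
from typing import Callable, List, Sequence, Tuple


def select_review_gallery(
    cell_images: Sequence[str],
    score_abnormality: Callable[[str], float],
    gallery_size: int = 128,
) -> List[Tuple[float, str]]:
    """Rank candidate cells/clusters by abnormality score, highest first."""
    scored = [(score_abnormality(img), img) for img in cell_images]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:gallery_size]


def toy_score(img: str) -> float:
    """Placeholder scoring function returning a pseudo-random value in [0, 1)."""
    return (hash(img) % 1000) / 1000.0


if __name__ == "__main__":
    fake_cells = [f"cell_{i}" for i in range(1000)]
    gallery = select_review_gallery(fake_cells, toy_score)
    print(len(gallery), "objects queued for cytologist review")
```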
AutoPap 300QC (Neopath/Tripath) was approved by the FDA in 1998. This system was acquired by BD Diagnostics in 2006 and renamed the BD FocalPoint Slide Profiler. The system can be used to identify conventional or SurePath smears as normal without further review; the remaining Pap tests are divided into 5 categories of abnormal risk. The BD FocalPoint GS Imaging System was developed from the original AutoPap/FocalPoint system by adding a robotic microscope, which enabled slides to be dotted in advance of manual screening, thereby allowing cytologists to more quickly and reliably see the cells identified as the most concerning by the automated system.4-6 The BD FocalPoint GS Imaging System was approved by the FDA in November of 2008. The BD FocalPoint Slide Profiler is a classification screening system with no manual review required for negative slides; however, it is no longer commercially available. Cytyc developed an interactive system with computer pre-screening to select abnormal cells on ThinPrep Pap test slides for further examination by cytologists. This imaging system was approved by the Food and Drug Administration (FDA) in 2003, was acquired by Hologic in 2007, and became known as the ThinPrep Imaging System. Both the BD FocalPoint GS and ThinPrep Imaging System are interactive screening systems that rely on a location-guided process combined with required manual review. The ThinPrep Imaging System and BD FocalPoint GS Imaging System are compared in Table 1.

A clinical trial comparing the ThinPrep Imaging System to manual screening alone showed a statistically significant improvement in sensitivity for ASC-US+ slides, equivalent sensitivity for LSIL+ and HSIL+ slides, and a statistically significant improvement in specificity for HSIL+.7 The adjudicated results for ASC-US+ showed a 39% reduction in false-negative slides using the ThinPrep Imaging System.

Table 1  Comparison between the FocalPoint GS and ThinPrep Imaging System.
System | FocalPoint GS | ThinPrep Imaging System
Method | N/C ratio with reference to example images | Imaging algorithm considers cellular features and nuclear darkness
Stain | Pap stain | ThinPrep stain
Samples | SurePath | ThinPrep
FOV | 10 FOVs | 22 FOVs
Image storage | 10 days | Permanent
Workload | 170 slides/8 hours | 200 slides/8 hours
FDA clearance | 2008, approved for screening | 2003, approved for screening
Abbreviations: FDA, Food and Drug Administration; FOV, field of view; N/C, nuclear/cytoplasmic.

The workload limit for the ThinPrep Imaging System has been established at 200 slides in no less than an 8-hour workday when utilizing only a 22 field of view (FOV) triage, as a slide with FOV-only review counts as 0.5 slide. However, a slide with full manual review (FMR) using the Autoscan feature counts as 1 slide, and a slide with both FOV and FMR counts as 1.5 slides. Any combination of FOV-only and FMR must be integrated so as not to exceed workload limits. Since most daily routine cases do not need FMR, cytologist screening rates are usually faster using the ThinPrep Imaging System than using the manual review method.
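The FOV and FMR weights described above translate into straightforward workload bookkeeping. The sketch below is illustrative only: the 0.5/1.0/1.5 slide-equivalent weights come from the text, whereas the 100-slide-equivalent daily cap is an assumption inferred from the stated limit of 200 FOV-only slides in no less than an 8-hour workday; laboratories should apply their own regulatory limits.

```python
# Illustrative workload bookkeeping for ThinPrep Imaging System screening, using the
# weights described above: FOV-only review = 0.5 slide, full manual review (FMR) = 1.0,
# and FOV plus FMR = 1.5. The daily cap of 100 slide equivalents is an assumption
# inferred from the stated 200 FOV-only slides in no less than an 8-hour workday.
FOV_ONLY, FMR_ONLY, FOV_PLUS_FMR = 0.5, 1.0, 1.5
DAILY_SLIDE_EQUIVALENT_CAP = 100.0  # assumed: 200 slides x 0.5 = 100 equivalents


def slide_equivalents(n_fov_only: int, n_fmr_only: int, n_fov_plus_fmr: int) -> float:
    """Total workload units for one cytologist-day."""
    return (n_fov_only * FOV_ONLY
            + n_fmr_only * FMR_ONLY
            + n_fov_plus_fmr * FOV_PLUS_FMR)


def within_limit(n_fov_only: int, n_fmr_only: int, n_fov_plus_fmr: int) -> bool:
    """Check a day's case mix against the assumed slide-equivalent cap."""
    return slide_equivalents(n_fov_only, n_fmr_only, n_fov_plus_fmr) <= DAILY_SLIDE_EQUIVALENT_CAP


# Example: 150 FOV-only cases plus 10 cases needing both FOV and full manual review.
print(slide_equivalents(150, 0, 10))  # 90.0 slide equivalents
print(within_limit(150, 0, 10))       # True
```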
FDA approval of the BD FocalPoint GS System was based on data generated in a clinical trial comprising 12,732 SurePath slides across 4 different sites in the United States, with adjudication performed at a fifth site.8 In the trial, the results of manual screening were directly compared to screening using the BD FocalPoint GS System. Data from the clinical trial showed an increase of 19.6% in sensitivity for the precise diagnosis of HSIL+ (with a decline in specificity of 2.6%), an increase of 9.8% in sensitivity for the precise diagnosis of LSIL+ (with a decline in specificity of 1.9%), and a decrease of 1.5% for the precise diagnosis of ASC-US+ (with an increase in specificity of 1.8%). Several follow-up studies have reported that this same system performs at least as well as manual screening in routine clinical use.9-11

The MAVARIC trial from England was conducted to compare both automated systems (ThinPrep Imaging System and BD FocalPoint GS Imaging System) with manual screening. The trial confirmed that automated systems could indeed increase productivity, but it was not conclusive about increased sensitivity.12,13 Trials from Australia and Scotland also confirmed increased productivity without increased sensitivity.14,15 The results from these trials support the premise that implementation of an automated cervical screening system can increase productivity but probably does not improve overall screening quality or cost-effectiveness. In summary, these computer-assisted Pap test screening systems still use glass slides and a light microscope, although FOV selections and guided localization are provided by machine learning software coupled with a motorized microscope.

Artificial intelligence systems using digital whole slide images

Digital imaging in pathology has progressed from static and live video imaging to WSI. WSI success in surgical pathology is exemplified by the FDA approval of this technology for primary diagnosis, as well as the development of guidelines for standardizing clinical validation.16 Although WSI in cytopathology has not been as pervasive as in surgical pathology,17-19 WSI-based GYN cytology screening AI algorithms have been developed, primarily to analyze liquid-based cytology (LBC) samples.20-39 The reported accuracy of these AI algorithms is variable. One study demonstrated that a model using a convolutional neural network (CNN) was able to classify Pap test images as benign or malignant with an accuracy of 98.3% and a specificity of 98.3%.40 However, when a CNN model was used to distinguish the 5 different categories of The Bethesda System for reporting cervical cytology on Pap WSIs, an overall average accuracy of only 60% was reported.41 Of note, a cloud-based WSI platform with a deep-learning classifier for p16/Ki-67 dual-stained (DS) cytology slides was recently developed. This AI-based tool had equal sensitivity and substantially higher specificity compared with both Pap (P < 0.001) and manual DS (P < 0.001), respectively. Compared with the Pap test alone, AI-based DS reduced referral to colposcopy by one-third (41.9% versus 60.1%, P < 0.001).42 Several WSI-based AI platforms for screening Pap tests are commercially available; they are summarized in Table 2, and representative images are shown in Fig. 1.

CytoProcessor. CytoProcessor™ (DATEXIM, Caen, France) was designed to incorporate multiple AI algorithms optimized for cervical cytology WSIs scanned using 3DHISTECH Panoramic scanners from any type of LBC preparation and staining protocol.20,21 However, the slides are not scanned with z-stacking or volumetric scanning. The CytoProcessor platform displays abnormal cervical cells in a gallery format for review.
One study demonstrated that the sensitivity of detection of ASCUS and LSIL+ was significantly higher using CytoProcessor™ (1.5% false-negative rate) than using the ThinPrep Imaging System (4% false-negative rate).21 The CytoProcessor™ workflow appears to be 1.6 times faster in terms of worker time compared to manual review using a microscope.20

Table 2  Commercially available WSI-based Pap test screening AI platforms.
Product | CytoProcessor | BestCyte | Hologic Genius | Landing Med | CytoSIA | Techcyte | KFBIO AI-assisted system
Vendor | DATEXIM | CellSolutions | Hologic | Landing Med | OptraSCAN | Techcyte | KFBIO
Slide preparation | ThinPrep, SurePath | BestPrep, other LBC | ThinPrep (c) | LBC | ThinPrep, SurePath | ThinPrep, SurePath | LBC
Scanner | 3DHISTECH | 3DHISTECH | Hologic Genius | LD Patho 320A/340A, LD Cyto2200 | OptraSCAN-ULTRA/OS-LITE | Nanozoomer, Grundium, Motic, 3DHISTECH | KFBIO
Magnification | 40x | 20x (a) | Up to 40x | 20x, 40x | 40x, 20x | 40x | 20x
Z-stacking | No | Yes | Volumetric scan | Yes (f) | No | Yes | Yes
Image gallery view | Yes | Yes (b) | Yes (d) | Yes | Yes | Yes | Yes
Certified/approved | CE Marked | CE Marked | CE IVDR Marked | CE-IVD, NMPA | None | ISO 13485, SOC2, CE IVDD | NMPA
Indication | Screening | Screening, reporting, QC | Screening (e) | Screening | Screening | Screening, QC | Screening
Abbreviations: CE, Conformité Européenne; LBC, liquid-based cytology; ISO, International Organization for Standardization; IVD/IVDR, in vitro diagnostics/in vitro diagnostic regulation; NMPA, National Medical Products Administration (China); QC, quality control.
(a) The BestCyte platform uses the 3DHISTECH scanner to scan slides at 20x. The images displayed in galleries are simulated 40x. The platform offers 4 optional magnifications for WSI viewing: 5x, 10x, 20x, and 40x.
(b) The BestCyte platform displays images in galleries with variable image sizes, not as equidistant images in lattices.
(c) Includes ThinPrep Pap test slides, ThinPrep nongynecological slides, and ThinPrep UroCyte slides.
(d) Genius Cervical AI provides a gallery image view.
(e) When used with Genius Cervical AI, the intent is to assist in cervical cancer screening. The Genius Digital Diagnostics System can also be used with ThinPrep non-gynecological microscope slides and ThinPrep UroCyte microscope slides to provide a digital image of the whole cell spot for screening.
(f) Z-stack functionality for the listed Landing Med scanners will be available by the end of 2023.

Figure 1  Representative images from several whole slide image (WSI) based artificial intelligence (AI) platforms for screening Pap tests. (A, B) Hologic Genius AI platform and WSI scanner; (C, D) KFBIO AI-assisted system and WSI scanner; (E, F) OptraSCAN CytoSIA platform with abnormal cells (E) and microorganism (F); (G) Landing Med AI platform and its WSI scanner; (H) Techcyte AI platform.

BestCyte. The BestCyte cell sorter imaging system from CellSolutions (Greensboro, NC) is a WSI-based cytology screening system. It displays abnormal cervical cells in a gallery format on a monitor screen to facilitate cytologist interpretation. One study demonstrated that the BestPrep liquid-based thin-layer Pap test and BestCyte cell sorter imaging system are equivalent to ThinPrep with manual review.43 Additional studies also demonstrated that BestCyte enables shorter ThinPrep review times with optimal intraobserver concordance, facilitates improved specificity, and reduces labor.44,45

Hologic Genius.
The Genius Digital Diagnostics System (Hologic, Marlborough, MA) is a digital cytology platform that combines an advanced AI algorithm with volumetric imaging technology for ThinPrep Pap test slides to identify abnormal (squamous and glandular) and normal (endocervical component) cells, as well as certain infections (Candida, Trichomonas vaginalis, herpes).46 Volumetric imaging is a novel scanning method in which a single scan simultaneously acquires up to 14 focal planes, eliminating the need to scan each layer individually. After the volumetric scan, image processing merges in-focus pixels from multiple planes into a single layer, which thereby reduces slide scan time. The Genius system analyzes all the cells in a ThinPrep Pap test digital image but presents only (up to 30) static images in a gallery display of the most diagnostically relevant images, known as objects of interest (OOIs). The gallery comprises 6 image tiles of OOIs shown in 5 categories (low-grade cells, high-grade cells, clusters, glandular cells, potential microorganisms). The entire system includes the Digital Imager (scanner), AI algorithm, Image Management Server, and Review Stations. Cytologists only need to examine the AI-selected representative image tiles (snapshots) of cells in the gallery, instead of screening through thousands of cells. If necessary, the cytologist is still given the option to review the WSI of the cell spot on the right portion of the display.

One recent study examined the Genius Digital Diagnostics System's performance in 555 Pap test cases with a digital scan (volumetric scan) at 14 levels relative to the ThinPrep Imaging System.46 In 86.56% of cases, a complete match between both systems was observed using the same 5 cytology categories. When the cervical histology results were also considered, the match was 90.37%. In addition, when a cytology follow-up and/or a retrospective review was applied, the match reached 97.34%. Significantly more cases of higher severity (atypical squamous cells, cannot exclude high-grade squamous intraepithelial lesion [ASC-H], and HSIL) were identified with the Genius Digital Diagnostics System, and its negative predictive value was higher. The screening time was significantly shorter with the Genius Digital Diagnostics System. With the Genius system for digital cytology, the sensitivity for ASC-H/HSIL+ and the specificity for LSIL and HSIL were superior to those of liquid-based cytology with computer-assisted screening.

Landing Med. The Landing Med (Wuhan, Hubei, China) AI platform is a supervised deep learning algorithm trained on 188,542 digital LBC cytological images. AI-assisted reading detected 92.6% of CIN2+ and 96.1% of CIN3+, significantly higher than or similar to manual reading of Pap test slides. AI-assisted reading had an equivalent sensitivity (relative sensitivity 1.01, 95% CI, 0.97-1.05) and higher specificity (relative specificity 1.26, 1.20-1.32) compared to skilled cytologists, whereas it demonstrated higher sensitivity (1.12, 1.05-1.20) and specificity (1.36, 1.25-1.48) compared to cytopathologists.22 An additional cohort study demonstrated an overall agreement rate between AI and manual Pap test reading of 94.7% (95% confidence interval [CI], 94.5%-94.8%) and a kappa of 0.92 (0.91-0.92). The detection rates of CIN2+ increased with the severity of cytology abnormality for both AI and manual reading (Ptrend < 0.001).
Generalized estimating equations, used to apply repeated measures to different groups, showed that the detection of CIN2+ among women with ASC-H or HSIL by AI was significantly higher than in the corresponding groups classified by cytologists (for ASC-H: odds ratio [OR] = 1.22, 95% CI 1.11-1.34, P < 0.001; for HSIL: OR = 1.41, 1.28-1.55, P < 0.001). AI-assisted cytology was 5.8% (3.0%-8.6%) more sensitive for the detection of CIN2+ than manual reading, with a slight reduction in specificity.34

KFBIO. KFBIO (Konfoong Biotech International Co., Ltd., Zhejiang, China) is a digital pathology platform primarily providing AI solutions for both GYN and non-GYN samples. The platform is a complete system that includes slide preparation, scanning, laboratory information system integration, the AI algorithm, and a viewer. Metrics for KFBIO, based on information from the company, include a sensitivity ≥98.2% and a specificity ≥63.5% for detecting GYN lesions (both squamous and glandular lesions), which the company reports makes it well suited for screening. The algorithm was trained with 43,000 positive Pap test samples that were annotated by trained pathologists. After scanning, the AI algorithm detects and presents selected images of concern that can be reviewed in a gallery view. While there currently are no peer-reviewed publications detailing the KFBIO platform's performance, per the manufacturer's provided materials, it has the potential to increase efficiency and decrease the number of samples requiring pathologist review.47

CytoSiA. OptraSCAN's CytoSiA (OptraSCAN, Pune, Maharashtra, India) is a digital cytology platform employed to screen LBC Pap test slides to differentiate between normal and abnormal cervical cells. In addition, the platform can identify reactive squamous cells, endometrial cells, and various infections including Actinomyces, Candida, clue cells, Trichomonas vaginalis, and herpes. CytoSiA uses a volumetric scan to provide a dual mode for scanning and viewing cytology smears and utilizes Z-stacking and an extended depth focusing algorithm. For Z-stacked multiplane images, the viewing software enables the user to navigate, as well as zoom up and down through the different planes, to detect three-dimensional (3D) regions in focus. OptraSCAN's extended depth of field algorithm generates a single, entirely focused composite image with a relatively smaller file size than multiplane images for faster acquisition and image retrieval.48 Currently, there are no peer-reviewed publications using the CytoSiA platform to evaluate its performance.

Techcyte. The Techcyte (Orem, UT, USA) Cervical Cytology System is a digital cytology screening tool that helps cytologists and pathologists read Hologic ThinPrep or BD SurePath slides. The end user is presented with the most diagnostically relevant cells and/or microorganisms using an AI-based system. The Techcyte platform accepts any good quality 40x image from a compatible scanner. Its AI algorithm uses a CNN to identify differentiating features and determine which combinations indicate diagnostically significant objects. The AI system then places these objects into the most likely classification and displays them in order of atypia for review. The whole process takes several minutes. A cytologist or pathologist can log into Techcyte on any web-enabled device and review available scans, determining specimen adequacy and evaluating the AI-based results for diagnostic relevancy.
They can also add notes and request a secondary consult.49 Currently, there are no peer-reviewed publications using Techcyte to evaluate its performance with Pap test screening.

AI in non-GYN cytology

With the promising results achieved to date in GYN cytology, AI applications have gradually expanded to non-GYN cytology, including urine, effusion, and fine needle aspiration (FNA) cytology. However, at this time, almost all non-GYN AI algorithms are still at a research stage and their integration into routine clinical workflow has not been fully evaluated.

AI for urine cytology

AI algorithms have been developed to evaluate urine cytology cases with promising preliminary results.50-54 Some of these AI algorithms were trained to classify urine cytology cases into benign, low-grade, and high-grade urothelial carcinomas, while others were developed to classify individual cell types such as benign, atypical, and malignant urothelial cells, squamous cells, etc. The performance of these AI algorithms has been high (AUCs up to 0.989, F1 scores up to 0.900). However, most of the datasets utilized in these studies were limited, without quantitative data analyses performed at the whole-slide level.50-54 One recent study developed a deep learning-based instance segmentation computational model primarily by using the N:C ratio (approximately 0.5-0.7 for low-risk cells and >0.7 for high-risk cells) in accordance with The Paris System (TPS) 2.0.55 This particular AI algorithm demonstrated performance results that were comparable to human expert panel consensus (sensitivity 79.5% and 82.1% versus 92.3%, respectively; specificity 100% and 98.9% versus 100%, respectively). Furthermore, AI-assisted analysis of WSIs, compared with the microscopic diagnosis by a cytopathologist, demonstrated superior sensitivity (92.3% versus 87.2%) and negative predictive value (96.8% versus 94.8%).55
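As a simple illustration of the TPS 2.0 N:C-ratio cut-offs cited in the study above (approximately 0.5-0.7 for low-risk cells and >0.7 for high-risk cells), the hedged sketch below buckets segmented urothelial cells by that ratio. The segmentation step and area measurements are assumed inputs for illustration and are not part of any published system.

```python
# Illustrative grouping of segmented urothelial cells by nuclear-to-cytoplasmic (N:C)
# ratio, using the thresholds cited above for The Paris System (TPS) 2.0: roughly
# 0.5-0.7 flags lower-risk cells and >0.7 flags high-risk cells. The nuclear and
# whole-cell areas are assumed to come from an upstream segmentation model (not shown).
from dataclasses import dataclass


@dataclass
class SegmentedCell:
    nuclear_area: float  # pixels^2 from the (assumed) nuclear segmentation mask
    cell_area: float     # pixels^2, nucleus plus cytoplasm


def nc_ratio(cell: SegmentedCell) -> float:
    """Nuclear-to-cytoplasmic ratio expressed as nuclear area over total cell area."""
    return cell.nuclear_area / cell.cell_area


def risk_bucket(cell: SegmentedCell, low: float = 0.5, high: float = 0.7) -> str:
    """Bucket a cell by N:C ratio; thresholds follow the TPS 2.0 values cited above."""
    ratio = nc_ratio(cell)
    if ratio > high:
        return "high-risk"
    if ratio >= low:
        return "low-risk"
    return "not flagged"


# Example: a cell whose nucleus occupies three-quarters of the cell area.
print(risk_bucket(SegmentedCell(nuclear_area=75.0, cell_area=100.0)))  # high-risk
```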
The VISIOCYT1 trial comprised an initial phase to develop and evaluate the VisioCyt test (VitaDX International, Rennes, France), a deep learning-based automated image processing tool to analyze urothelial cell morphology,56 and a second phase to validate the clinical performance of the VisioCyt diagnostic test. The first phase included a total of 598 patients (219 high-grade and 230 low-grade urothelial carcinomas and 149 negative controls), and the results demonstrated that the overall sensitivity was substantially higher with the VisioCyt test than with cytology (84.9% versus 43%), with 92.6% versus 61.1% for high-grade and 77% versus 26.3% for low-grade cases.56 The second phase included 391 participants and demonstrated that VisioCyt's sensitivity was 80.9% (95% CI 73.9%-86.4%) and specificity was 61.8% (95% CI 53.4%-69.5%). In high-grade tumors, the sensitivity was 93.7% (95% CI 86.0%-97.3%) and in low-grade tumors, the sensitivity was 66.7% (95% CI 55.2%-76.5%).57

AI for effusion cytology

In one study, an artificial neural network (ANN) model was developed using 114 image patches of pleural fluids to classify benign and metastatic carcinoma cases. The results demonstrated excellent performance (AUC: 1.00) without misclassifying a single case.58 In another study, a deep convolutional neural network (DCNN) model (Inception-ResNet-V2) was trained and validated using 569 effusion slides to classify malignant pleural effusions of metastatic breast cancer, and the results showed the DCNN model outperformed the pathologists, demonstrating higher accuracy, sensitivity, and specificity (81.1% versus 68.7%, 95.0% versus 72.5%, and 98.6% versus 88.9%, respectively).59 A weakly supervised deep learning method was also reported to classify lung carcinoma cells in 404 effusion cytology cases and demonstrated an accuracy, sensitivity, and specificity of 91.67%, 87.50%, and 94.44%, respectively, and an AUC of 0.9526.60 A deep learning model was also built to differentiate mesothelioma from normal mesothelial cells in effusion cytology specimens; this model achieved both a sensitivity and a specificity of 100%.61 Recently, the Deepcell platform (Deepcell, Inc.) was described, which combines microfluidic sorting, brightfield imaging, and real-time deep learning interpretation based on multidimensional morphology to enrich carcinoma cells from malignant effusions without cell staining or labels.62

AI for fine needle aspiration (FNA) cytology

AI technology has been explored in classifying FNA cytology specimens from thyroid, lung, breast, pancreas, and other anatomic sites.

For thyroid FNAs, most studies focused on identifying/differentiating papillary thyroid carcinoma,63-66 while a few studies tried to differentiate follicular adenoma from follicular carcinoma.67 Cytology samples were prepared using either LBC or smears. The performance of AI algorithms was variable in these studies, with the best accuracy reaching up to 100%.67 However, the datasets used in these studies were limited. In a recent study with a large dataset (a training set of 964 and a test set of 601 FNAs), a deep-learning algorithm was developed that definitively classified 45.1% (130/288) of the WSIs as either benign or malignant, with risk of malignancy rates of 2.7% and 94.7%, respectively. It also reduced the number of indeterminate cases by reclassifying 21.3% as benign.68 A deep learning-based ensemble model termed FNA-Net was developed to screen unstained thyroid FNA smears for adequacy; FNA-Net achieved an F1 score of 0.81 and an area under the precision-recall curve of 0.84 for detecting nondiagnostic slides.69

For lung cytology, AI models have been developed to identify and/or classify lung carcinomas into different morphologic types (adenocarcinoma, squamous cell carcinoma, etc.) based on image patches.70-72 In another study, a deep learning algorithm was trained on WSIs (including direct smears and H&E stained cell block sections) from 20 cases to differentiate small cell carcinoma from large cell neuroendocrine carcinoma.73 These results demonstrated that the algorithm showed high precision in distinguishing between these 2 categories on both H&E sectioned material and direct smears.73

To date, AI algorithms have also been shown in a few studies to classify breast or pancreatic FNA cytology specimens.74-76 In the pancreatic cytology study, a multilayer perceptron neural network (MNN) was trained on images from ThinPrep slides prepared from pancreatic FNA cases and then tested for its ability to distinguish benign from malignant cases.
The MNN was 100% accurate for classifying benign and malignant cases, but was only 77% accurate for atypical cases.76

AI for rapid on-site evaluation (ROSE)

Another area of cytology in which assistance from AI algorithms can improve patient care and efficiency is rapid on-site evaluation (ROSE). For many cytology services, performing ROSE entails a great deal of logistics and effort. In particular, staffing these adequacy assessments is time consuming, and AI algorithms may help increase efficiency. Additionally, AI-assisted ROSE can greatly help in resource-limited settings by potentially providing adequacy evaluations instead of relying solely on cytologists.

A few pilot studies have been reported in the literature that describe AI algorithms for ROSE developed and tested at a single institution. All of these studies used a CNN trained on a specific organ of interest. Ai et al developed a CNN-based AI algorithm for ROSE during bronchoscopy.77 Trained on 627 ROSE slides, the algorithm achieved an AUC of 0.98 with an accuracy of 84.57%. This study also compared on-site adequacy determination with that of a senior pathologist, who was 96.90% accurate, and 2 junior cytopathologists, with an accuracy of 83.30%. A separate study by Lin et al reported an AI algorithm designed to support ROSE for GI endoscopic ultrasound-FNAs.78 Their algorithm was also CNN-based and was trained on 467 Diff-Quik stained smear samples. The accuracy was 83.4%, similar to the bronchoscopy ROSE AI algorithm, with a sensitivity of 79.1%. Neither study described turnaround times for AI-assisted ROSE, which is a crucial consideration, especially in clinical settings where ROSE is already established. Additionally, both studies were conducted at a single institution, which raises concerns about overfitting and the generalizability of this work in other settings. However, as pilot studies they show promise for the development of AI algorithms for ROSE, with a need for future research in this area.

Survey of current practice in cytology AI

The ASC Digital Cytology Task Force survey, detailed in part 1 of the white paper, sought to evaluate current practices and perspectives regarding AI algorithms in cytology. The vast majority of survey participants indicated that they do not use AI algorithms in their clinical practice for either surgical pathology (84%) or cytology (77%). This is not surprising, as to date very few commercial AI algorithms have been adequately validated and adopted in cytology practice, except for the ThinPrep Imager and FocalPoint GS Imaging Systems. While studies of AI models for various cytology specimens describe good results, most are pilot studies, with few if any employed in a prospective clinical setting. Other barriers to integration of new generation AI tools in cytology that remain to be overcome are the lack of AI algorithms approved by regulatory agencies such as the FDA, as well as the appropriate infrastructure needed to deploy such algorithms. When asked about their comfort level with using AI algorithms for clinical cytology purposes, almost half of participants (49%) responded that they would be completely or very comfortable using an AI algorithm that has been approved by the FDA or other regulatory agency.
In contrast, only 8% of participants stated that they would feel completely or very comfortable using an AI algorithm without approval by the FDA or other regulatory agency.

For those users who do use AI in clinical practice, utilization differs between cytology and surgical pathology practice. The vast majority of participants that use AI for cytology utilize it for screening Pap tests, which could include screening systems such as the Hologic ThinPrep Imaging System or the BD FocalPoint GS. Participants that utilized AI for surgical pathology practice mainly used AI algorithms for biomarker quantification (65%). When it comes to future AI algorithms, participants would like to see tools developed to help screen cytology specimens (73%), assist with biomarker quantification (62%), and leverage algorithms that would help with QA/QC (38%). Most of the recent deep learning algorithms reported in the published pathology literature have focused on diagnostics rather than screening, and on the detection of histologic rather than cytologic lesions. The survey results also highlight the need to develop AI models for improving workflow rather than merely replacing diagnostic human expertise.

Evaluating and validating AI for cytology

A variety of factors are likely to affect the implementation and successful outcome of adopting AI technology in a clinical cytology setting. Among the factors to consider are (1) the intended use case (eg, screening, diagnostics) and the reasons for adopting AI technology (eg, automation, standardization); (2) the resources required, including hardware, software, and personnel costs; (3) evaluation of AI performance and clinical validation; (4) integration into the cytology workflow, including digitization of cytology slides and storage of digital images, integration of the image management system and AI into the laboratory information system, and cytologist and pathologist workstation setup; (5) adherence to federal, state, and individual laboratory policies, regulations, and/or guidelines; and (6) return on investment and/or reimbursement.

Published AI cytology studies have utilized a variety of different AI models such as neural networks, support vector machines, k-nearest neighbors, or a combination (ensemble) of models.32,34,55,61,79 Currently, no single model has been shown to be superior for pathology evaluation. Different models may be better suited based on specimen type and clinical need. While most studies of cytology AI models demonstrate good sensitivity (>80%) and specificity (>80%) on average, these AI models are not always published with sufficient independent validation. Moreover, performance of these AI systems may differ based upon variations within the clinical setting and the needs of the cytology team. There is also a question of what performance characteristics are best used to evaluate or compare an AI model. To address this question, the ASC Digital Cytology Task Force survey asked participants what data points would be helpful in evaluating AI algorithms developed for cytology practice. Beyond the standard set of performance metrics such as sensitivity and specificity, other useful metrics for AI model evaluation include area under the curve (AUC) analysis, F1 score, and mean squared error, amongst others. However, most survey participants preferred relying on familiar statistical measures of evaluation such as sensitivity, specificity, and AUC. Other metrics participants considered helpful were providing end users with a gallery of predicted images to review or a prediction heat map overlaid on the slide of interest.
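For laboratories assembling a labeled validation set, the familiar metrics named above can be computed directly from case-level ground truth and model outputs. The following sketch uses scikit-learn with toy data and an assumed operating threshold of 0.5; it illustrates the calculations only and is not a substitute for a formal validation protocol.

```python
# Illustrative computation of the evaluation metrics discussed above (sensitivity,
# specificity, AUC, F1) for a binary cytology classifier, given case-level ground
# truth and model scores from a validation set. The labels and scores are toy data.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = abnormal, 0 = negative
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6, 0.95, 0.05])
y_pred = (y_score >= 0.5).astype(int)                 # assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true-positive rate
specificity = tn / (tn + fp)          # true-negative rate
auc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"AUC={auc:.2f} F1={f1:.2f}")
```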
Guidelines for validating AI for clinical pathology practice are still lacking. Indeed, most participants felt that best practice guidelines and recommendations for clinical AI usage would be either very useful (38%) or useful (36%). While standard clinical validation steps can be applied to AI algorithms as with other clinical tests, there may be AI-specific issues that need to be evaluated. As more AI algorithms arise for application in cytology practice, there is a need to help guide pathologists in the evaluation of AI algorithms and how best to integrate them into practice. As with any laboratory test, pathologists must be aware of the benefits and limitations of using an AI tool in a clinical setting.

Due to insufficient data and the wide variability of AI algorithms in cytology, the ASC Digital Cytology Task Force feels it is premature to recommend a specific number of cases needed for validating AI algorithms in general at this moment. The ASC Digital Cytology Task Force also notes that AI algorithms can vary drastically in their intended use (eg, QA, billing, diagnostics) and that the considerations below may not apply depending on their usage. However, the following can be considered during the validation/verification study of AI algorithms in the cytology laboratory (Table 3).

Reporting recommendation and legal considerations

Reimbursement for using computational pathology tools in clinical practice is necessary to further support adoption. In the United States, reimbursement of computer-assisted screening of Pap tests has existed for many years: current procedural terminology (CPT) code 88174 is reported for a cervical or vaginal cytology specimen collected in preservative fluid, prepared by automated thin-layer preparation, and screened by an automated system under physician supervision, and CPT code 88175 is reported when automated screening is followed by manual rescreening or review under physician supervision. The American Medical Association (AMA) has recently published Appendix S to the CPT coding scheme, which outlines the approach to categorizing AI applications. Computer-aided detection (CAD) algorithms used by radiologists currently have CPT codes and are classified in the "Assistive" category, the lowest level of AI in the taxonomy. These CAD algorithms are similar to pathology AI algorithms currently in development in that they detect areas of interest for human interpretation. CPT coding protocols for the more advanced "Augmentative" and "Autonomous" categories are in their infancy, and other specialties will likely pave the way before pathology. Since the FDA has not yet approved any digital platform for primary diagnosis of cytology specimens in the United States, any application of WSI such as AI for cytology would need to be validated as a laboratory developed test (LDT). As such, the presence of a disclaimer in a cytology report that relied on AI technology is advisable.

The legal implications of applying deep learning AI algorithms to analyze cytology specimens other than Pap tests are uncertain.81 Machine learning algorithms have been used for many years to assist with screening Pap tests, even allowing slides to be signed out as negative without full slide review by a human. This has not precipitated a fundamental change in the legal landscape.
This has not precipitated a fundamental Table 3 Summary of considerations for validating artificial intelligence (AI) algorithms in cytology. Considerations 1. The validation study should closely emulate the clinical workflow practice intended for the AI algorithm and comprises of samples intended for the AI algorithm. 2. Several performance metrics including precision, accuracy, sensitivity, and specificity of the AI algorithm should be evaluated during the validation study. 3. While there is no specific number of cases that should be included for AI validation, a sufficient number of cases reflecting the diversity and complexity of the samples encountered at the laboratory should be included to adequately evaluate the AI algorithms performance metrics (see note). 4. Formal personnel training is necessary for familiarization before using AI algorithms for clinical practice. 5. Establishment of a quality assurance program is recommended after AI implementation to detect any errors or issues that may arise. 6. If the intended use of the AI algorithm changes from the initial validation, a revalidation for the new use should be performed. Notes I. The CAP published guidelines for HER2 quantitative imaging analysis (QIA) to recommend that 20 positive and 20 negative cases are needed to validate FDA approved QIA or double the numbers for QIA without FDA approval.80 If the AI algorithms to be validated for cytology practice are similar to HER2 QIA, the validation study may include a sample set of at least 20 cases per AI classification category for AI algorithm approved/certified by regulatory agency or 40 cases for others. The number may be modified based on the performance of AI algorithm. II. In the ASC Digital Cytology Task Force survey, regarding the number of cases needed to validate AI algorithm in cytology, the majority of respondents answered a number in the range of 100-200 (60% of 231 respondents), thus, a number within this range can be considered. Artificial intelligence in cytology 107 change in the legal landscape. The “black box” nature of advanced deep learning algorithms as well as their ability to continuously adapt in response to new data raises profound questions about the ultimate responsibility in the event of errors and mistakes. Physicians, health care systems, and al- gorithm developers could all potentially bear responsibility, but formulation of a coherent and binding legal framework in this context is necessary.82 It is anticipated that cytopathol- ogists will likely accept that liability remains with them in scenarios where AI provides assistance, especially when the cytology report is ultimately signed out by a human.83 For this reason, it is important to consider whether the analysis of images shown to the cytopathologist is retained and archived in case of the need for medicolegal review, in addition to glass slide and WSI retention requirements. Matters may become more problematic in scenarios where AI algorithms may be used to make a diagnosis in the absence of human oversight. Until these issues are resolved, liability issues are likely to strongly inhibit deployment of these more advanced systems.84 Conclusion and future direction In recent years, the largest area of global growth in any industry has been AI. This holds true for the application of AI to medicine, including cytology. Cytology has a long history with AI, given that computers have been leveraged for decades now for computer-assisted screening of Pap tests. 
As more complex AI models are developed and applied to non-GYN use cases in cytology, guidance and recommendations are needed in order to best assess and incorporate these tools into clinical practice. The second part of this white paper accordingly addresses AI as it pertains to cytology and offers such recommendations for AI use in clinical cytology practice. A collective effort is needed from the cytology community to share its experience using AI in order to establish more evidence-based guidelines.

Funding sources

No specific funding was disclosed.

Conflict of interest disclosures

OL: Consultant for Hologic and Jansen. MMB: Consultant and/or advisory board member of Visiopharm, Aiforia, and Roche Diagnostics. AVP: Advisory board for PathPresenter, Consultant for PAIGE. LP: Consultant for Hamamatsu & AIxMed, advisory board for Ibex and NTP, co-founder of LeanAP Innovators & Placenta AI. ZL: Consultant for PathAI; Advisory Board for Roche Diagnostics. All other authors have no financial relationship to disclose.

CRediT authorship contribution statement

David Kim: Writing – review & editing, Writing – original draft. Kaitlin E. Sundling: Writing – review & editing, Writing – original draft. Renu Virk: Writing – review & editing, Writing – original draft. Michael J. Thrall: Writing – review & editing, Writing – original draft. Susan Alperstein: Writing – review & editing. Marilyn M. Bui: Writing – review & editing. Heather Chen-Yost: Writing – review & editing. Amber D. Donnelly: Writing – review & editing. Oscar Lin: Writing – review & editing. Xiaoying Liu: Writing – review & editing. Emilio Madrigal: Writing – review & editing. Pamela Michelow: Writing – review & editing. Fernando C. Schmitt: Writing – original draft. Philippe R. Vielh: Writing – review & editing. Maureen F. Zakowski: Writing – review & editing. Anil V. Parwani: Writing – original draft. Elizabeth Jenkins: Writing – review & editing. Momin T. Siddiqui: Writing – review & editing. Liron Pantanowitz: Writing – review & editing, Writing – original draft, Conceptualization. Zaibo Li: Writing – review & editing, Writing – original draft, Conceptualization.

References

1. Halford JA, Wright RG, Ditchmen EJ. Quality assurance in cervical cytology screening. Comparison of rapid rescreening and the PAPNET Testing System. Acta Cytol. 1997;41:79-81.
2. Halford JA, Wright RG, Ditchmen EJ. Prospective study of PAPNET: review of 25,656 Pap smears negative on manual screening and rapid rescreening. Cytopathology. 1999;10:317-323.
3. Mango LJ, Valente PT. Neural-network-assisted analysis and microscopic rescreening in presumed negative cervical cytologic smears. A comparison. Acta Cytol. 1998;42:227-232.
4. Chang AR, Lin WF, Chang A, Chong KS. Can technology expedite the cervical cancer screening process? A Hong Kong experience using the AutoPap primary screening system with location-guided screening capability. Am J Clin Pathol. 2002;117:437-443.
5. Huang TW, Lin TS, Lee JS. Sensitivity studies of AutoPap system location-guided screening of cervical-vaginal cytologic smears. Acta Cytol. 1999;43:363-368.
6. Lee JS, Kuan L, Oh S, Patten FW, Wilbur DC. A feasibility study of the AutoPap system location-guided screening. Acta Cytol. 1998;42:221-226.
7. Biscotti CV, Dawson AE, Dziura B, et al. Assisted primary screening using the automated ThinPrep Imaging System. Am J Clin Pathol. 2005;123:281-287.
8. Wilbur DC, Black-Schaffer WS, Luff RD, et al.
The Becton Dickinson FocalPoint GS Imaging System: clinical trials demonstrate significantly improved sensitivity for the detection of important cervical lesions. Am J Clin Pathol. 2009;132:767-775.
9. Colgan TJ, Bon N, Clipsham S, et al. A validation study of the FocalPoint GS imaging system for gynecologic cytology screening. Cancer Cytopathol. 2013;121:189-196.
10. Levi AW, Chhieng DC, Schofield K, Kowalski D, Harigopal M. Implementation of FocalPoint GS location-guided imaging system: experience in a clinical setting. Cancer Cytopathol. 2012;120:126-133.
11. Stein MD, Fregnani JH, Scapulatempo C, Mafra A, Campacci N, Longatto-Filho A. Performance and reproducibility of gynecologic cytology interpretation using the FocalPoint system: results of the RODEO Study Team. Am J Clin Pathol. 2013;140:567-571.
12. Kitchener HC, Blanks R, Cubie H, et al. MAVARIC – a comparison of automation-assisted and manual cervical screening: a randomised controlled trial. Health Technol Assess. 2011;15:iii-iv, ix-xi, 1-170.
13. Kitchener HC, Blanks R, Dunn G, et al. Automation-assisted versus manual reading of cervical cytology (MAVARIC): a randomised controlled trial. Lancet Oncol. 2011;12:56-64.
14. Roberts JM, Thurloe JK, Bowditch RC, et al. A three-armed trial of the ThinPrep imaging system. Diagn Cytopathol. 2007;35:96-102.
15. Palmer TJ, Nicoll SM, McKean ME, et al. Prospective parallel randomized trial of the MultiCyte™ ThinPrep® imaging system: the Scottish experience. Cytopathology. 2013;24:235-245.
16. Pantanowitz L, Sinard JH, Henricks WH, et al. Validating whole slide imaging for diagnostic purposes in pathology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med. 2013;137:1710-1722.
17. El-Garby EA, Parwani AV, Pantanowitz L. Whole slide imaging: widening the scope of cytopathology. Diagn Histopathol. 2014;20:456-461.
18. Zhao C, Wu T, Ding X, et al. International telepathology consultation: three years of experience between the University of Pittsburgh Medical Center and KingMed Diagnostics in China. J Pathol Inform. 2015;6:63.
19. Amin M, Parwani AV, Pantanowitz L. Digital Imaging. New York, NY: Springer; 2014.
20. Crowell EF, Bazin C, Thurotte V, et al. Adaptation of CytoProcessor for cervical cancer screening of challenging slides. Diagn Cytopathol. 2019;47:890-897.
21. Crowell EF, Bazin C, Saunier F, et al. CytoProcessor™: a new cervical cancer screening system for remote diagnosis. Acta Cytol. 2019;63:215-223.
22. Bao H, Bi H, Zhang X, et al. Artificial intelligence-assisted cytology for detection of cervical intraepithelial neoplasia or invasive cancer: a multicenter, clinical-based, observational study. Gynecol Oncol. 2020;159:171-178.
23. Dounias G, Bjerregaard B, Jantzen J, et al. Automated identification of cancerous smears using various competitive intelligent techniques. Oncol Rep. 2006;15 Spec no.:1001-1006.
24. Giovagnoli MR, Cenci M, Olla SV, Vecchione A. Cervical false negative cases detected by neural network-based technology. Critical review of cytologic errors. Acta Cytol. 2002;46:1105-1109.
25. Ikeda K, Sakabe N, Maruyama S, et al. Relationship between liquid-based cytology preservative solutions and artificial intelligence: liquid-based cytology specimen cell detection using YOLOv5 deep convolutional neural network. Acta Cytol. 2022;66:542-550.
26. Sheela Shiney T, Rose RJ. Deep auto encoder based extreme learning system for automatic segmentation of cervical cells. IETE J Res.
2021; 69:1e21. 27. Bora K, Chowdhury M, Mahanta LB, Kundu MK, Das AK. Automated classification of Pap smear images to detect cervical dysplasia. Comput Methods Programs Biomed. 2017;138:31e47. 28. Chankong T, Theera-Umpon N, Auephanwiriyakul S. Automatic cervi- cal cell segmentation and classification in Pap smears. Comput Methods Programs Biomed. 2014;113:539e556. 29. Song Y, Zhang L, Chen S, Ni D, Lei B, Wang T. Accurate segmenta- tion of cervical cytoplasm and nuclei based on multiscale convolutional network and graph partitioning. IEEE Trans Biomed Eng. 2015;62: 2421e2433. 30. Song Y, Zhang L, Chen S, et al. A deep learning based framework for accurate segmentation of cervical cytoplasm and nuclei. In: Paper pre- sented at: 2014 36th Annual International Conference of the IEEE En- gineering in Medicine and Biology Society, EMBC, Chicago, IL 2014; 2014. 31. Zhao L, Li K, Wang M, et al. Automatic cytoplasm and nuclei segmen- tation for color cervical smear image using an efficient gap-search MRF. Comput Biol Med. 2016;71:46e56. 32. Tang HP, Cai D, Kong YQ, et al. Cervical cytology screening facili- tated by an artificial intelligence microscope: a preliminary study. Can- cer Cytopathol. 2021;129:693e700. 33. Holmström O, Linder N, Kaingu H, et al. Point-of-care digital cytology with artificial intelligence for cervical cancer screening in a resource- limited setting. JAMA Netw Open. 2021;4:e211740. 34. Bao H, Sun X, Zhang Y, et al. The artificial intelligence-assisted cytology diagnostic system in large-scale cervical cancer screening: a population-based cohort study of 0.7 million women. Cancer Med. 2020;9:6896e6906. 35. Alsalatie M, Alquran H, Mustafa WA, Mohd Yacob Y, Ali Alayed A. Analysis of cytology pap smear images based on ensemble deep learning approach. Diagnostics (Basel). 2022;12:2756. 36. Tao X, Chu X, Guo B, et al. Scrutinizing high-risk patients from ASC- US cytology via a deep learning model. Cancer Cytopathol. 2022;130: 407e414. 
37. Kanavati F, Hirose N, Ishii T, Fukuda A, Ichihara S, Tsuneki M. A deep learning model for cervical cancer screening on liquid-based cytology specimens in whole slide images. Cancers (Basel). 2022;14:1159.
38. Sornapudi S, Brown GT, Xue Z, Long R, Allen L, Antani S. Comparing deep learning models for multi-cell classification in liquid-based cervical cytology image. AMIA Annu Symp Proc. 2019;2019:820–827.
39. Zhu X, Li X, Ong K, et al. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat Commun. 2021;12:3541.
40. Zhang L, Lu L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: deep convolutional networks for cervical cell classification. IEEE J Biomed Health Inform. 2017;21:1633–1643.
41. Martin V, Kim TH, Kuan M, Kuko M, Pourhomayoun M, Martin S. A more comprehensive cervical cell classification using convolutional neural network. J Am Soc Cytopathol. 2018;7:S66.
42. Wentzensen N, Lahrmann B, Clarke MA, et al. Accuracy and efficiency of deep-learning-based automation of dual stain cytology in cervical cancer screening. J Natl Cancer Inst. 2021;113:72–79.
43. Delga A, Goffin F, Kridelka F, Marée R, Lambert C, Delvenne P. Evaluation of CellSolutions BestPrep® automated thin-layer liquid-based cytology Papanicolaou slide preparation and BestCyte® cell sorter imaging system. Acta Cytol. 2014;58:469–477.
44. Chantziantoniou N. BestCyte® Cell Sorter Imaging System: primary and adjudicative whole slide image rescreening review times of 500 ThinPrep Pap test thin-layers - an intra-observer, time-surrogate analysis of diagnostic confidence potentialities. J Pathol Inform. 2022;13:100095.
45. Chantziantoniou N. BestCyte® primary screening of 500 ThinPrep Pap Test thin-layers: 3 Cytologists' interobserver diagnostic concordance with predicate manual microscopy relative to Truth Reference diagnoses defining NILM, ASCUS+, LSIL+, and ASCH+ thresholds for specificity, sensitivity, and equivalency grading. J Pathol Inform. 2023;14:100182.
46. Ikenberg H, Lieder S, Ahr A, Wilhelm M, Schön C, Xhaja A. Comparison of the Hologic Genius Digital Diagnostics System with the ThinPrep Imaging System – a retrospective assessment. Cancer Cytopathol. 2023;131:424–432.
47. Konfoong Biotech International Co. Available at: https://www.kfbiopathology.com/pathology-ai/cervicalcancer-diagnosis/. Accessed August 1, 2023.
48. OptraSCAN. Available at: https://optrascan.com/blogs/cytosia-to-advance-cytology-screening-throughunmatched-image-quality-and-artificial-intelligence-based-image-analysis-solution/. Accessed August 1, 2023.
49. Techcyte. Available at: https://techcyte.com/products/cervical-cancer-screen/?cn-reloaded=1. Accessed August 1, 2023.
50. Muralidaran C, Dey P, Nijhawan R, Kakkar N. Artificial neural network in diagnosis of urothelial cell carcinoma in urine cytology. Diagn Cytopathol. 2015;43:443–449.
51. Sanghvi AB, Allen EZ, Callenberg KM, Pantanowitz L. Performance of an artificial intelligence algorithm for reporting urine cytopathology. Cancer Cytopathol. 2019;127:658–666.
52. Vaickus LJ, Suriawinata AA, Wei JW, Liu X. Automating the Paris System for urine cytopathology – a hybrid deep-learning and morphometric approach. Cancer Cytopathol. 2019;127:98–115.
53. Awan R, Benes K, Azam A, et al. Deep learning based digital cell profiles for risk stratification of urine cytology images. Cytometry A. 2021;99:732–742.
54. Nojima S, Terayama K, Shimoura S, et al. A deep learning system to diagnose the malignant potential of urothelial carcinoma cells in cytology specimens. Cancer Cytopathol. 2021;129:984–995.
55. Ou Y-C, Tsao T-Y, Chang M-C, et al. Evaluation of an artificial intelligence algorithm for assisting the Paris System in reporting urinary cytology: a pilot study. Cancer Cytopathol. 2022;130:872–880.
56. Lebret T, Pignot G, Colombel M, et al. Artificial intelligence to improve cytology performances in bladder carcinoma detection: results of the VisioCyt test. BJU Int. 2022;129:356–363.
57. Lebret T, Paoletti X, Pignot G, et al. Artificial intelligence to improve cytology performance in urothelial carcinoma diagnosis: results from validation phase of the French, multicenter, prospective VISIOCYT1 trial. World J Urol. 2023;41:2381–2388.
58. Barwad A, Dey P, Susheilia S. Artificial neural network in diagnosis of metastatic carcinoma in effusion cytology. Cytometry B Clin Cytom. 2012;82B:107–111.
59. Park HS, Chong Y, Lee Y, et al. Deep learning-based computational cytopathologic diagnosis of metastatic breast carcinoma in pleural fluid. Cells. 2023;12:1847.
60. Xie X, Fu CC, Lv L, et al. Deep convolutional neural network-based classification of cancer cells on cytological pleural effusion images. Mod Pathol. 2022;35:609–614.
61. Tosun AB, Yergiyev O, Kolouri S, Silverman JF, Rohde GK. Detection of malignant mesothelioma using nuclear structure of mesothelial cells in effusion cytology specimens. Cytometry. 2015;87:326–333.
62. Mavropoulos A, Johnson C, Lu V, et al. Artificial intelligence-driven morphology-based enrichment of malignant cells from body fluid. Mod Pathol. 2023;36:100195.
63. Sanyal P, Mukherjee T, Barui S, Das A, Gangopadhyay P. Artificial intelligence in cytopathology: a neural network to identify papillary carcinoma on thyroid fine-needle aspiration cytology smears. J Pathol Inform. 2018;9:43.
64. Dov D, Kovalsky SZ, Cohen J, Range DE, Henao R, Carin L. Thyroid cancer malignancy prediction from whole slide cytopathology images. Paper presented at: Machine Learning for Healthcare Conference; Ann Arbor, MI; 2019.
65. Gopinath B, Shanthi N. Development of an automated medical diagnosis system for classifying thyroid tumor cells using multiple classifier fusion. Technol Cancer Res Treat. 2015;14:653–662.
66. Guan Q, Wang Y, Ping B, et al. Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: a pilot study. J Cancer. 2019;10:4876.
67. Savala R, Dey P, Gupta N. Artificial neural network model to distinguish follicular adenoma from follicular carcinoma on fine needle aspiration of thyroid. Diagn Cytopathol. 2018;46:244–249.
68. Dov D, Elliott Range D, Cohen J, et al. Deep-learning-based screening and ancillary testing for thyroid cytopathology. Am J Pathol. 2023;193:1185–1194.
69. Jang J, Kim YH, Westgate B, et al. Screening adequacy of unstained thyroid fine needle aspiration samples using a deep learning-based classifier. Sci Rep. 2023;13:13525.
70. Teramoto A, Yamada A, Kiriyama Y, et al. Automated classification of benign and malignant cells from lung cytological images using deep convolutional neural network. Inform Med Unlocked. 2019;16:100205.
71. Teramoto A, Tsukamoto T, Kiriyama Y, Fujita H. Automated classification of lung cancer types from cytological images using deep convolutional neural networks. Biomed Res Int. 2017;2017:4067832.
72. Teramoto A, Tsukamoto T, Yamada A, et al. Deep learning approach to classification of lung cytological images: two-step training using actual and synthesized images by progressive growing of generative adversarial networks. PLoS One. 2020;15:e0229951.
73. Gonzalez D, Dietz RL, Pantanowitz L. Feasibility of a deep learning algorithm to distinguish large cell neuroendocrine from small cell lung carcinoma in cytology specimens. Cytopathology. 2020;31:426–431.
74. Dey P, Logasundaram R, Joshi K. Artificial neural network in diagnosis of lobular carcinoma of breast in fine-needle aspiration cytology. Diagn Cytopathol. 2013;41:102–106.
75. Subbaiah R, Dey P, Nijhawan R. Artificial neural network in breast lesions from fine-needle aspiration cytology smear. Diagn Cytopathol. 2014;42:218–224.
76. Momeni-Boroujeni A, Yousefi E, Somma J. Computer-assisted cytologic diagnosis in pancreatic FNA: an application of neural networks to image analysis. Cancer Cytopathol. 2017;125:926–933.
77. Ai D, Hu Q, Chao Y-C, et al. Artificial intelligence-based rapid on-site cytopathological evaluation for bronchoscopy examinations. Intell Based Med. 2022;6:100069.
78. Lin R, Sheng LP, Han CQ, et al. Application of artificial intelligence to digital-rapid on-site cytopathology evaluation during endoscopic ultrasound-guided fine needle aspiration: a proof-of-concept study. J Gastroenterol Hepatol. 2023;38:883–887.
79. Landau MS, Pantanowitz L. Artificial intelligence in cytopathology: a review of the literature and overview of commercial landscape. J Am Soc Cytopathol. 2019;8:230–241.
80. Bui MM, Riben MW, Allison KH, et al. Quantitative image analysis of human epidermal growth factor receptor 2 immunohistochemistry for breast cancer: guideline from the College of American Pathologists. Arch Pathol Lab Med. 2019;143:1180–1195.
81. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare. Elsevier:295–336. https://doi.org/10.1016/B978-0-12-818438-7.00012-5.
82. Sullivan HR, Schweikart SJ. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA J Ethics. 2019;21:E160–E166.
83. Jaremko JL, Azar M, Bromwich R, et al. Canadian Association of Radiologists white paper on ethical and legal issues related to artificial intelligence in radiology. Can Assoc Radiol J. 2019;70:107–118.
84. Mezrich JL. Is artificial intelligence (AI) a pipe dream? Why legal issues present significant hurdles to AI autonomy. AJR Am J Roentgenol. 2022;219:152–156.