In Cancer Medicine, The Borderline Cases Are The Most Difficult To Diagnose

At Massachusetts General Hospital, a radiologist once recalled a mammogram that looked perfectly normal to the human eye, the sort of case that might have been signed off without hesitation. But the AI system reviewing it raised a flag: high risk. A second read confirmed the machine’s suspicion. Hidden in a corner of the breast was a tiny lesion—an early cancer, hiding in plain sight, that would otherwise have been missed.

For pathologists and radiologists alike, the borderline cases, when noticed, are the troubling ones. The tiny clusters of atypical cells that might mean nothing—or might mean everything. The sliver of tissue where a single misplaced judgment could determine a patient’s fate. False positives risk unnecessary fear and procedures. False negatives risk lives lost. The stakes are immense.

The rise of AI in pathology

In 2016, Dr. Lily Peng, then a physician-scientist at Google, was already known for her groundbreaking work using AI to detect diabetic retinopathy from retinal images. But she soon turned to one of medicine’s most complex challenges: cancer pathology.

“Cancer is not one disease,” Peng explained. “It’s hundreds of diseases, each with countless variations. The complexity is staggering—but that’s exactly where AI can make the biggest impact.”

Her team partnered with Memorial Sloan Kettering Cancer Center, home to some of the world’s most experienced pathologists. Among them was Dr. David Klimstra, who had spent decades peering through microscopes at tissue slides. Initially skeptical, he wondered: How could a computer possibly grasp what I’ve spent 30 years training my eyes to see?

But as the models improved, Klimstra began to change his mind. The AI didn’t just match human accuracy—it sometimes found new prognostic markers: subtle patterns in cells that predicted tumor aggressiveness. Features even seasoned pathologists had never noticed.

“It wasn’t replacing human expertise,” Klimstra reflected. “It was augmenting it—helping us see things we didn’t know we were missing.”

From microscope to monitor

The shift began when glass slides were digitized. Once a tissue sample could be scanned into a gigapixel image, algorithms could comb through it as systematically as a radiologist scrolling through a CT scan.
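To make that “combing” concrete, here is a minimal sketch of the standard tile-and-score approach to gigapixel slides: cut the image into patches, give each patch a suspicion score, and flag high-scoring patches for human review. The function names and the darkness-based stand-in “classifier” are invented for illustration; a real pipeline would run a trained neural network over each tile.

```python
import numpy as np

def tile_slide(slide, tile=256):
    """Split a slide image (H, W) into non-overlapping tiles."""
    h, w = slide.shape
    tiles, coords = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(slide[y:y + tile, x:x + tile])
            coords.append((y, x))
    return np.stack(tiles), coords

def score_tiles(tiles):
    """Stand-in classifier: mean darkness as a 'suspicion' score.
    A trained model would replace this function in practice."""
    return 1.0 - tiles.mean(axis=(1, 2)) / 255.0

def flag_regions(slide, tile=256, threshold=0.5):
    """Return coordinates of tiles that warrant human review."""
    tiles, coords = tile_slide(slide, tile)
    scores = score_tiles(tiles)
    return [c for c, s in zip(coords, scores) if s > threshold]

# Demo on a synthetic 1024x1024 "slide": bright background with
# one dark 256x256 patch standing in for a suspicious region.
slide = np.full((1024, 1024), 230, dtype=np.uint8)
slide[512:768, 256:512] = 40
print(flag_regions(slide))  # → [(512, 256)]
```

The same divide-and-score structure is why digitization mattered so much: once the slide is an array, every region gets examined with identical rigor, with no fatigue and no skipped corners.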

One of the pioneers in this space was Thomas Fuchs, a computational pathologist who helped develop Paige Prostate, an AI tool that became the first to receive FDA authorization for assisting in cancer detection. In clinical trials, Paige was able to highlight suspicious regions on prostate slides with accuracy rivaling expert pathologists.

Recent breakthrough: foundation models transform pathology

In 2024, the field witnessed a paradigm shift with the emergence of foundation models specifically designed for pathology. Dr. Faisal Mahmood’s team at Harvard released UNI, a general-purpose model trained on over 100 million images from 100,000 slides representing both diseased and healthy organs. This was followed by PathChat, an AI assistant that could have “conversations” about uploaded pathology images and generate reports—receiving FDA breakthrough device designation within months of its release.

The impact was immediate. At Cleveland Clinic, Dr. Daniel Roberts observed that these AI copilots were streamlining laboratory workflows in ways previously unimaginable. “Tools for biomarker interpretation remain at the forefront,” Roberts noted, “but we’re seeing a growing number of solutions for workflow efficiencies at every step, from slide preparation to case signout.”

For everyday practice, this meant:

  • Earlier detection of prostate cancers that might otherwise slip past.

  • Faster identification of tiny breast cancer metastases in lymph nodes.

  • Minutes saved per case on routine scanning—time that could instead be spent on the rare and difficult cases that truly required expert judgment.

  • Foundation models that could analyze any tissue type without specific training, adapting to rare cancers previously underrepresented in datasets.

What once felt like an endless stream of microscope slides became a workflow where AI did the first pass, flagging what needed human attention. Pathologists were no longer hunting for needles in haystacks.
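The first-pass triage workflow described above can be sketched in a few lines. This is a hypothetical illustration (the Case type, the triage function, and the 0.2 threshold are invented for the example), not any deployed system:

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_score: float  # model's malignancy probability (hypothetical)

def triage(cases, review_threshold=0.2):
    """AI-first triage: route high-scoring cases to a pathologist and
    queue clearly negative ones for batch sign-off. Real deployments
    tune the threshold so false negatives below it are vanishingly rare."""
    to_review = [c for c in cases if c.ai_score >= review_threshold]
    to_archive = [c for c in cases if c.ai_score < review_threshold]
    # Highest-risk cases first, so expert time goes where it matters most.
    to_review.sort(key=lambda c: c.ai_score, reverse=True)
    return to_review, to_archive

cases = [Case("A101", 0.03), Case("A102", 0.91), Case("A103", 0.27)]
review, archive = triage(cases)
print([c.case_id for c in review])   # → ['A102', 'A103']
print([c.case_id for c in archive])  # → ['A101']
```

The design point is the ordering: the machine reads everything, and the human reads what the machine ranks as risky, starting with the riskiest.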

Clinical impact

AI-powered pathology has shown particular strength in the places where human judgment is most fragile: the borderline calls.

  • In breast cancer screening, deep learning systems have outperformed radiologists in both UK and US datasets, reducing both false positives and false negatives.

  • In prostate cancer, AI overlays can help distinguish ambiguous, borderline cases of glandular irregularity that previously led to diagnostic disagreements.

  • In cervical pathology, early trials suggest AI may outperform generalist pathologists in spotting precancerous lesions.

The CAMELYON challenge: a turning point

The CAMELYON16/17 challenge marked a watershed moment in digital pathology. AI models tasked with detecting breast cancer metastases in lymph nodes not only matched but exceeded the performance of a panel of pathologists working under time constraints designed to mimic routine workflows. Google’s LYNA (LYmph Node Assistant) emerged from this challenge, enabling pathologists to detect significantly more micrometastases with shorter review times.

At Stanford, pediatric radiologist Dr. Kristen Yeom faced a particularly thorny problem: brain tumors in children. Because they are rare, most radiologists see only a few cases in their careers. But her team’s AI was able to spot subtle water-diffusion patterns in MRIs—features invisible to the human eye—that reliably signaled tumor infiltration.

“The AI was picking up on things we couldn’t,” Yeom explained. “It gave us early diagnostic certainty in cases where delay could be devastating.”

At Boston Children’s Hospital, Dr. Tina Young Poussaint began applying these tools to complex pediatric cases. The result: diagnostic uncertainty fell by 60%. For parents facing terrifying questions about their child’s future, that level of clarity was priceless.

AI’s Expanding Reach: From Skin to Colon

The success of AI in traditional pathology has sparked innovations across multiple specialties:

Melanoma Detection: By 2024, AI systems analyzing dermoscopic images achieved remarkable accuracy in melanoma detection. Dr. Kamran Avanaki’s OCT-based algorithm at the University of Illinois Chicago could differentiate melanoma from benign nevi with 99% accuracy. In a Stanford Medicine study, AI assistance improved diagnostic sensitivity and specificity across all practitioner levels—medical students improved by 13 points in sensitivity, while even experienced dermatologists saw measurable gains.

Colonoscopy Revolution: Computer-aided detection (CADe) systems have transformed polyp detection during colonoscopy. A 2024 meta-analysis of 44 randomized controlled trials involving nearly 35,000 patients showed that AI-assisted colonoscopy increased adenoma detection rates by 20% and decreased miss rates by 55%. At Yale’s West Haven VA, the implementation of CADe technology has become routine, with endoscopists noting particular value in detecting diminutive polyps that might otherwise be overlooked.
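As rough arithmetic, here is what those relative changes would mean against an assumed baseline. The 30% detection rate and 25% miss rate below are hypothetical round numbers chosen for illustration, not figures from the meta-analysis:

```python
# Hypothetical baseline figures (illustrative round numbers):
baseline_adr = 0.30   # adenoma detection rate
baseline_miss = 0.25  # adenoma miss rate

# Relative changes reported for AI-assisted colonoscopy:
adr_with_ai = baseline_adr * 1.20    # +20% relative increase
miss_with_ai = baseline_miss * 0.45  # -55% relative decrease

print(f"Detection rate: {baseline_adr:.0%} -> {adr_with_ai:.0%}")
print(f"Miss rate: {baseline_miss:.0%} -> {miss_with_ai:.2%}")
```

The point of the exercise: a “20% increase” on a 30% baseline is six additional percentage points of detection, while a “55% decrease” cuts the miss rate by more than half in absolute terms as well.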

However, these advances come with nuanced challenges. A 2025 Polish study found that continuous exposure to AI assistance might lead to “deskilling”—where endoscopists’ performance without AI actually decreased over time, suggesting the need for balanced training approaches.

Beyond diagnosis

These advances go further than simple yes/no calls. At Stanford, Dr. Michelle Monje, a pediatric neuro-oncologist, now uses AI-driven maps to define the exact borders of brain tumors. Neurosurgeon Dr. Samuel Cheshier describes it as having a “GPS for the brain.” Instead of relying only on what the surgeon can see, AI maps out tumor tissue versus healthy brain in three dimensions. The result: more complete resections, fewer complications, and better outcomes.

Quantitative continuous scoring: the next frontier

At ASCO 2025, AstraZeneca and Daiichi Sankyo presented their Quantitative Continuous Scoring (QCS) computational pathology solution—the first AI-based system to receive FDA Breakthrough Device Designation as a cancer companion diagnostic. The VENTANA TROP2 RxDx assay demonstrated how AI could not just detect cancer, but quantify biomarker expression levels with unprecedented precision, directly informing treatment decisions for targeted therapies.

Meanwhile, in Boston, Dana-Farber Cancer Institute pathologist Dr. Catherine Wu has begun using AI to design personalized cancer vaccines. These vaccines rely on “neoantigens”—mutations unique to each tumor. Identifying them used to take months of laborious work. Now, AI can scan a patient’s genome, flag the best neoantigen candidates, and even simulate how T-cells would recognize them. Early melanoma trials show some patients achieving years of durable remission.

Microsatellite instability: AI sees the invisible

A 2025 breakthrough from Zhengzhou University demonstrated AI’s ability to predict microsatellite instability (MSI) directly from H&E-stained slides in gastroesophageal junction adenocarcinomas—a feat impossible for human pathologists. This matters immensely: MSI status determines eligibility for immunotherapy, but traditional testing requires additional tissue and time. The AI model achieved accuracy rates that could transform treatment timelines for thousands of patients.

AI Strengths and Weaknesses

  • AI’s Strength: In narrow diagnostic tasks like reading slides, CT scans, or mammograms, AI already outperforms human experts in accuracy, consistency, and speed. That’s not future speculation — it’s demonstrated repeatedly in benchmarks and clinical trials.

  • The Limitation: AI doesn’t “know” the patient. It doesn’t integrate medical history, lab results, family risk factors, or personal treatment goals. It analyzes inputs, but it doesn’t deliver holistic care.

Where humans still matter:

  • Integration: A doctor decides how to combine the AI’s findings with other clinical details.

  • Communication: Explaining results, discussing options.

  • Accountability: For now, only licensed professionals can legally sign off on treatment decisions.

AI is also extending beyond glass slides. It is now integrated into workflows that combine pathology, radiology, and liquid biopsy signals into unified dashboards. The next frontier isn’t siloed assistance, but whole-patient modeling—where every pixel, every sequence, every cell fragment contributes to a single, integrated clinical picture.

The market responds: explosive growth

The digital pathology market reflects this transformation. In 2025, equipment sales are projected to capture 56.3% of the market revenue, driven by whole-slide scanners and automated imaging systems from companies like Philips, Leica Biosystems, and Roche. The U.S. market alone is expected to grow at 12% annually through 2035, fueled by over 2 million new cancer diagnoses yearly and chronic pathologist shortages.

Major pharmaceutical companies are investing heavily. Johnson & Johnson’s MIA:BLC-FGFR algorithm can detect FGFR+ mutations in bladder cancer from standard H&E slides in minutes—bypassing the need for expensive molecular testing. This democratization of advanced diagnostics could be particularly transformative in resource-limited settings.

For the borderline cases—the moments where human error is most dangerous—AI has already proven itself superior. It catches what human eyes, especially tired ones, miss. It standardizes judgments that would otherwise vary from one pathologist to another. And it frees specialists to focus on confirmation and oversight rather than first-line detection.

The training revolution

Medical education is adapting rapidly. At the 2025 Digital Pathology Congress in London, educators presented new curricula integrating AI from day one of pathology training. Rather than viewing AI as a threat, the next generation of pathologists is learning to work symbiotically with these tools. As one resident noted, “It’s like having the world’s best attending physician looking over your shoulder 24/7.”

What current trends indicate

  • Near term (next five years): AI as decision support. Humans sign off on diagnoses, but AI quietly outperforms them on accuracy.

  • Medium term (5-10 years): AI does the bulk of reading, with humans validating edge cases or communicating results. The ratio shifts — far less human time per case.

  • Long term (10+ years): In some areas (like screening mammography), AI may operate nearly autonomously, with humans involved only in oversight, legal responsibility, or unusual exceptions.

The human element: enhanced, not replaced

Dr. Douglas Flora, Editor-in-Chief of AI in Precision Oncology, offers a compelling analogy: “Remember when GPS first arrived in our cars? The moment when ‘turn left in 500 feet’ replaced unfolding wrinkled maps and arguing over directions? We are experiencing a similar transformation in oncology.”

This doesn’t mean AI is perfect. It still has an error rate. But crucially, that rate is lower than that of humans. The safest system right now is therefore not human-alone or AI-alone, but AI first, with a human double-check. That flips the traditional order, but it’s the reality of where the technology stands today.

These advances, however, are not yet widely available. The software itself is relatively inexpensive and works with existing imaging systems, but adoption is slowed by other factors: hospitals have weak financial incentives to change current processes (advanced cancers are major revenue generators), physicians are reluctant to adopt technology that performs parts of their job better and at significantly lower cost, and reimbursement systems lag behind innovation. As a result, access today is concentrated in large academic and research centers, while other hospitals, even large urban ones, are slow to catch up.

Sooner or later, though, these technologies will be adopted, and for patients that means cancers caught when they are most easily treated. Fewer will be missed, fewer unnecessary procedures will be imposed on misdiagnosed patients, and fewer lives will be unnecessarily lost.

. . .

Our new book, “Surviving Cancer: Hope based on emerging medical science” is now live on Amazon and available to order here.

If you or a loved one is dealing with a diagnosis that you would like to learn more about, you can order the Leading Edge report on the latest medical science, customized to that specific diagnosis, here.

Email me at Emerging Cures (rod@emergingcures.org) if you’d like to talk.

You can read about my own cancer journey (My History With Terminal Cancer) here.