Let’s play a game: I take an everyday object, slice it into paper-thin cross sections, give you a handful of those cross sections, and ask you to identify the original object. Do you think you could identify a bowling pin this way? How about a head of broccoli? Michelangelo’s David? Believe it or not, a similar challenge is faced every day by thousands of pathologists who collectively inspect millions of tissue specimens for signs of cancer and other diseases. While precision cancer treatment options continue to advance at near-light speed, cancers today are commonly diagnosed using histological methods which haven’t changed much in the last hundred years!
Enter Dr. Jonathan Liu, a professor in the Departments of Mechanical Engineering, Bioengineering, and Laboratory Medicine and Pathology at the University of Washington, who has a vision for the future. Dr. Liu’s lab develops new optical imaging technologies for use in healthcare and diagnostics; in particular, the lab is pioneering a new imaging technology called Open-Top Light Sheet (OTLS) microscopy, which stands to revolutionize the field. So, what’s wrong with diagnosing cancer the ‘old-fashioned way’?
“Traditional histological approaches including hematoxylin and eosin (H&E) staining have been around for a long time because they’re accessible and fairly robust,” notes Dr. Liu, “but they have some notable limitations—the tissue sample must be destroyed in the process of sectioning, the procedure is fairly time- and labor-intensive, a very small fraction of the tissue is actually processed and visualized by pathologists, and it’s fundamentally difficult to get a sense for the three-dimensional structure of a tissue by looking only at two-dimensional slices.” Indeed, one can imagine more than a few scenarios where the constraint of looking at a few cross sections can be misleading (is it a head of broccoli, or a small tree?). The OTLS procedure—in which whole tissue specimens are treated with specific reagents to render them optically clear before being stained and imaged in 3D on a specially equipped microscope—addresses many of these limitations, but in doing so inadvertently creates a new one: data overload.
Here, the story is picked up by Dr. Lindsey Erion Barner, a former graduate student in the Liu Lab and first author of a recent publication in Modern Pathology describing her work. “Using OTLS for 3D pathology has a lot of advantages over traditional section-and-stain approaches, but it also means that pathologists now have to inspect an entire 3D tissue volume instead of a handful of cross sections per specimen,” Dr. Barner begins. Couple recent shortages of pathologists with ever-higher demand for their services, and you’ve got a classic big-data conundrum. “The lightbulb moment for us,” Barner continues, “was the realization that we could leverage recent advances in AI and deep learning to help pathologists navigate these dense 3D pathology datasets.” Collaborating with Dr. William Grady, a physician-scientist and expert in gastrointestinal cancers at Fred Hutch, the team set their sights on a specific problem known as Barrett’s esophagus (BE). BE is a medical condition in which the cells lining the lower esophagus become damaged due to persistent acid reflux. While not a malignancy itself, BE predisposes patients to developing esophageal cancer and is therefore closely monitored by repeated biopsies and histology throughout a patient’s life—the perfect test case for a streamlined, AI-assisted pathology pipeline.
First, the team enlisted the help of pathologist Dr. Deepti Reddi at the UW School of Medicine to carefully annotate neoplastic (potentially cancerous) regions in a collection of 2D cross sections from 3D BE tissue specimens obtained from the Gastrointestinal Center for Analytic Research and Exploratory Science at the UW. Then, Dr. Barner and colleagues used this annotated data to train an AI algorithm that employs a two-pronged approach: first, a specific type of neural network called a residual network, or ResNet, identifies neoplastic regions within each 2D slice of the tissue volume. Then, a different type of machine learning algorithm called a random forest classifier (RFC) uses this information to identify the top three 2D slices (ranked by ‘probability of containing neoplastic tissue’) for pathologist review.
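To make the two-stage triage idea concrete, here is a minimal sketch in Python. It is not the authors' code: the ResNet stage is replaced with simulated per-slice feature vectors (imagine summary statistics of a ResNet's per-region neoplasia scores), and a scikit-learn random forest then ranks slices by estimated probability of containing neoplastic tissue, surfacing the top three for review. All names and numbers here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stage 1 (stand-in): in the real pipeline, a ResNet scores neoplastic
# regions within each 2D slice. Here we simulate one feature vector per
# slice, e.g. summary statistics of those per-region scores.
n_slices, n_features = 200, 4
X = rng.random((n_slices, n_features))   # per-slice features (simulated)
y = (X[:, 0] > 0.7).astype(int)          # toy "contains neoplasia" labels

# Stage 2: a random forest classifier estimates, for each slice, the
# probability that it contains neoplastic tissue.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = clf.predict_proba(X)[:, 1]

# Triage: rank slices by probability and flag the top three for the
# reviewing pathologist.
top3 = np.argsort(probs)[::-1][:3]
print("Slices flagged for review:", top3.tolist())
```

The key design point this illustrates is that the AI only *ranks* slices; the diagnostic decision on the flagged slices remains with the pathologist.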
Drs. Barner and Liu are quick to point out the advantages of their approach, which they dub ‘AI-triaged’ 3D pathology: “Importantly, we’re not giving the AI full license to make medical decisions here—rather, we’re combining the throughput of AI with the expertise of a pathologist to form a mutually-beneficial relationship.” Beneficial, indeed: in a head-to-head trial, AI-triaged 3D pathology identified esophageal neoplasias which were missed using the traditional approach, all while reducing the workload per specimen for the reviewing pathologist. Finally, the team was quick to acknowledge the support of their colleagues at Fred Hutch and UW as well as the Cancer Consortium which enabled this proof-of-principle for human-AI interaction in the Big Data era. As Dr. Liu puts it, “we really couldn’t have done this work without outstanding collaborators, and we look forward to future collaborations as we improve upon this technology and apply it to other cancer types and diseases.”
The spotlighted research was funded by the National Institutes of Health, the National Science Foundation, the Prevent Cancer Foundation, the Cottrell Family Fund, the Evergreen Fund, and the Listwin Foundation.
Fred Hutch/University of Washington/Seattle Children’s Cancer Consortium members Drs. Jonathan Liu and William Grady contributed to this study.
Erion Barner, L. A., Gao, G., Reddi, D. M., Lan, L., Burke, W., Mahmood, F., Grady, W. M., & Liu, J. T. C. (2023). AI-Triaged 3D Pathology to Improve Detection of Esophageal Neoplasia While Reducing Pathologist Workloads. Modern Pathology, 36(12), 100322. https://doi.org/10.1016/j.modpat.2023.100322