ArXiv TLDR

Approaching human parity in the quality of automated organoid image segmentation

arXiv:2605.03053

Chase Cartwright, Gongbo Guo, Sai Teja Pusuluri, Christopher N. Mayhew, Mark Hester + 1 more

cs.CV · cond-mat.soft · q-bio.QM

TLDR

This paper introduces a composite AI method that combines the Segment Anything Model (SAM) with a domain-specific tool to achieve near human-level accuracy in automated organoid image segmentation.

Key contributions

  • Develops a composite method for automated organoid image segmentation using SAM and a specialized tool.
  • Evaluates the new method against existing tools and manual segmentation on iPSC-derived spheroids.
  • Demonstrates that the composite method provides consistent and accurate results, outperforming existing tools.
  • Achieves segmentation accuracy comparable to human inter-observer variability, approaching human parity.
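The paper's goal is to measure the size and shape of developing spheroids from segmentation masks. The digest does not list the exact descriptors used, but a minimal sketch of the kind of measurement involved — area, equivalent diameter, and circularity of a single binary mask, using only NumPy — might look like this (the function names and the pixel-size parameter `px_um` are illustrative, not from the paper):

```python
import numpy as np

def mask_perimeter(mask: np.ndarray) -> int:
    """Count boundary pixels: foreground pixels with at least one
    4-connected background neighbour (a simple perimeter estimate)."""
    padded = np.pad(mask.astype(bool), 1)       # pad with background
    core = padded[1:-1, 1:-1]
    all_fg_neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                         padded[1:-1, :-2] & padded[1:-1, 2:])
    return int(np.count_nonzero(core & ~all_fg_neighbours))

def shape_metrics(mask: np.ndarray, px_um: float = 1.0) -> dict:
    """Size/shape descriptors of one segmented spheroid mask."""
    area = np.count_nonzero(mask) * px_um ** 2
    eq_diam = 2.0 * np.sqrt(area / np.pi)       # circle of equal area
    perim = mask_perimeter(mask) * px_um
    circularity = 4.0 * np.pi * area / perim ** 2 if perim else 0.0
    return {"area": area,
            "equivalent_diameter": eq_diam,
            "circularity": circularity}

# Example: a filled disk of radius 20 px stands in for a spheroid mask
yy, xx = np.mgrid[:64, :64]
disk = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2
print(shape_metrics(disk))
```

A near-circular spheroid yields a circularity near 1; elongated or irregular masks score lower, which is one way morphological change over development can be tracked.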

Why it matters

Accurate organoid segmentation is crucial for studying disease and developing treatments. This method significantly improves automation, reducing manual effort and enabling more robust, high-throughput analysis of organoid development. It sets a new benchmark for automated analysis in this critical research area.

Original Abstract

Organoids are complex, three-dimensional, self-organizing cell cultures which manifest organ-like features and represent a powerful platform for studying human disease and developing treatment options. Organoid development is characterized by dynamic morphological and cellular organization, which mimics some aspects of organ development. To study these rapid changes over the course of organoid development, advanced imaging and analytical tools are critical to accurately monitor the trajectory of organoid growth and investigate disease processes. In this work, we focus on computer vision and machine learning techniques to automatically measure the size and shape of developing spheroids derived from induced pluripotent stem cells (iPSCs), which are typically the starting material for generating organoid cultures. To facilitate this task, we introduce a composite method that combines the Segment Anything Model (SAM), a general-purpose foundation model, with an existing domain-specific tool. This composite method is evaluated together with several existing tools by testing them on organoid image data and comparing with the results of manual image segmentation. We find that no single existing tool is able to segment the test images with sufficient accuracy across all test conditions, but the newly introduced composite method produces consistent and accurate results for all but a very small fraction of the most challenging images. Finally, we compare the accuracy of this method to the variability between manual segmentations by independent annotators (inter-observer variability) and find that by one measure it performs at the level of inter-observer variability and by others it performs very close to it.
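The digest does not name the measures used to compare automated output against manual segmentation and against inter-observer variability. The standard overlap metrics for this kind of comparison are Intersection over Union (IoU) and the Dice coefficient; a self-contained sketch (the specific metrics are an assumption, not confirmed by the paper):

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.count_nonzero(a | b)
    return np.count_nonzero(a & b) / union if union else 1.0

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    total = np.count_nonzero(a) + np.count_nonzero(b)
    return 2.0 * np.count_nonzero(a & b) / total if total else 1.0

# Two annotators' masks of the same spheroid, offset by a couple of pixels,
# stand in for inter-observer variability
yy, xx = np.mgrid[:64, :64]
m1 = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 400
m2 = ((yy - 34) ** 2 + (xx - 32) ** 2) <= 400
print(f"IoU={iou(m1, m2):.3f}  Dice={dice(m1, m2):.3f}")
```

Comparing the automated-vs-manual score distribution against the annotator-vs-annotator distribution is the usual way to argue a method has reached the level of inter-observer variability.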
