Research Unit DKFZ

As digitalization leads to increasing volumes of data, advancing reliable AI-based image analysis is key to creating scientific and societal impact.

Image Analysis & Validation (DKFZ): We focus on the downstream stages of the imaging pipeline, specifically the development, validation, and deployment of advanced AI-based methods for automated image analysis. These stages are critical for extracting high-level, domain-relevant information from complex imaging data. Our work addresses algorithmic challenges in interpreting, quantifying, and validating image-derived information, with the overarching goal of enabling trustworthy, robust, and generalizable AI-driven analysis.

[Figure: the imaging pipeline, highlighting the last two stages of the imaging workflow]

We conduct interdisciplinary research in three core areas:

(1) automated image analysis, where we work to enhance the robustness and generalizability of AI methods across diverse and imperfect real-world datasets;

(2) human-machine interaction, which aims to integrate humans as active participants in AI development and deployment to ensure transparency, trust, and safety; and

(3) validation and benchmarking, where we lead efforts to standardize validation practices and build tools that enable reproducible and transparent assessment of algorithm performance.

By addressing these pivotal stages in the imaging pipeline, our mission is to advance the reliability and impact of AI-based image analysis across scientific and societal applications.
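As a toy illustration of the metric-based validation referred to in core area (3): a common performance measure for segmentation algorithms is the Dice similarity coefficient, which compares a predicted mask against a reference annotation. The sketch below is purely illustrative (the names and the empty-mask convention are our assumptions, not the unit's actual tooling):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention assumed here: two empty masks count as a perfect match.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy example: a 4x4 predicted mask vs. a reference mask.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(dice_score(pred, target))  # 2*3 / (4+3) ≈ 0.857
```

Reproducible benchmarking then amounts to fixing the test data, the metric definition (including edge-case conventions such as empty masks), and the aggregation scheme before any algorithms are compared.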


Publications

Godau, P., Kalinowski, P., Christodoulou, E., Reinke, A., Tizabi, M., Ferrer, L., Jäger, P., & Maier-Hein, L. (2025). Navigating prevalence shifts in image analysis algorithm deployment. Medical Image Analysis, 102, 103504. https://doi.org/10.1016/j.media.2025.103504

Fischer, M., Neher, P., Schüffler, P., Ziegler, S., Xiao, S., Peretzke, R., Clunie, D., Ulrich, C., Baumgartner, M., Muckenhuber, A., Almeida, S. D., Götz, M., Kleesiek, J., Nolden, M., Braren, R., & Maier-Hein, K. (2025). Unlocking the potential of digital pathology: Novel baselines for compression. Journal of Pathology Informatics, 100421. https://doi.org/10.1016/j.jpi.2025.100421

Bassi, P. R. A. S., Li, W., Tang, Y., Isensee, F., Wang, Z., Chen, J., Chou, Y.-C., Kirchhoff, Y., Rokuss, M., Huang, Z., Ye, J., He, J., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K. H., Jaeger, P., Ye, Y., … Zhou, Z. (2025). Touchstone Benchmark: Are We on the Right Way for Evaluating AI Algorithms for Medical Segmentation? (arXiv:2411.03670). arXiv. https://doi.org/10.48550/arXiv.2411.03670

Klein, L., Lüth, C. T., Schlegel, U., Bungert, T. J., El-Assady, M., & Jäger, P. F. (2025). Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics (arXiv:2409.16756). arXiv. https://doi.org/10.48550/arXiv.2409.16756

Adler, T. J., Nölke, J.-H., Reinke, A., Tizabi, M. D., Gruber, S., Trofimova, D., Ardizzone, L., Jaeger, P. F., Buettner, F., Köthe, U., & Maier-Hein, L. (2025). Application-driven validation of posteriors in inverse problems. Medical Image Analysis, 101, 103474. https://doi.org/10.1016/j.media.2025.103474

Zimmerer, D., & Maier-Hein, K. (2025). Beyond Heatmaps: A Comparative Analysis of Metrics for Anomaly Localization in Medical Images. In C. H. Sudre, R. Mehta, C. Ouyang, C. Qin, M. Rakic, & W. M. Wells (Eds.), Uncertainty for Safe Utilization of Machine Learning in Medical Imaging (pp. 138–148). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-73158-7_13

Wald, T., Ulrich, C., Suprijadi, J., Nohel, M., Peretzke, R., & Maier-Hein, K. H. (2024). An OpenMind for 3D medical vision self-supervised learning (arXiv:2412.17041). arXiv. https://doi.org/10.48550/arXiv.2412.17041
atorType%22%3A%22author%22%2C%22firstName%22%3A%22Robin%22%2C%22lastName%22%3A%22Peretzke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%20H.%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22The%20field%20of%203D%20medical%20vision%20self-supervised%20learning%20lacks%20consistency%20and%20standardization.%20While%20many%20methods%20have%20been%20developed%20it%20is%20impossible%20to%20identify%20the%20current%20state-of-the-art%2C%20due%20to%20i%29%20varying%20and%20small%20pre-training%20datasets%2C%20ii%29%20varying%20architectures%2C%20and%20iii%29%20being%20evaluated%20on%20differing%20downstream%20datasets.%20In%20this%20paper%20we%20bring%20clarity%20to%20this%20field%20and%20lay%20the%20foundation%20for%20further%20method%20advancements%3A%20We%20a%29%20publish%20the%20largest%20publicly%20available%20pre-training%20dataset%20comprising%20114k%203D%20brain%20MRI%20volumes%20and%20b%29%20benchmark%20existing%20SSL%20methods%20under%20common%20architectures%20and%20c%29%20provide%20the%20code%20of%20our%20framework%20publicly%20to%20facilitate%20rapid%20adoption%20and%20reproduction.%20This%20pre-print%20%5C%5Ctextit%7Bonly%20describes%7D%20the%20dataset%20contribution%20%28a%29%3B%20Data%2C%20benchmark%2C%20and%20codebase%20will%20be%20made%20available%20shortly.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2412.17041%22%2C%22date%22%3A%222024-12-22%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2412.17041%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2412.17041%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-05-02T07%3A42%3A19Z%22%7D%7D%2C%7B%22key%22%3A%22JZKRTJHB%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wald%20et%20al.%22%2C%22parsedDate%22%3A%222024-12-02%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-
bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWald%2C%20T.%2C%20Ulrich%2C%20C.%2C%20Lukyanenko%2C%20S.%2C%20Goncharov%2C%20A.%2C%20Paderno%2C%20A.%2C%20Maerkisch%2C%20L.%2C%20J%26%23xE4%3Bger%2C%20P.%20F.%2C%20%26amp%3B%20Maier-Hein%2C%20K.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BRevisiting%20MAE%20pre-training%20for%203D%20medical%20image%20segmentation%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2410.23132%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2410.23132%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2410.23132%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Revisiting%20MAE%20pre-training%20for%203D%20medical%20image%20segmentation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tassilo%22%2C%22lastName%22%3A%22Wald%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Constantin%22%2C%22lastName%22%3A%22Ulrich%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stanislav%22%2C%22lastName%22%3A%22Lukyanenko%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andrei%22%2C%22lastName%22%3A%22Goncharov%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alberto%22%2C%22lastName%22%3A%22Paderno%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leander%22%2C%22lastName%22%3A%22Maerkisch%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22J%5Cu00e4ger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%2
2%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22Self-Supervised%20Learning%20%28SSL%29%20presents%20an%20exciting%20opportunity%20to%20unlock%20the%20potential%20of%20vast%2C%20untapped%20clinical%20datasets%2C%20for%20various%20downstream%20applications%20that%20suffer%20from%20the%20scarcity%20of%20labeled%20data.%20While%20SSL%20has%20revolutionized%20fields%20like%20natural%20language%20processing%20and%20computer%20vision%2C%20its%20adoption%20in%203D%20medical%20image%20computing%20has%20been%20limited%20by%20three%20key%20pitfalls%3A%20Small%20pre-training%20dataset%20sizes%2C%20architectures%20inadequate%20for%203D%20medical%20image%20analysis%2C%20and%20insufficient%20evaluation%20practices.%20In%20this%20paper%2C%20we%20address%20these%20issues%20by%20i%29%20leveraging%20a%20large-scale%20dataset%20of%2039k%203D%20brain%20MRI%20volumes%20and%20ii%29%20using%20a%20Residual%20Encoder%20U-Net%20architecture%20within%20the%20state-of-the-art%20nnU-Net%20framework.%20iii%29%20A%20robust%20development%20framework%2C%20incorporating%205%20development%20and%208%20testing%20brain%20MRI%20segmentation%20datasets%2C%20allowed%20performance-driven%20design%20decisions%20to%20optimize%20the%20simple%20concept%20of%20Masked%20Auto%20Encoders%20%28MAEs%29%20for%203D%20CNNs.%20The%20resulting%20model%20not%20only%20surpasses%20previous%20SSL%20methods%20but%20also%20outperforms%20the%20strong%20nnU-Net%20baseline%20by%20an%20average%20of%20approximately%203%20Dice%20points%20setting%20a%20new%20state-of-the-art.%20Our%20code%20and%20models%20are%20made%20available%20here.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2410.23132%22%2C%22date%22%3A%222024-12-02%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2410.23132%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2410.23132%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%22
2025-02-03T13%3A39%3A33Z%22%7D%7D%2C%7B%22key%22%3A%22RZKLTHUG%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ulrich%20et%20al.%22%2C%22parsedDate%22%3A%222024-11-29%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BUlrich%2C%20C.%2C%20Wald%2C%20T.%2C%20Tempus%2C%20E.%2C%20Rokuss%2C%20M.%2C%20Jaeger%2C%20P.%20F.%2C%20%26amp%3B%20Maier-Hein%2C%20K.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BRadioActive%3A%203D%20Radiological%20Interactive%20Segmentation%20Benchmark%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2411.07885%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2411.07885%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2411.07885%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22RadioActive%3A%203D%20Radiological%20Interactive%20Segmentation%20Benchmark%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Constantin%22%2C%22lastName%22%3A%22Ulrich%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tassilo%22%2C%22lastName%22%3A%22Wald%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Emily%22%2C%22lastName%22%3A%22Tempus%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maximilian%22%2C%22lastName%22%3A%22Rokuss%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22Jaeger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%2
2%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22Current%20interactive%20segmentation%20approaches%2C%20inspired%20by%20the%20success%20of%20META%26%23039%3Bs%20Segment%20Anything%20model%2C%20have%20achieved%20notable%20advancements%2C%20however%2C%20they%20come%20with%20substantial%20limitations%20that%20hinder%20their%20practical%20application%20in%203D%20radiological%20scenarios.%20These%20include%20unrealistic%20human%20interaction%20requirements%2C%20such%20as%20slice-by-slice%20operations%20for%202D%20models%20on%203D%20data%2C%20a%20lack%20of%20iterative%20interactive%20refinement%2C%20and%20insufficient%20evaluation%20experiments.%20These%20shortcomings%20prevent%20accurate%20assessment%20of%20model%20performance%20and%20lead%20to%20inconsistent%20outcomes%20across%20studies.%20The%20RadioActive%20benchmark%20overcomes%20these%20challenges%20by%20offering%20a%20comprehensive%20and%20reproducible%20evaluation%20of%20interactive%20segmentation%20methods%20in%20realistic%2C%20clinically%20relevant%20scenarios.%20It%20includes%20diverse%20datasets%2C%20target%20structures%2C%20and%20interactive%20segmentation%20methods%2C%20and%20provides%20a%20flexible%2C%20extendable%20codebase%20that%20allows%20seamless%20integration%20of%20new%20models%20and%20prompting%20strategies.%20We%20also%20introduce%20advanced%20prompting%20techniques%20to%20enable%202D%20models%20on%203D%20data%20by%20reducing%20the%20needed%20number%20of%20interaction%20steps%2C%20enabling%20a%20fair%20comparison.%20We%20show%20that%20surprisingly%20the%20performance%20of%20slice-wise%20prompted%20approaches%20can%20match%20native%203D%20methods%2C%20despite%20the%20domain%20gap.%20Our%20findings%20challenge%20the%20current%20literature%20and%20highlight%20that%20models%20not%20specifically%20trained%20on%20medical%20data%20can%20outperform%20the%20current%20specialized%20medical%20methods.%20By%20open-sourcing%20RadioActive%2C%20we%20invite%20the%20research%20co
mmunity%20to%20integrate%20their%20models%20and%20prompting%20techniques%2C%20ensuring%20continuous%20and%20transparent%20evaluation%20of%20interactive%20segmentation%20models%20in%203D%20medical%20imaging.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2411.07885%22%2C%22date%22%3A%222024-11-29%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2411.07885%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2411.07885%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A42%3A00Z%22%7D%7D%2C%7B%22key%22%3A%22TIA33XIZ%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kahl%20et%20al.%22%2C%22parsedDate%22%3A%222024-11-29%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKahl%2C%20K.-C.%2C%20Erkan%2C%20S.%2C%20Traub%2C%20J.%2C%20L%26%23xFC%3Bth%2C%20C.%20T.%2C%20Maier-Hein%2C%20K.%2C%20Maier-Hein%2C%20L.%2C%20%26amp%3B%20Jaeger%2C%20P.%20F.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BSURE-VQA%3A%20Systematic%20Understanding%20of%20Robustness%20Evaluation%20in%20Medical%20VQA%20Tasks%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2411.19688%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2411.19688%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2411.19688%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22SURE-VQA%3A%20Systematic%20Understanding%20of%20Robu
stness%20Evaluation%20in%20Medical%20VQA%20Tasks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kim-Celine%22%2C%22lastName%22%3A%22Kahl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Selen%22%2C%22lastName%22%3A%22Erkan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jeremias%22%2C%22lastName%22%3A%22Traub%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carsten%20T.%22%2C%22lastName%22%3A%22L%5Cu00fcth%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lena%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22Jaeger%22%7D%5D%2C%22abstractNote%22%3A%22Vision-Language%20Models%20%28VLMs%29%20have%20great%20potential%20in%20medical%20tasks%2C%20like%20Visual%20Question%20Answering%20%28VQA%29%2C%20where%20they%20could%20act%20as%20interactive%20assistants%20for%20both%20patients%20and%20clinicians.%20Yet%20their%20robustness%20to%20distribution%20shifts%20on%20unseen%20data%20remains%20a%20critical%20concern%20for%20safe%20deployment.%20Evaluating%20such%20robustness%20requires%20a%20controlled%20experimental%20setup%20that%20allows%20for%20systematic%20insights%20into%20the%20model%26%23039%3Bs%20behavior.%20However%2C%20we%20demonstrate%20that%20current%20setups%20fail%20to%20offer%20sufficiently%20thorough%20evaluations%2C%20limiting%20their%20ability%20to%20accurately%20assess%20model%20robustness.%20To%20address%20this%20gap%2C%20our%20work%20introduces%20a%20novel%20framework%2C%20called%20SURE-VQA%2C%20centered%20around%20three%20key%20requirements%20to%20overcome%20the%20current%20pitfalls%20and%20systematically%20analyze%20the%20robustness%20of%20VLMs%3A%201%29%20Since%20robustness%20on%20synthetic%20shifts%20does%20not%20necessarily%20translate%20to%20real-wo
rld%20shifts%2C%20robustness%20should%20be%20measured%20on%20real-world%20shifts%20that%20are%20inherent%20to%20the%20VQA%20data%3B%202%29%20Traditional%20token-matching%20metrics%20often%20fail%20to%20capture%20underlying%20semantics%2C%20necessitating%20the%20use%20of%20large%20language%20models%20%28LLMs%29%20for%20more%20accurate%20semantic%20evaluation%3B%203%29%20Model%20performance%20often%20lacks%20interpretability%20due%20to%20missing%20sanity%20baselines%2C%20thus%20meaningful%20baselines%20should%20be%20reported%20that%20allow%20assessing%20the%20multimodal%20impact%20on%20the%20VLM.%20To%20demonstrate%20the%20relevance%20of%20this%20framework%2C%20we%20conduct%20a%20study%20on%20the%20robustness%20of%20various%20fine-tuning%20methods%20across%20three%20medical%20datasets%20with%20four%20different%20types%20of%20distribution%20shifts.%20Our%20study%20reveals%20several%20important%20findings%3A%201%29%20Sanity%20baselines%20that%20do%20not%20utilize%20image%20data%20can%20perform%20surprisingly%20well%3B%202%29%20We%20confirm%20LoRA%20as%20the%20best-performing%20PEFT%20method%3B%203%29%20No%20PEFT%20method%20consistently%20outperforms%20others%20in%20terms%20of%20robustness%20to%20shifts.%20Code%20is%20provided%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2FIML-DKFZ%5C%2Fsure-vqa.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2411.19688%22%2C%22date%22%3A%222024-11-29%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2411.19688%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2411.19688%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A15%3A49Z%22%7D%7D%2C%7B%22key%22%3A%22ETUKTI3R%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Cimini%20et%20al.%22%2C%22parsedDate%22%3A%222024-10-30%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3
D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BCimini%2C%20B.%20A.%2C%20Bankhead%2C%20P.%2C%20D%26%23x2019%3BAntuono%2C%20R.%2C%20Fazeli%2C%20E.%2C%20Fernandez-Rodriguez%2C%20J.%2C%20Fuster-Barcel%26%23xF3%3B%2C%20C.%2C%20Haase%2C%20R.%2C%20Jambor%2C%20H.%20K.%2C%20Jones%2C%20M.%20L.%2C%20Jug%2C%20F.%2C%20Klemm%2C%20A.%20H.%2C%20Kreshuk%2C%20A.%2C%20Marcotti%2C%20S.%2C%20Martins%2C%20G.%20G.%2C%20McArdle%2C%20S.%2C%20Miura%2C%20K.%2C%20Mu%26%23xF1%3Boz-Barrutia%2C%20A.%2C%20Murphy%2C%20L.%20C.%2C%20Nelson%2C%20M.%20S.%2C%20%26%23x2026%3B%20Eliceiri%2C%20K.%20W.%20%282024%29.%20%26lt%3Bb%26gt%3BThe%20crucial%20role%20of%20bioimage%20analysts%20in%20scientific%20research%20and%20publication%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BJournal%20of%20Cell%20Science%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B137%26lt%3B%5C%2Fi%26gt%3B%2820%29%2C%20jcs262322.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1242%5C%2Fjcs.262322%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1242%5C%2Fjcs.262322%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20crucial%20role%20of%20bioimage%20analysts%20in%20scientific%20research%20and%20publication%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Beth%20A.%22%2C%22lastName%22%3A%22Cimini%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peter%22%2C%22lastName%22%3A%22Bankhead%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rocco%22%2C%22lastName%22%3A%22D%27Antuono%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Elnaz%22%2C%22lastName%22%3A%22Fazeli%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julia%22%2C%22lastName%22%3A%
22Fernandez-Rodriguez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Caterina%22%2C%22lastName%22%3A%22Fuster-Barcel%5Cu00f3%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Haase%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Helena%20Klara%22%2C%22lastName%22%3A%22Jambor%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%20L.%22%2C%22lastName%22%3A%22Jones%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Florian%22%2C%22lastName%22%3A%22Jug%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anna%20H.%22%2C%22lastName%22%3A%22Klemm%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anna%22%2C%22lastName%22%3A%22Kreshuk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefania%22%2C%22lastName%22%3A%22Marcotti%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gabriel%20G.%22%2C%22lastName%22%3A%22Martins%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sara%22%2C%22lastName%22%3A%22McArdle%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kota%22%2C%22lastName%22%3A%22Miura%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arrate%22%2C%22lastName%22%3A%22Mu%5Cu00f1oz-Barrutia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%20C.%22%2C%22lastName%22%3A%22Murphy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%20S.%22%2C%22lastName%22%3A%22Nelson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Simon%20F.%22%2C%22lastName%22%3A%22N%5Cu00f8rrelykke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Perrine%22%2C%22lastName%22%3A%22Paul-Gilloteaux%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thomas%22%2C%22lastName%22%3A%22Pengo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Joanna%20W.%22%2C%22lastNa
me%22%3A%22Pylv%5Cu00e4n%5Cu00e4inen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lior%22%2C%22lastName%22%3A%22Pytowski%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arianna%22%2C%22lastName%22%3A%22Ravera%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annika%22%2C%22lastName%22%3A%22Reinke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yousr%22%2C%22lastName%22%3A%22Rekik%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Caterina%22%2C%22lastName%22%3A%22Strambio-De-Castillia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Daniel%22%2C%22lastName%22%3A%22Th%5Cu00e9di%5Cu00e9%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Virginie%22%2C%22lastName%22%3A%22Uhlmann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oliver%22%2C%22lastName%22%3A%22Umney%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Wiggins%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kevin%20W.%22%2C%22lastName%22%3A%22Eliceiri%22%7D%5D%2C%22abstractNote%22%3A%22Bioimage%20analysis%20%28BIA%29%2C%20a%20crucial%20discipline%20in%20biological%20research%2C%20overcomes%20the%20limitations%20of%20subjective%20analysis%20in%20microscopy%20through%20the%20creation%20and%20application%20of%20quantitative%20and%20reproducible%20methods.%20The%20establishment%20of%20dedicated%20BIA%20support%20within%20academic%20institutions%20is%20vital%20to%20improving%20research%20quality%20and%20efficiency%20and%20can%20significantly%20advance%20scientific%20discovery.%20However%2C%20a%20lack%20of%20training%20resources%2C%20limited%20career%20paths%20and%20insufficient%20recognition%20of%20the%20contributions%20made%20by%20bioimage%20analysts%20prevent%20the%20full%20realization%20of%20this%20potential.%20This%20Perspective%20%5Cu2013%20the%20result%20of%20the%20recent%20The%20Company%20of%20Biol
ogists%20Workshop%20%5Cu2018Effectively%20Communicating%20Bioimage%20Analysis%5Cu2019%2C%20which%20aimed%20to%20summarize%20the%20global%20BIA%20landscape%2C%20categorize%20obstacles%20and%20offer%20possible%20solutions%20%5Cu2013%20proposes%20strategies%20to%20bring%20about%20a%20cultural%20shift%20towards%20recognizing%20the%20value%20of%20BIA%20by%20standardizing%20tools%2C%20improving%20training%20and%20encouraging%20formal%20credit%20for%20contributions.%20We%20also%20advocate%20for%20increased%20funding%2C%20standardized%20practices%20and%20enhanced%20collaboration%2C%20and%20we%20conclude%20with%20a%20call%20to%20action%20for%20all%20stakeholders%20to%20join%20efforts%20in%20advancing%20BIA.%22%2C%22date%22%3A%222024-10-30%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1242%5C%2Fjcs.262322%22%2C%22ISSN%22%3A%220021-9533%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1242%5C%2Fjcs.262322%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A06%3A02Z%22%7D%7D%2C%7B%22key%22%3A%22HKUF82MH%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Klein%20et%20al.%22%2C%22parsedDate%22%3A%222024-10-23%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKlein%2C%20L.%2C%20Ziegler%2C%20S.%2C%20Gerst%2C%20F.%2C%20Morgenroth%2C%20Y.%2C%20Gotkowski%2C%20K.%2C%20Sch%26%23xF6%3Bniger%2C%20E.%2C%20Heni%2C%20M.%2C%20Kipke%2C%20N.%2C%20Friedland%2C%20D.%2C%20Seiler%2C%20A.%2C%20Geibelt%2C%20E.%2C%20Yamazaki%2C%20H.%2C%20H%26%23xE4%3Bring%2C%20H.%20U.%2C%20Wagner%2C%20S.%2C%20Nadalin%2C%20S.%2C%20K%26%23xF6%3Bnigsrainer%2C%20A.%2C%20Mihaljevi%26%23x107%3B%2C%20A.%20L.%2C%20Hartmann%2C%20D.%2C%20Fend%2C%20F.%2C%20%26%23x2026%3B%20Wagner%2C%20R.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb
%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BExplainable%20AI-based%20analysis%20of%20human%20pancreas%20sections%20identifies%20traits%20of%20type%202%20diabetes%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B.%20medRxiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1101%5C%2F2024.10.23.24315937%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1101%5C%2F2024.10.23.24315937%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Explainable%20AI-based%20analysis%20of%20human%20pancreas%20sections%20identifies%20traits%20of%20type%202%20diabetes%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Klein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22S.%22%2C%22lastName%22%3A%22Ziegler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22F.%22%2C%22lastName%22%3A%22Gerst%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Y.%22%2C%22lastName%22%3A%22Morgenroth%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22K.%22%2C%22lastName%22%3A%22Gotkowski%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22E.%22%2C%22lastName%22%3A%22Sch%5Cu00f6niger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%22%2C%22lastName%22%3A%22Heni%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22N.%22%2C%22lastName%22%3A%22Kipke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Friedland%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%22%2C%22lastName%22%3A%22Seiler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22E.%22%2C%22lastName%22%3A%22Geibelt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%2
2H.%22%2C%22lastName%22%3A%22Yamazaki%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22H.%20U.%22%2C%22lastName%22%3A%22H%5Cu00e4ring%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22S.%22%2C%22lastName%22%3A%22Wagner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22S.%22%2C%22lastName%22%3A%22Nadalin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%22%2C%22lastName%22%3A%22K%5Cu00f6nigsrainer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%20L.%22%2C%22lastName%22%3A%22Mihaljevi%5Cu0107%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Hartmann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22F.%22%2C%22lastName%22%3A%22Fend%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Aust%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J.%22%2C%22lastName%22%3A%22Weitz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jumpertz-von%20Schwartzenberg%22%2C%22lastName%22%3A%22R%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%22%2C%22lastName%22%3A%22Distler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22K.%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%20L.%22%2C%22lastName%22%3A%22Birkenfeld%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22S.%22%2C%22lastName%22%3A%22Ullrich%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22P.%22%2C%22lastName%22%3A%22J%5Cu00e4ger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22F.%22%2C%22lastName%22%3A%22Isensee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%22%2C%22lastName%22%3A%22Solimena%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Wagner%22%7D%5D%2C%22abstractNote%22%3A%22Type%2
medRxiv preprint (2024, October 23). https://doi.org/10.1101/2024.10.23.24315937

Klein, L., Amara, K., Lüth, C. T., Strobelt, H., El-Assady, M., & Jaeger, P. F. (2024, October 12). Interactive Semantic Interventions for VLMs: A Human-in-the-Loop Investigation of VLM Failure. NeurIPS Safe Generative AI Workshop 2024. https://openreview.net/forum?id=3kMucCYhYN

Mahmutoglu, M. A., Rastogi, A., Schell, M., Foltyn-Dumitru, M., Baumgartner, M., Maier-Hein, K. H., Deike-Hofmann, K., Radbruch, A., Bendszus, M., Brugnara, G., & Vollmuth, P. (2024). Deep learning-based defacing tool for CT angiography: CTA-DEFACE. European Radiology Experimental, 8(1), 111. https://doi.org/10.1186/s41747-024-00510-9

Amara, K., Klein, L., Lüth, C., Jäger, P., Strobelt, H., & El-Assady, M. (2024). Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities (arXiv:2410.01690). arXiv. https://doi.org/10.48550/arXiv.2410.01690

Christodoulou, E., Reinke, A., Houhou, R., Kalinowski, P., Erkan, S., Sudre, C. H., Burgos, N., Boutaj, S., Loizillon, S., Solal, M., Rieke, N., Cheplygina, V., Antonelli, M., Mayer, L. D., Tizabi, M. D., Cardoso, M. J., Simpson, A., Jäger, P. F., Kopp-Schneider, A., … Maier-Hein, L. (2024). Confidence intervals uncovered: Are we ready for real-world medical imaging AI? (arXiv:2409.17763). arXiv. https://doi.org/10.48550/arXiv.2409.17763

Denner, S., Bujotzek, M., Bounias, D., Zimmerer, D., Stock, R., Jäger, P. F., & Maier-Hein, K. (2024). Visual Prompt Engineering for Medical Vision Language Models in Radiology (arXiv:2408.15802). arXiv. https://doi.org/10.48550/arXiv.2408.15802

Dorent, R., Khajavi, R., Idris, T., Ziegler, E., Somarouthu, B., Jacene, H., LaCasce, A., Deissler, J., Ehrhardt, J., Engelson, S., Fischer, S. M., Gu, Y., Handels, H., Kasai, S., Kondo, S., Maier-Hein, K., Schnabel, J. A., Wang, G., Wang, L., … Kapur, T. (2024). LNQ 2023 challenge: Benchmark of weakly-supervised techniques for mediastinal lymph node quantification (arXiv:2408.10069). arXiv. https://doi.org/10.48550/arXiv.2408.10069
ongoing%20need%20for%20high-quality%2C%20fully%20annotated%20data%20to%20achieve%20higher%20segmentation%20performance.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2408.10069%22%2C%22date%22%3A%222024-08-19%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2408.10069%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2408.10069%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T12%3A15%3A02Z%22%7D%7D%2C%7B%22key%22%3A%22APM2V2SP%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22D.%20Almeida%20et%20al.%22%2C%22parsedDate%22%3A%222024-08-07%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BD.%20Almeida%2C%20S.%2C%20Norajitra%2C%20T.%2C%20L%26%23xFC%3Bth%2C%20C.%20T.%2C%20Wald%2C%20T.%2C%20Weru%2C%20V.%2C%20Nolden%2C%20M.%2C%20J%26%23xE4%3Bger%2C%20P.%20F.%2C%20von%20Stackelberg%2C%20O.%2C%20Heu%26%23xDF%3Bel%2C%20C.%20P.%2C%20Weinheimer%2C%20O.%2C%20Biederer%2C%20J.%2C%20Kauczor%2C%20H.-U.%2C%20%26amp%3B%20Maier-Hein%2C%20K.%20%282024%29.%20%26lt%3Bb%26gt%3BHow%20do%20deep-learning%20models%20generalize%20across%20populations%3F%20Cross-ethnicity%20generalization%20of%20COPD%20detection%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BInsights%20into%20Imaging%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B15%26lt%3B%5C%2Fi%26gt%3B%281%29%2C%20198.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1186%5C%2Fs13244-024-01781-x%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1186%5C%2Fs13244-024-01781-x%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22j
ournalArticle%22%2C%22title%22%3A%22How%20do%20deep-learning%20models%20generalize%20across%20populations%3F%20Cross-ethnicity%20generalization%20of%20COPD%20detection%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Silvia%22%2C%22lastName%22%3A%22D.%20Almeida%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tobias%22%2C%22lastName%22%3A%22Norajitra%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carsten%20T.%22%2C%22lastName%22%3A%22L%5Cu00fcth%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tassilo%22%2C%22lastName%22%3A%22Wald%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vivienn%22%2C%22lastName%22%3A%22Weru%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marco%22%2C%22lastName%22%3A%22Nolden%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22J%5Cu00e4ger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oyunbileg%22%2C%22lastName%22%3A%22von%20Stackelberg%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Claus%20Peter%22%2C%22lastName%22%3A%22Heu%5Cu00dfel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oliver%22%2C%22lastName%22%3A%22Weinheimer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J%5Cu00fcrgen%22%2C%22lastName%22%3A%22Biederer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hans-Ulrich%22%2C%22lastName%22%3A%22Kauczor%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22To%20evaluate%20the%20performance%20and%20potential%20biases%20of%20deep-learning%20models%20in%20detecting%20chronic%20obstructive%20pulmonary%20disease%20%28COPD%29%20on%20chest%20CT%20scans%20across%20different%20ethnic%20groups%2C%20specifically%20non-Hispanic%20White%20%28NHW%29%20and%20African%20American%20%28AA%29%20p
opulations.%22%2C%22date%22%3A%222024-08-07%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1186%5C%2Fs13244-024-01781-x%22%2C%22ISSN%22%3A%221869-4101%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1186%5C%2Fs13244-024-01781-x%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A25%3A56Z%22%7D%7D%2C%7B%22key%22%3A%22E94GVQWR%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Klabunde%20et%20al.%22%2C%22parsedDate%22%3A%222024-08-01%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKlabunde%2C%20M.%2C%20Wald%2C%20T.%2C%20Schumacher%2C%20T.%2C%20Maier-Hein%2C%20K.%2C%20Strohmaier%2C%20M.%2C%20%26amp%3B%20Lemmerich%2C%20F.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BReSi%3A%20A%20Comprehensive%20Benchmark%20for%20Representational%20Similarity%20Measures%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2408.00531%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2408.00531%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2408.00531%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22ReSi%3A%20A%20Comprehensive%20Benchmark%20for%20Representational%20Similarity%20Measures%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Max%22%2C%22lastName%22%3A%22Klabunde%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tassilo%22%2C%22lastName%22%3A%22Wald%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstNam
e%22%3A%22Tobias%22%2C%22lastName%22%3A%22Schumacher%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Markus%22%2C%22lastName%22%3A%22Strohmaier%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Florian%22%2C%22lastName%22%3A%22Lemmerich%22%7D%5D%2C%22abstractNote%22%3A%22Measuring%20the%20similarity%20of%20different%20representations%20of%20neural%20architectures%20is%20a%20fundamental%20task%20and%20an%20open%20research%20challenge%20for%20the%20machine%20learning%20community.%20This%20paper%20presents%20the%20first%20comprehensive%20benchmark%20for%20evaluating%20representational%20similarity%20measures%20based%20on%20well-defined%20groundings%20of%20similarity.%20The%20representational%20similarity%20%28ReSi%29%20benchmark%20consists%20of%20%28i%29%20six%20carefully%20designed%20tests%20for%20similarity%20measures%2C%20%28ii%29%2023%20similarity%20measures%2C%20%28iii%29%20eleven%20neural%20network%20architectures%2C%20and%20%28iv%29%20six%20datasets%2C%20spanning%20over%20the%20graph%2C%20language%2C%20and%20vision%20domains.%20The%20benchmark%20opens%20up%20several%20important%20avenues%20of%20research%20on%20representational%20similarity%20that%20enable%20novel%20explorations%20and%20applications%20of%20neural%20architectures.%20We%20demonstrate%20the%20utility%20of%20the%20ReSi%20benchmark%20by%20conducting%20experiments%20on%20various%20neural%20network%20architectures%2C%20real%20world%20datasets%20and%20similarity%20measures.%20All%20components%20of%20the%20benchmark%20are%20publicly%20available%20and%20thereby%20facilitate%20systematic%20reproduction%20and%20production%20of%20research%20results.%20The%20benchmark%20is%20extensible%2C%20future%20research%20can%20build%20on%20and%20further%20expand%20it.%20We%20believe%20that%20the%20ReSi%20benchmark%20can%20serve%20as%20a%20sound%20platform%20catalyzing%20future%2
0research%20that%20aims%20to%20systematically%20evaluate%20existing%20and%20explore%20novel%20ways%20of%20comparing%20representations%20of%20neural%20architectures.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2408.00531%22%2C%22date%22%3A%222024-08-01%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2408.00531%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2408.00531%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A38%3A46Z%22%7D%7D%2C%7B%22key%22%3A%22MY4ZUGZ6%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ozkan%20et%20al.%22%2C%22parsedDate%22%3A%222024-08%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BOzkan%2C%20S.%2C%20Selver%2C%20M.%20A.%2C%20Baydar%2C%20B.%2C%20Kavur%2C%20A.%20E.%2C%20Candemir%2C%20C.%2C%20%26amp%3B%20Akar%2C%20G.%20B.%20%282024%29.%20%26lt%3Bb%26gt%3BCross-Modal%20Learning%20via%20Adversarial%20Loss%20and%20Covariate%20Shift%20for%20Enhanced%20Liver%20Segmentation%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BIEEE%20Transactions%20on%20Emerging%20Topics%20in%20Computational%20Intelligence%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B8%26lt%3B%5C%2Fi%26gt%3B%284%29%2C%202723%26%23x2013%3B2735.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTETCI.2024.3369868%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTETCI.2024.3369868%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Cross-Modal%20Learning%20via%20Adversarial%20Loss%20and%20Covariate%20Shift
%20for%20Enhanced%20Liver%20Segmentation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Savas%22%2C%22lastName%22%3A%22Ozkan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%20Alper%22%2C%22lastName%22%3A%22Selver%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bora%22%2C%22lastName%22%3A%22Baydar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ali%20Emre%22%2C%22lastName%22%3A%22Kavur%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cemre%22%2C%22lastName%22%3A%22Candemir%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gozde%20Bozdagi%22%2C%22lastName%22%3A%22Akar%22%7D%5D%2C%22abstractNote%22%3A%22Despite%20the%20widespread%20use%20of%20deep%20learning%20methods%20for%20semantic%20segmentation%20from%20single%20imaging%20modalities%2C%20their%20performance%20for%20exploiting%20multi-domain%20data%20still%20needs%20to%20improve.%20However%2C%20the%20decision-making%20process%20in%20radiology%20is%20often%20guided%20by%20data%20from%20multiple%20sources%2C%20such%20as%20pre-operative%20evaluation%20of%20living%20donated%20liver%20transplantation%20donors.%20In%20such%20cases%2C%20cross-modality%20performances%20of%20deep%20models%20become%20more%20important.%20Unfortunately%2C%20the%20domain-dependency%20of%20existing%20techniques%20limits%20their%20clinical%20acceptability%2C%20primarily%20confining%20their%20performance%20to%20individual%20domains.%20This%20issue%20is%20further%20formulated%20as%20a%20multi-source%20domain%20adaptation%20problem%2C%20which%20is%20an%20emerging%20field%20mainly%20due%20to%20the%20diverse%20pattern%20characteristics%20exhibited%20from%20cross-modality%20data.%20This%20paper%20presents%20a%20novel%20method%20that%20can%20learn%20robust%20representations%20from%20unpaired%20cross-modal%20%28CT-MR%29%20data%20by%20encapsulating%20distinct%20and%20shared%20patterns%20from%20multiple%20modalities.%20In%20ou
r%20solution%2C%20the%20covariate%20shift%20property%20is%20maintained%20with%20structural%20modifications%20in%20our%20architecture.%20Also%2C%20an%20adversarial%20loss%20is%20adopted%20to%20boost%20the%20representation%20capacity.%20As%20a%20result%2C%20sparse%20and%20rich%20representations%20are%20obtained.%20Another%20superiority%20of%20our%20model%20is%20that%20no%20information%20about%20modalities%20is%20needed%20at%20the%20training%20or%20inference%20phase.%20Tests%20on%20unpaired%20CT%20and%20MR%20liver%20data%20obtained%20from%20the%20cross-modality%20task%20of%20the%20CHAOS%20grand%20challenge%20demonstrate%20that%20our%20approach%20achieves%20state-of-the-art%20results%20with%20a%20large%20margin%20in%20both%20individual%20metrics%20and%20overall%20scores.%22%2C%22date%22%3A%222024-08%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTETCI.2024.3369868%22%2C%22ISSN%22%3A%222471-285X%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F10463530%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A14%3A13Z%22%7D%7D%2C%7B%22key%22%3A%22YMWVH5AM%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22R%5Cu00e4dsch%20et%20al.%22%2C%22parsedDate%22%3A%222024-07-26%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BR%26%23xE4%3Bdsch%2C%20T.%2C%20Reinke%2C%20A.%2C%20Weru%2C%20V.%2C%20Tizabi%2C%20M.%20D.%2C%20Heller%2C%20N.%2C%20Isensee%2C%20F.%2C%20Kopp-Schneider%2C%20A.%2C%20%26amp%3B%20Maier-Hein%2C%20L.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BQuality%20Assured%3A%20Rethinking%20Annotation%20Strategies%20in%20Imaging%20AI%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%
26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2407.17596%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2407.17596%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2407.17596%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Quality%20Assured%3A%20Rethinking%20Annotation%20Strategies%20in%20Imaging%20AI%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tim%22%2C%22lastName%22%3A%22R%5Cu00e4dsch%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annika%22%2C%22lastName%22%3A%22Reinke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vivienn%22%2C%22lastName%22%3A%22Weru%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Minu%20D.%22%2C%22lastName%22%3A%22Tizabi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nicholas%22%2C%22lastName%22%3A%22Heller%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fabian%22%2C%22lastName%22%3A%22Isensee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annette%22%2C%22lastName%22%3A%22Kopp-Schneider%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lena%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20does%20not%20describe%20a%20novel%20method.%20Instead%2C%20it%20studies%20an%20essential%20foundation%20for%20reliable%20benchmarking%20and%20ultimately%20real-world%20application%20of%20AI-based%20image%20analysis%3A%20generating%20high-quality%20reference%20annotations.%20Previous%20research%20has%20focused%20on%20crowdsourcing%20as%20a%20means%20of%20outsourcing%20annotations.%20However%2C%20little%20attention%20has%20so%20far%20been%20given%20to%20annotation%20companies%2C%20specifically%20regarding%20their%20internal%20quality%20assu
rance%20%28QA%29%20processes.%20Therefore%2C%20our%20aim%20is%20to%20evaluate%20the%20influence%20of%20QA%20employed%20by%20annotation%20companies%20on%20annotation%20quality%20and%20devise%20methodologies%20for%20maximizing%20data%20annotation%20efficacy.%20Based%20on%20a%20total%20of%2057%2C648%20instance%20segmented%20images%20obtained%20from%20a%20total%20of%20924%20annotators%20and%2034%20QA%20workers%20from%20four%20annotation%20companies%20and%20Amazon%20Mechanical%20Turk%20%28MTurk%29%2C%20we%20derived%20the%20following%20insights%3A%20%281%29%20Annotation%20companies%20perform%20better%20both%20in%20terms%20of%20quantity%20and%20quality%20compared%20to%20the%20widely%20used%20platform%20MTurk.%20%282%29%20Annotation%20companies%26%23039%3B%20internal%20QA%20only%20provides%20marginal%20improvements%2C%20if%20any.%20However%2C%20improving%20labeling%20instructions%20instead%20of%20investing%20in%20QA%20can%20substantially%20boost%20annotation%20performance.%20%283%29%20The%20benefit%20of%20internal%20QA%20depends%20on%20specific%20image%20characteristics.%20Our%20work%20could%20enable%20researchers%20to%20derive%20substantially%20more%20value%20from%20a%20fixed%20annotation%20budget%20and%20change%20the%20way%20annotation%20companies%20conduct%20internal%20QA.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2407.17596%22%2C%22date%22%3A%222024-07-26%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2407.17596%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2407.17596%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A07%3A10Z%22%7D%7D%2C%7B%22key%22%3A%22H4A828IZ%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Isensee%20et%20al.%22%2C%22parsedDate%22%3A%222024-07-25%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-hei
ght%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BIsensee%2C%20F.%2C%20Wald%2C%20T.%2C%20Ulrich%2C%20C.%2C%20Baumgartner%2C%20M.%2C%20Roy%2C%20S.%2C%20Maier-Hein%2C%20K.%2C%20%26amp%3B%20Jaeger%2C%20P.%20F.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BnnU-Net%20Revisited%3A%20A%20Call%20for%20Rigorous%20Validation%20in%203D%20Medical%20Image%20Segmentation%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2404.09556%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2404.09556%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2404.09556%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22nnU-Net%20Revisited%3A%20A%20Call%20for%20Rigorous%20Validation%20in%203D%20Medical%20Image%20Segmentation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fabian%22%2C%22lastName%22%3A%22Isensee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tassilo%22%2C%22lastName%22%3A%22Wald%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Constantin%22%2C%22lastName%22%3A%22Ulrich%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Baumgartner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Saikat%22%2C%22lastName%22%3A%22Roy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22Jaeger%22%7D%5D%2C%22abstractNote%22%3A%22The%20release%20of%20nnU-Net%20marked%20a%20paradigm%20shift%20
in%203D%20medical%20image%20segmentation%2C%20demonstrating%20that%20a%20properly%20configured%20U-Net%20architecture%20could%20still%20achieve%20state-of-the-art%20results.%20Despite%20this%2C%20the%20pursuit%20of%20novel%20architectures%2C%20and%20the%20respective%20claims%20of%20superior%20performance%20over%20the%20U-Net%20baseline%2C%20continued.%20In%20this%20study%2C%20we%20demonstrate%20that%20many%20of%20these%20recent%20claims%20fail%20to%20hold%20up%20when%20scrutinized%20for%20common%20validation%20shortcomings%2C%20such%20as%20the%20use%20of%20inadequate%20baselines%2C%20insufficient%20datasets%2C%20and%20neglected%20computational%20resources.%20By%20meticulously%20avoiding%20these%20pitfalls%2C%20we%20conduct%20a%20thorough%20and%20comprehensive%20benchmarking%20of%20current%20segmentation%20methods%20including%20CNN-based%2C%20Transformer-based%2C%20and%20Mamba-based%20approaches.%20In%20contrast%20to%20current%20beliefs%2C%20we%20find%20that%20the%20recipe%20for%20state-of-the-art%20performance%20is%201%29%20employing%20CNN-based%20U-Net%20models%2C%20including%20ResNet%20and%20ConvNeXt%20variants%2C%202%29%20using%20the%20nnU-Net%20framework%2C%20and%203%29%20scaling%20models%20to%20modern%20hardware%20resources.%20These%20results%20indicate%20an%20ongoing%20innovation%20bias%20towards%20novel%20architectures%20in%20the%20field%20and%20underscore%20the%20need%20for%20more%20stringent%20validation%20standards%20in%20the%20quest%20for%20scientific%20progress.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2404.09556%22%2C%22date%22%3A%222024-07-25%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2404.09556%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2404.09556%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T12%3A22%3A19Z%22%7D%7D%2C%7B%22key%22%3A%22G8Z8WVJM%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22
creatorSummary%22%3A%22Almeida%20et%20al.%22%2C%22parsedDate%22%3A%222024-07%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BAlmeida%2C%20S.%20D.%2C%20Norajitra%2C%20T.%2C%20L%26%23xFC%3Bth%2C%20C.%20T.%2C%20Wald%2C%20T.%2C%20Weru%2C%20V.%2C%20Nolden%2C%20M.%2C%20J%26%23xE4%3Bger%2C%20P.%20F.%2C%20von%20Stackelberg%2C%20O.%2C%20Heu%26%23xDF%3Bel%2C%20C.%20P.%2C%20Weinheimer%2C%20O.%2C%20Biederer%2C%20J.%2C%20Kauczor%2C%20H.-U.%2C%20%26amp%3B%20Maier-Hein%2C%20K.%20%282024%29.%20%26lt%3Bb%26gt%3BPrediction%20of%20disease%20severity%20in%20COPD%3A%20a%20deep%20learning%20approach%20for%20anomaly-based%20quantitative%20assessment%20of%20chest%20CT%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BEuropean%20Radiology%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B34%26lt%3B%5C%2Fi%26gt%3B%287%29%2C%204379%26%23x2013%3B4392.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00330-023-10540-3%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00330-023-10540-3%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Prediction%20of%20disease%20severity%20in%20COPD%3A%20a%20deep%20learning%20approach%20for%20anomaly-based%20quantitative%20assessment%20of%20chest%20CT%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Silvia%20D.%22%2C%22lastName%22%3A%22Almeida%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tobias%22%2C%22lastName%22%3A%22Norajitra%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carsten%20T.%22%2C%22lastName%22%3A%22L%5Cu00fcth%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22fi
rstName%22%3A%22Tassilo%22%2C%22lastName%22%3A%22Wald%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vivienn%22%2C%22lastName%22%3A%22Weru%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marco%22%2C%22lastName%22%3A%22Nolden%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22J%5Cu00e4ger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oyunbileg%22%2C%22lastName%22%3A%22von%20Stackelberg%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Claus%20Peter%22%2C%22lastName%22%3A%22Heu%5Cu00dfel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oliver%22%2C%22lastName%22%3A%22Weinheimer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J%5Cu00fcrgen%22%2C%22lastName%22%3A%22Biederer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hans-Ulrich%22%2C%22lastName%22%3A%22Kauczor%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22OBJECTIVES%3A%20To%20quantify%20regional%20manifestations%20related%20to%20COPD%20as%20anomalies%20from%20a%20modeled%20distribution%20of%20normal-appearing%20lung%20on%20chest%20CT%20using%20a%20deep%20learning%20%28DL%29%20approach%2C%20and%20to%20assess%20its%20potential%20to%20predict%20disease%20severity.%5CnMATERIALS%20AND%20METHODS%3A%20Paired%20inspiratory%5C%2Fexpiratory%20CT%20and%20clinical%20data%20from%20COPDGene%20and%20COSYCONET%20cohort%20studies%20were%20included.%20COPDGene%20data%20served%20as%20training%5C%2Fvalidation%5C%2Ftest%20data%20sets%20%28N%20%3D%203144%5C%2F786%5C%2F1310%29%20and%20COSYCONET%20as%20external%20test%20set%20%28N%20%3D%20446%29.%20To%20differentiate%20low-risk%20%28healthy%5C%2Fminimal%20disease%2C%20%5BGOLD%200%5D%29%20from%20COPD%20patients%20%28GOLD%201-4%29%2C%20the%20self-supervised%20DL%20model%20learned%20semantic%20information%20from%20
50%20%5Cu00d7%2050%20%5Cu00d7%2050%20voxel%20samples%20from%20segmented%20intact%20lungs.%20An%20anomaly%20detection%20approach%20was%20trained%20to%20quantify%20lung%20abnormalities%20related%20to%20COPD%2C%20as%20regional%20deviations.%20Four%20supervised%20DL%20models%20were%20run%20for%20comparison.%20The%20clinical%20and%20radiological%20predictive%20power%20of%20the%20proposed%20anomaly%20score%20was%20assessed%20using%20linear%20mixed%20effects%20models%20%28LMM%29.%5CnRESULTS%3A%20The%20proposed%20approach%20achieved%20an%20area%20under%20the%20curve%20of%2084.3%20%5Cu00b1%200.3%20%28p%20%26lt%3B%200.001%29%20for%20COPDGene%20and%2076.3%20%5Cu00b1%200.6%20%28p%20%26lt%3B%200.001%29%20for%20COSYCONET%2C%20outperforming%20supervised%20models%20even%20when%20including%20only%20inspiratory%20CT.%20Anomaly%20scores%20significantly%20improved%20fitting%20of%20LMM%20for%20predicting%20lung%20function%2C%20health%20status%2C%20and%20quantitative%20CT%20features%20%28emphysema%5C%2Fair%20trapping%3B%20p%20%26lt%3B%200.001%29.%20Higher%20anomaly%20scores%20were%20significantly%20associated%20with%20exacerbations%20for%20both%20cohorts%20%28p%20%26lt%3B%200.001%29%20and%20greater%20dyspnea%20scores%20for%20COPDGene%20%28p%20%26lt%3B%200.001%29.%5CnCONCLUSION%3A%20Quantifying%20heterogeneous%20COPD%20manifestations%20as%20anomaly%20offers%20advantages%20over%20supervised%20methods%20and%20was%20found%20to%20be%20predictive%20for%20lung%20function%20impairment%20and%20morphology%20deterioration.%5CnCLINICAL%20RELEVANCE%20STATEMENT%3A%20Using%20deep%20learning%2C%20lung%20manifestations%20of%20COPD%20can%20be%20identified%20as%20deviations%20from%20normal-appearing%20chest%20CT%20and%20attributed%20an%20anomaly%20score%20which%20is%20consistent%20with%20decreased%20pulmonary%20function%2C%20emphysema%2C%20and%20air%20trapping.%5CnKEY%20POINTS%3A%20%5Cu2022%20A%20self-supervised%20DL%20anomaly%20detection%20method%20discriminated%20low-risk%20individuals%20and%20COPD%20sub
Fischer, M., Neher, P., Wald, T., Almeida, S. D., Xiao, S., Schüffler, P., Braren, R., Götz, M., Muckenhuber, A., Kleesiek, J., Nolden, M., & Maier-Hein, K. (2024). Learned Image Compression for HE-stained Histopathological Images via Stain Deconvolution (arXiv:2406.12623). arXiv. https://doi.org/10.48550/arXiv.2406.12623

Floca, R., Bohn, J., Haux, C., Wiestler, B., Zöllner, F. G., Reinke, A., Weiß, J., Nolden, M., Albert, S., Persigehl, T., Norajitra, T., Baeßler, B., Dewey, M., Braren, R., Büchert, M., Fallenberg, E. M., Galldiks, N., Gerken, A., Götz, M., … Bamberg, F. (2024). Radiomics workflow definition & challenges - German priority program 2177 consensus statement on clinically applied radiomics. Insights into Imaging, 15(1), 124. https://doi.org/10.1186/s13244-024-01704-w

Mais, L., Hirsch, P., Managan, C., Kandarpa, R., Rumberger, J. L., Reinke, A., Maier-Hein, L., Ihrke, G., & Kainmueller, D. (2024). FISBe: A Real-World Benchmark Dataset for Instance Segmentation of Long-Range thin Filamentous Structures. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22249–22259. https://doi.org/10.1109/CVPR52733.2024.02100

Yang, L., Kumar, P., Liu, Q., Chen, P., Li, C., Lüth, C., Krämer, L., Gabriel, C., Jeridi, A., Piraud, M., Stoeger, T., Staab-Weijnitz, C. A., Jäger, P., Rehberg, M., Isensee, F., & Schmid, O. (2024). Fresh Perspectives on Lung Morphometry and Pulmonary Drug Delivery: AI-powered 3D Imaging in Healthy and Fibrotic Murine Lungs. In B30. Scarred for Life: Translational Research in Interstitial Abnormalities and Lung Fibrosis (Vol. 1–299, pp. A3242–A3242). American Thoracic Society. https://doi.org/10.1164/ajrccm-conference.2024.209.1_MeetingAbstracts.A3242

Zimmerer, D., & Maier-Hein, K. (2024, April 27). Revisiting Anomaly Localization Metrics. Medical Imaging with Deep Learning. https://openreview.net/forum?id=wGEDqSex3q

Denner, S., Zimmerer, D., Bounias, D., Bujotzek, M., Xiao, S., Kausch, L., Schader, P., Penzkofer, T., Jäger, P. F., & Maier-Hein, K. (2024). Leveraging Foundation Models for Content-Based Medical Image Retrieval in Radiology (arXiv:2403.06567). arXiv. https://doi.org/10.48550/arXiv.2403.06567

Gotkowski, K., Lüth, C., Jäger, P. F., Ziegler, S., Krämer, L., Denner, S., Xiao, S., Disch, N., Maier-Hein, K. H., & Isensee, F. (2024). Embarrassingly Simple Scribble Supervision for 3D Medical Segmentation (arXiv:2403.12834). arXiv. https://doi.org/10.48550/arXiv.2403.12834
23xE4%3Bdsch%2C%20T.%2C%20Sudre%2C%20C.%20H.%2C%20Acion%2C%20L.%2C%20Antonelli%2C%20M.%2C%20Arbel%2C%20T.%2C%20Bakas%2C%20S.%2C%20Benis%2C%20A.%2C%20Buettner%2C%20F.%2C%20Cardoso%2C%20M.%20J.%2C%20Cheplygina%2C%20V.%2C%20Chen%2C%20J.%2C%20Christodoulou%2C%20E.%2C%20Cimini%2C%20B.%20A.%2C%20%26%23x2026%3B%20Maier-Hein%2C%20L.%20%282024%29.%20%26lt%3Bb%26gt%3BUnderstanding%20metric-related%20pitfalls%20in%20image%20analysis%20validation%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BNature%20Methods%26lt%3B%5C%2Fi%26gt%3B%2C%201%26%23x2013%3B13.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1038%5C%2Fs41592-023-02150-0%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1038%5C%2Fs41592-023-02150-0%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Understanding%20metric-related%20pitfalls%20in%20image%20analysis%20validation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annika%22%2C%22lastName%22%3A%22Reinke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Minu%20D.%22%2C%22lastName%22%3A%22Tizabi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Baumgartner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Matthias%22%2C%22lastName%22%3A%22Eisenmann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Doreen%22%2C%22lastName%22%3A%22Heckmann-N%5Cu00f6tzel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%20Emre%22%2C%22lastName%22%3A%22Kavur%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tim%22%2C%22lastName%22%3A%22R%5Cu00e4dsch%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carole%20H.%22%2C%22lastName%22%3A%22Sudre%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName
%22%3A%22Acion%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michela%22%2C%22lastName%22%3A%22Antonelli%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tal%22%2C%22lastName%22%3A%22Arbel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Spyridon%22%2C%22lastName%22%3A%22Bakas%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arriel%22%2C%22lastName%22%3A%22Benis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Florian%22%2C%22lastName%22%3A%22Buettner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%20Jorge%22%2C%22lastName%22%3A%22Cardoso%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Veronika%22%2C%22lastName%22%3A%22Cheplygina%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianxu%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Evangelia%22%2C%22lastName%22%3A%22Christodoulou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Beth%20A.%22%2C%22lastName%22%3A%22Cimini%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Keyvan%22%2C%22lastName%22%3A%22Farahani%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Luciana%22%2C%22lastName%22%3A%22Ferrer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Adrian%22%2C%22lastName%22%3A%22Galdran%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bram%22%2C%22lastName%22%3A%22van%20Ginneken%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ben%22%2C%22lastName%22%3A%22Glocker%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Patrick%22%2C%22lastName%22%3A%22Godau%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Daniel%20A.%22%2C%22lastName%22%3A%22Hashimoto%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%20M.%22%2C%22lastName%22%3A%22Hoffman%22%7D%2C%7B%22creatorType%22%
3A%22author%22%2C%22firstName%22%3A%22Merel%22%2C%22lastName%22%3A%22Huisman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fabian%22%2C%22lastName%22%3A%22Isensee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pierre%22%2C%22lastName%22%3A%22Jannin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Charles%20E.%22%2C%22lastName%22%3A%22Kahn%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dagmar%22%2C%22lastName%22%3A%22Kainmueller%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bernhard%22%2C%22lastName%22%3A%22Kainz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexandros%22%2C%22lastName%22%3A%22Karargyris%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jens%22%2C%22lastName%22%3A%22Kleesiek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Florian%22%2C%22lastName%22%3A%22Kofler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thijs%22%2C%22lastName%22%3A%22Kooi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annette%22%2C%22lastName%22%3A%22Kopp-Schneider%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michal%22%2C%22lastName%22%3A%22Kozubek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anna%22%2C%22lastName%22%3A%22Kreshuk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tahsin%22%2C%22lastName%22%3A%22Kurc%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bennett%20A.%22%2C%22lastName%22%3A%22Landman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Geert%22%2C%22lastName%22%3A%22Litjens%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amin%22%2C%22lastName%22%3A%22Madani%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anne%20L.%22%2C%22last
Name%22%3A%22Martel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Erik%22%2C%22lastName%22%3A%22Meijering%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bjoern%22%2C%22lastName%22%3A%22Menze%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Karel%20G.%20M.%22%2C%22lastName%22%3A%22Moons%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Henning%22%2C%22lastName%22%3A%22M%5Cu00fcller%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Brennan%22%2C%22lastName%22%3A%22Nichyporuk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Felix%22%2C%22lastName%22%3A%22Nickel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jens%22%2C%22lastName%22%3A%22Petersen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Susanne%20M.%22%2C%22lastName%22%3A%22Rafelski%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nasir%22%2C%22lastName%22%3A%22Rajpoot%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mauricio%22%2C%22lastName%22%3A%22Reyes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%20A.%22%2C%22lastName%22%3A%22Riegler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nicola%22%2C%22lastName%22%3A%22Rieke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julio%22%2C%22lastName%22%3A%22Saez-Rodriguez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Clara%20I.%22%2C%22lastName%22%3A%22S%5Cu00e1nchez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shravya%22%2C%22lastName%22%3A%22Shetty%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ronald%20M.%22%2C%22lastName%22%3A%22Summers%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Abdel%20A.%22%2C%22lastName%22%3A%22Taha%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aleksei%22%2C%22lastName%22%3A%22Tiulpin%22%7D%2
C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sotirios%20A.%22%2C%22lastName%22%3A%22Tsaftaris%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ben%22%2C%22lastName%22%3A%22Van%20Calster%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ga%5Cu00ebl%22%2C%22lastName%22%3A%22Varoquaux%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ziv%20R.%22%2C%22lastName%22%3A%22Yaniv%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22J%5Cu00e4ger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lena%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%5D%2C%22abstractNote%22%3A%22Validation%20metrics%20are%20key%20for%20tracking%20scientific%20progress%20and%20bridging%20the%20current%20chasm%20between%20artificial%20intelligence%20research%20and%20its%20translation%20into%20practice.%20However%2C%20increasing%20evidence%20shows%20that%2C%20particularly%20in%20image%20analysis%2C%20metrics%20are%20often%20chosen%20inadequately.%20Although%20taking%20into%20account%20the%20individual%20strengths%2C%20weaknesses%20and%20limitations%20of%20validation%20metrics%20is%20a%20critical%20prerequisite%20to%20making%20educated%20choices%2C%20the%20relevant%20knowledge%20is%20currently%20scattered%20and%20poorly%20accessible%20to%20individual%20researchers.%20Based%20on%20a%20multistage%20Delphi%20process%20conducted%20by%20a%20multidisciplinary%20expert%20consortium%20as%20well%20as%20extensive%20community%20feedback%2C%20the%20present%20work%20provides%20a%20reliable%20and%20comprehensive%20common%20point%20of%20access%20to%20information%20on%20pitfalls%20related%20to%20validation%20metrics%20in%20image%20analysis.%20Although%20focused%20on%20biomedical%20image%20analysis%2C%20the%20addressed%20pitfalls%20generalize%20across%20application%20domains%20and%20are%20categorized%20according%20to%20a%20newly%20created%2C%20domain-agnostic%20taxonomy.%20The%20work%20serves%
20to%20enhance%20global%20comprehension%20of%20a%20key%20topic%20in%20image%20analysis%20validation.%22%2C%22date%22%3A%222024-02-12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1038%5C%2Fs41592-023-02150-0%22%2C%22ISSN%22%3A%221548-7105%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.nature.com%5C%2Farticles%5C%2Fs41592-023-02150-0%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T08%3A36%3A30Z%22%7D%7D%2C%7B%22key%22%3A%22KWZ58JMU%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Marinov%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-09%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMarinov%2C%20Z.%2C%20J%26%23xE4%3Bger%2C%20P.%20F.%2C%20Egger%2C%20J.%2C%20Kleesiek%2C%20J.%2C%20%26amp%3B%20Stiefelhagen%2C%20R.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BDeep%20Interactive%20Segmentation%20of%20Medical%20Images%3A%20A%20Systematic%20Review%20and%20Taxonomy%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2311.13964%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2311.13964%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2311.13964%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Deep%20Interactive%20Segmentation%20of%20Medical%20Images%3A%20A%20Systematic%20Review%20and%20Taxonomy%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zdravko%22%2C%22lastName%22%3A%22Marinov%22%7D%2C%7B%22creatorType%22%3A%22author%22
%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22J%5Cu00e4ger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jan%22%2C%22lastName%22%3A%22Egger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jens%22%2C%22lastName%22%3A%22Kleesiek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rainer%22%2C%22lastName%22%3A%22Stiefelhagen%22%7D%5D%2C%22abstractNote%22%3A%22Interactive%20segmentation%20is%20a%20crucial%20research%20area%20in%20medical%20image%20analysis%20aiming%20to%20boost%20the%20efficiency%20of%20costly%20annotations%20by%20incorporating%20human%20feedback.%20This%20feedback%20takes%20the%20form%20of%20clicks%2C%20scribbles%2C%20or%20masks%20and%20allows%20for%20iterative%20refinement%20of%20the%20model%20output%20so%20as%20to%20efficiently%20guide%20the%20system%20towards%20the%20desired%20behavior.%20In%20recent%20years%2C%20deep%20learning-based%20approaches%20have%20propelled%20results%20to%20a%20new%20level%20causing%20a%20rapid%20growth%20in%20the%20field%20with%20121%20methods%20proposed%20in%20the%20medical%20imaging%20domain%20alone.%20In%20this%20review%2C%20we%20provide%20a%20structured%20overview%20of%20this%20emerging%20field%20featuring%20a%20comprehensive%20taxonomy%2C%20a%20systematic%20review%20of%20existing%20methods%2C%20and%20an%20in-depth%20analysis%20of%20current%20practices.%20Based%20on%20these%20contributions%2C%20we%20discuss%20the%20challenges%20and%20opportunities%20in%20the%20field.%20For%20instance%2C%20we%20find%20that%20there%20is%20a%20severe%20lack%20of%20comparison%20across%20methods%20which%20needs%20to%20be%20tackled%20by%20standardized%20baselines%20and%20benchmarks.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2311.13964%22%2C%22date%22%3A%222024-01-09%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2311.13964%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2311.13964%22%2C%22la
nguage%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-04T12%3A44%3A05Z%22%7D%7D%2C%7B%22key%22%3A%22XG53EIDF%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lamm%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-05%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLamm%2C%20L.%2C%20Zufferey%2C%20S.%2C%20Righetto%2C%20R.%20D.%2C%20Wietrzynski%2C%20W.%2C%20Yamauchi%2C%20K.%20A.%2C%20Burt%2C%20A.%2C%20Liu%2C%20Y.%2C%20Zhang%2C%20H.%2C%20Martinez-Sanchez%2C%20A.%2C%20Ziegler%2C%20S.%2C%20Isensee%2C%20F.%2C%20Schnabel%2C%20J.%20A.%2C%20Engel%2C%20B.%20D.%2C%20%26amp%3B%20Peng%2C%20T.%20%282024%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BMemBrain%20v2%3A%20an%20end-to-end%20tool%20for%20the%20analysis%20of%20membranes%20in%20cryo-electron%20tomography%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B.%20bioRxiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1101%5C%2F2024.01.05.574336%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1101%5C%2F2024.01.05.574336%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22MemBrain%20v2%3A%20an%20end-to-end%20tool%20for%20the%20analysis%20of%20membranes%20in%20cryo-electron%20tomography%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Lamm%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Simon%22%2C%22lastName%22%3A%22Zufferey%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ricardo
%20D.%22%2C%22lastName%22%3A%22Righetto%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wojciech%22%2C%22lastName%22%3A%22Wietrzynski%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kevin%20A.%22%2C%22lastName%22%3A%22Yamauchi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alister%22%2C%22lastName%22%3A%22Burt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ye%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hanyi%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Antonio%22%2C%22lastName%22%3A%22Martinez-Sanchez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sebastian%22%2C%22lastName%22%3A%22Ziegler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fabian%22%2C%22lastName%22%3A%22Isensee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julia%20A.%22%2C%22lastName%22%3A%22Schnabel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Benjamin%20D.%22%2C%22lastName%22%3A%22Engel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tingying%22%2C%22lastName%22%3A%22Peng%22%7D%5D%2C%22abstractNote%22%3A%22MemBrain%20v2%20is%20a%20deep%20learning-enabled%20program%20aimed%20at%20the%20efficient%20analysis%20of%20membranes%20in%20cryo-electron%20tomography%20%28cryo-ET%29.%20The%20final%20v2%20release%20of%20MemBrain%20will%20comprise%20three%20main%20modules%3A%201%29%20MemBrain-seg%2C%20which%20provides%20automated%20membrane%20segmentation%2C%202%29%20MemBrain-pick%2C%20which%20provides%20automated%20picking%20of%20particles%20along%20segmented%20membranes%2C%20and%203%29%20MemBrain-stats%2C%20which%20provides%20quantitative%20statistics%20of%20particle%20distributions%20and%20membrane%20morphometrics.%5CnThis%20initial%20version%20of%20the%20manuscript%20is%20focused%20on%20the%20beta%20release%20of%20MemBrain-seg%2C%
20which%20combines%20iterative%20training%20with%20diverse%20data%20and%20specialized%20Fourier-based%20data%20augmentations.%20These%20augmentations%20are%20specifically%20designed%20to%20enhance%20the%20tool%5Cu2019s%20adaptability%20to%20a%20variety%20of%20tomographic%20data%20and%20address%20common%20challenges%20in%20cryo-ET%20analysis.%20A%20key%20feature%20of%20MemBrain-seg%20is%20the%20implementation%20of%20the%20Surface-Dice%20loss%20function%2C%20which%20improves%20the%20network%5Cu2019s%20focus%20on%20membrane%20connectivity%20and%20allows%20for%20the%20effective%20incorporation%20of%20manual%20annotations%20from%20different%20sources.%20This%20function%20is%20beneficial%20in%20handling%20the%20variability%20inherent%20in%20membrane%20structures%20and%20annotations.%20Our%20ongoing%20collaboration%20with%20the%20cryo-ET%20community%20plays%20an%20important%20role%20in%20continually%20improving%20MemBrain%20v2%20with%20a%20wide%20array%20of%20training%20data.%20This%20collaborative%20approach%20ensures%20that%20MemBrain%20v2%20remains%20attuned%20to%20the%20field%5Cu2019s%20needs%2C%20enhancing%20its%20robustness%20and%20generalizability%20across%20different%20types%20of%20tomographic%20data.%5CnThe%20current%20version%20of%20MemBrain-seg%20is%20available%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fteamtomo%5C%2Fmembrainseg%2C%20and%20the%20predecessor%20of%20MemBrain-pick%20%28also%20called%20MemBrain%20v1%29%20is%20deposited%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2FCellArchLab%5C%2FMemBrain.%20This%20preprint%20will%20be%20updated%20concomitantly%20with%20the%20code%20until%20the%20three%20integrated%20modules%20of%20MemBrain%20v2%20are%20complete.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22bioRxiv%22%2C%22archiveID%22%3A%22%22%2C%22date%22%3A%222024-01-05%22%2C%22DOI%22%3A%2210.1101%5C%2F2024.01.05.574336%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.biorxiv.org%5C%2Fcontent%5C%2F10.1101%5C%2F2024.01.05.574336v1%22%2C%
22language%22%3A%22en%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-02-15T10%3A23%3A29Z%22%7D%7D%2C%7B%22key%22%3A%229RDRBGBX%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Gotkowski%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGotkowski%2C%20K.%2C%20Gupta%2C%20S.%2C%20Godinho%2C%20J.%20R.%20A.%2C%20Tochtrop%2C%20C.%20G.%20S.%2C%20Maier-Hein%2C%20K.%20H.%2C%20%26amp%3B%20Isensee%2C%20F.%20%282024%29.%20%26lt%3Bb%26gt%3BParticleSeg3D%3A%20A%20scalable%20out-of-the-box%20deep%20learning%20segmentation%20solution%20for%20individual%20particle%20characterization%20from%20micro%20CT%20images%20in%20mineral%20processing%20and%20recycling%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BPowder%20Technology%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B434%26lt%3B%5C%2Fi%26gt%3B%2C%20119286.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1016%5C%2Fj.powtec.2023.119286%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1016%5C%2Fj.powtec.2023.119286%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22ParticleSeg3D%3A%20A%20scalable%20out-of-the-box%20deep%20learning%20segmentation%20solution%20for%20individual%20particle%20characterization%20from%20micro%20CT%20images%20in%20mineral%20processing%20and%20recycling%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Karol%22%2C%22lastName%22%3A%22Gotkowski%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shuvam%22%2C%22lastName%22%3A%22Gupta%22%7D%2C%7B%22creatorType%22%3A%22auth
or%22%2C%22firstName%22%3A%22Jose%20R.A.%22%2C%22lastName%22%3A%22Godinho%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Camila%20G.S.%22%2C%22lastName%22%3A%22Tochtrop%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%20H.%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fabian%22%2C%22lastName%22%3A%22Isensee%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%2202%5C%2F2024%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.powtec.2023.119286%22%2C%22ISSN%22%3A%2200325910%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0032591023010690%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-11-15T11%3A48%3A17Z%22%7D%7D%2C%7B%22key%22%3A%226VM4VV5K%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bounias%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBounias%2C%20D.%2C%20Baumgartner%2C%20M.%2C%20Neher%2C%20P.%2C%20Kovacs%2C%20B.%2C%20Floca%2C%20R.%2C%20Jaeger%2C%20P.%20F.%2C%20Kapsner%2C%20L.%20A.%2C%20Eberle%2C%20J.%2C%20Hadler%2C%20D.%2C%20Laun%2C%20F.%2C%20Ohlmeyer%2C%20S.%2C%20Maier-Hein%2C%20K.%20H.%2C%20%26amp%3B%20Bickelhaupt%2C%20S.%20%282024%29.%20%26lt%3Bb%26gt%3BAbstract%3A%20Object%20Detection%20for%20Breast%20Diffusion-weighted%20Imaging%26lt%3B%5C%2Fb%26gt%3B.%20In%20A.%20Maier%2C%20T.%20M.%20Deserno%2C%20H.%20Handels%2C%20K.%20Maier-Hein%2C%20C.%20Palm%2C%20%26amp%3B%20T.%20Tolxdorff%20%28Eds.%29%2C%20%26lt%3Bi%26gt%3BBildverarbeitung%20f%26%23xFC%3Br%20die%20Medizin%202024%26lt%3B%5C%2Fi%26gt%3B%20%28pp.%20334%26%23x2013%3B334%29.%20Springer%20Fachmedien%20Wiesbaden.%20htt
ps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-658-44037-4_84%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22bookSection%22%2C%22title%22%3A%22Abstract%3A%20Object%20Detection%20for%20Breast%20Diffusion-weighted%20Imaging%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Andreas%22%2C%22lastName%22%3A%22Maier%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Thomas%20M.%22%2C%22lastName%22%3A%22Deserno%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Heinz%22%2C%22lastName%22%3A%22Handels%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Klaus%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Christoph%22%2C%22lastName%22%3A%22Palm%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Thomas%22%2C%22lastName%22%3A%22Tolxdorff%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dimitrios%22%2C%22lastName%22%3A%22Bounias%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Baumgartner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peter%22%2C%22lastName%22%3A%22Neher%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Balint%22%2C%22lastName%22%3A%22Kovacs%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ralf%22%2C%22lastName%22%3A%22Floca%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22Jaeger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%20A.%22%2C%22lastName%22%3A%22Kapsner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jessica%22%2C%22lastName%22%3A%22Eberle%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dominique%22%2C%22lastName%22%3A%22Hadler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Frederik%22%2C%22lastNa
me%22%3A%22Laun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sabine%22%2C%22lastName%22%3A%22Ohlmeyer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus%20H.%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sebastian%22%2C%22lastName%22%3A%22Bickelhaupt%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22bookTitle%22%3A%22Bildverarbeitung%20f%5Cu00fcr%20die%20Medizin%202024%22%2C%22date%22%3A%222024%22%2C%22language%22%3A%22en%22%2C%22ISBN%22%3A%22978-3-658-44036-7%20978-3-658-44037-4%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flink.springer.com%5C%2F10.1007%5C%2F978-3-658-44037-4_84%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T13%3A37%3A50Z%22%7D%7D%2C%7B%22key%22%3A%227UBFAH7X%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bounias%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBounias%2C%20D.%2C%20Baumgartner%2C%20M.%2C%20Neher%2C%20P.%2C%20Kovacs%2C%20B.%2C%20Floca%2C%20R.%2C%20Jaeger%2C%20P.%20F.%2C%20Kapsner%2C%20L.%20A.%2C%20Eberle%2C%20J.%2C%20Hadler%2C%20D.%2C%20Laun%2C%20F.%2C%20Ohlmeyer%2C%20S.%2C%20Maier-Hein%2C%20K.%20H.%2C%20%26amp%3B%20Bickelhaupt%2C%20S.%20%282024%29.%20%26lt%3Bb%26gt%3BAbstract%3A%20Object%20Detection%20for%20Breast%20Diffusion-weighted%20Imaging%26lt%3B%5C%2Fb%26gt%3B.%20In%20A.%20Maier%2C%20T.%20M.%20Deserno%2C%20H.%20Handels%2C%20K.%20Maier-Hein%2C%20C.%20Palm%2C%20%26amp%3B%20T.%20Tolxdorff%20%28Eds.%29%2C%20%26lt%3Bi%26gt%3BBildverarbeitung%20f%26%23xFC%3Br%20die%20Medizin%202024%26lt%3B%5C%2Fi%26gt%3B%20%28pp.%20334%26%23x2013%3B334%29.%20Springer%20Fachmedien.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIUR
Bounias, D., Baumgartner, M., Neher, P., Kovacs, B., Floca, R., Jaeger, P. F., Kapsner, L. A., Eberle, J., Hadler, D., Laun, F., Ohlmeyer, S., Maier-Hein, K. H., & Bickelhaupt, S. (2024). Abstract: Object Detection for Breast Diffusion-weighted Imaging. In A. Maier, T. M. Deserno, H. Handels, K. Maier-Hein, C. Palm, & T. Tolxdorff (Eds.), Bildverarbeitung für die Medizin 2024. Springer Fachmedien. https://doi.org/10.1007/978-3-658-44037-4_84

Almeida, S. D., Norajitra, T., Lüth, C. T., Wald, T., Weru, V., Nolden, M., Jäger, P. F., von Stackelberg, O., Heußel, C. P., Weinheimer, O., Biederer, J., Kauczor, H.-U., & Maier-Hein, K. (2024). Capturing COPD heterogeneity: anomaly detection and parametric response mapping comparison for phenotyping on chest computed tomography. Frontiers in Medicine, 11, 1360706. https://doi.org/10.3389/fmed.2024.1360706

Roy, S., Koehler, G., Baumgartner, M., Ulrich, C., Isensee, F., Jaeger, P. F., & Maier-Hein, K. (2024). Abstract: 3D Medical Image Segmentation with Transformer-based Scaling of ConvNets. In A. Maier, T. M. Deserno, H. Handels, K. Maier-Hein, C. Palm, & T. Tolxdorff (Eds.), Bildverarbeitung für die Medizin 2024 (p. 79). Springer Fachmedien. https://doi.org/10.1007/978-3-658-44037-4_23

Traub, J., Bungert, T. J., Lüth, C. T., Baumgartner, M., Maier-Hein, K. H., Maier-Hein, L., & Jaeger, P. F. (2024). Overcoming Common Flaws in the Evaluation of Selective Classification Systems. arXiv. https://doi.org/10.48550/ARXIV.2407.01032

Koehler, G., Wald, T., Ulrich, C., Zimmerer, D., Jaeger, P. F., Franke, J. K., Kohl, S., Isensee, F., & Maier-Hein, K. H. (2024). RecycleNet: Latent Feature Recycling Leads to Iterative Decision Refinement. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 810–818. https://openaccess.thecvf.com/content/WACV2024/html/Kohler_RecycleNet_Latent_Feature_Recycling_Leads_to_Iterative_Decision_Refinement_WACV_2024_paper.html

Kahl, K.-C., Lüth, C. T., Zenk, M., Maier-Hein, K., & Jaeger, P. F. (2024). ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation. https://doi.org/10.48550/ARXIV.2401.08501

Li, J., Zhou, Z., Yang, J., Pepe, A., Gsaxner, C., Luijten, G., Qu, C., Zhang, T., Chen, X., Li, W., Wodzinski, M., Friedrich, P., Xie, K., Jin, Y., Ambigapathy, N., Nasca, E., Solak, N., Melito, G. M., Vu, V. D., … Egger, J. (2023). MedShapeNet – A Large-Scale Dataset of 3D Medical Shapes for Computer Vision (arXiv:2308.16139). arXiv. https://doi.org/10.48550/arXiv.2308.16139
2C%22firstName%22%3A%22Mathieu%22%2C%22lastName%22%3A%22Hatt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anjany%22%2C%22lastName%22%3A%22Sekuboyina%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maximilian%22%2C%22lastName%22%3A%22L%5Cu00f6ffler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hans%22%2C%22lastName%22%3A%22Liebl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Reuben%22%2C%22lastName%22%3A%22Dorent%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tom%22%2C%22lastName%22%3A%22Vercauteren%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jonathan%22%2C%22lastName%22%3A%22Shapey%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aaron%22%2C%22lastName%22%3A%22Kujawa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Cornelissen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Patrick%22%2C%22lastName%22%3A%22Langenhuizen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Achraf%22%2C%22lastName%22%3A%22Ben-Hamadou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ahmed%22%2C%22lastName%22%3A%22Rekik%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sergi%22%2C%22lastName%22%3A%22Pujades%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Edmond%22%2C%22lastName%22%3A%22Boyer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Federico%22%2C%22lastName%22%3A%22Bolelli%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Costantino%22%2C%22lastName%22%3A%22Grana%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Luca%22%2C%22lastName%22%3A%22Lumetti%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hamidreza%22%2C%22lastName%22%3A%22Salehi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jun%22%2C%22lastName%22%3A%22Ma%
22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ramtin%22%2C%22lastName%22%3A%22Gharleghi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Susann%22%2C%22lastName%22%3A%22Beier%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arcot%22%2C%22lastName%22%3A%22Sowmya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Eduardo%20A.%22%2C%22lastName%22%3A%22Garza-Villarreal%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thania%22%2C%22lastName%22%3A%22Balducci%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Diego%22%2C%22lastName%22%3A%22Angeles-Valdez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Roberto%22%2C%22lastName%22%3A%22Souza%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leticia%22%2C%22lastName%22%3A%22Rittner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Richard%22%2C%22lastName%22%3A%22Frayne%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuanfeng%22%2C%22lastName%22%3A%22Ji%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vincenzo%22%2C%22lastName%22%3A%22Ferrari%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Soumick%22%2C%22lastName%22%3A%22Chatterjee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Florian%22%2C%22lastName%22%3A%22Dubost%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefanie%22%2C%22lastName%22%3A%22Schreiber%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hendrik%22%2C%22lastName%22%3A%22Mattern%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oliver%22%2C%22lastName%22%3A%22Speck%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Daniel%22%2C%22lastName%22%3A%22Haehn%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstN
ame%22%3A%22Christoph%22%2C%22lastName%22%3A%22John%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andreas%22%2C%22lastName%22%3A%22N%5Cu00fcrnberger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jo%5Cu00e3o%22%2C%22lastName%22%3A%22Pedrosa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carlos%22%2C%22lastName%22%3A%22Ferreira%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guilherme%22%2C%22lastName%22%3A%22Aresta%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ant%5Cu00f3nio%22%2C%22lastName%22%3A%22Cunha%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aur%5Cu00e9lio%22%2C%22lastName%22%3A%22Campilho%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yannick%22%2C%22lastName%22%3A%22Suter%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jose%22%2C%22lastName%22%3A%22Garcia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alain%22%2C%22lastName%22%3A%22Lalande%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vicky%22%2C%22lastName%22%3A%22Vandenbossche%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aline%20Van%22%2C%22lastName%22%3A%22Oevelen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kate%22%2C%22lastName%22%3A%22Duquesne%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hamza%22%2C%22lastName%22%3A%22Mekhzoum%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jef%22%2C%22lastName%22%3A%22Vandemeulebroucke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Emmanuel%22%2C%22lastName%22%3A%22Audenaert%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Claudia%22%2C%22lastName%22%3A%22Krebs%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Timo%20van%22%2C%22lastName%22%3A%22Leeuwen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Evie%22%2C
%22lastName%22%3A%22Vereecke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hauke%22%2C%22lastName%22%3A%22Heidemeyer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rainer%22%2C%22lastName%22%3A%22R%5Cu00f6hrig%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Frank%22%2C%22lastName%22%3A%22H%5Cu00f6lzle%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vahid%22%2C%22lastName%22%3A%22Badeli%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kathrin%22%2C%22lastName%22%3A%22Krieger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Matthias%22%2C%22lastName%22%3A%22Gunzer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianxu%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Timo%20van%22%2C%22lastName%22%3A%22Meegdenburg%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amin%22%2C%22lastName%22%3A%22Dada%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Miriam%22%2C%22lastName%22%3A%22Balzer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jana%22%2C%22lastName%22%3A%22Fragemann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Frederic%22%2C%22lastName%22%3A%22Jonske%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Moritz%22%2C%22lastName%22%3A%22Rempe%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stanislav%22%2C%22lastName%22%3A%22Malorodov%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fin%20H.%22%2C%22lastName%22%3A%22Bahnsen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Constantin%22%2C%22lastName%22%3A%22Seibold%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexander%22%2C%22lastName%22%3A%22Jaus%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zdravko%22%2C%22lastName%22%3A%22Marinov%22%7D%2C%7B%22creatorT
ype%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22Jaeger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rainer%22%2C%22lastName%22%3A%22Stiefelhagen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ana%20Sofia%22%2C%22lastName%22%3A%22Santos%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mariana%22%2C%22lastName%22%3A%22Lindo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andr%5Cu00e9%22%2C%22lastName%22%3A%22Ferreira%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Victor%22%2C%22lastName%22%3A%22Alves%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Kamp%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amr%22%2C%22lastName%22%3A%22Abourayya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Felix%22%2C%22lastName%22%3A%22Nensa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fabian%22%2C%22lastName%22%3A%22H%5Cu00f6rst%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexander%22%2C%22lastName%22%3A%22Brehmer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lukas%22%2C%22lastName%22%3A%22Heine%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yannik%22%2C%22lastName%22%3A%22Hanusrichter%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22We%5Cu00dfling%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marcel%22%2C%22lastName%22%3A%22Dudda%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lars%20E.%22%2C%22lastName%22%3A%22Podleska%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Matthias%20A.%22%2C%22lastName%22%3A%22Fink%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julius%22%2C%22lastName%22%3A%22Keyl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22
Konstantinos%22%2C%22lastName%22%3A%22Tserpes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Moon-Sung%22%2C%22lastName%22%3A%22Kim%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shireen%22%2C%22lastName%22%3A%22Elhabian%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hans%22%2C%22lastName%22%3A%22Lamecker%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D%5Cu017eenan%22%2C%22lastName%22%3A%22Zuki%5Cu0107%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Beatriz%22%2C%22lastName%22%3A%22Paniagua%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christian%22%2C%22lastName%22%3A%22Wachinger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Urschler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Luc%22%2C%22lastName%22%3A%22Duong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jakob%22%2C%22lastName%22%3A%22Wasserthal%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peter%20F.%22%2C%22lastName%22%3A%22Hoyer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oliver%22%2C%22lastName%22%3A%22Basu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thomas%22%2C%22lastName%22%3A%22Maal%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Max%20J.%20H.%22%2C%22lastName%22%3A%22Witjes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gregor%22%2C%22lastName%22%3A%22Schiele%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ti-chiun%22%2C%22lastName%22%3A%22Chang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Seyed-Ahmad%22%2C%22lastName%22%3A%22Ahmadi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ping%22%2C%22lastName%22%3A%22Luo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bjoern%22%2C%22lastName%22%3A%22Menze%22%7D%2C%7B%2
2creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mauricio%22%2C%22lastName%22%3A%22Reyes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thomas%20M.%22%2C%22lastName%22%3A%22Deserno%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christos%22%2C%22lastName%22%3A%22Davatzikos%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Behrus%22%2C%22lastName%22%3A%22Puladi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pascal%22%2C%22lastName%22%3A%22Fua%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alan%20L.%22%2C%22lastName%22%3A%22Yuille%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jens%22%2C%22lastName%22%3A%22Kleesiek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jan%22%2C%22lastName%22%3A%22Egger%22%7D%5D%2C%22abstractNote%22%3A%22Prior%20to%20the%20deep%20learning%20era%2C%20shape%20was%20commonly%20used%20to%20describe%20the%20objects.%20Nowadays%2C%20state-of-the-art%20%28SOTA%29%20algorithms%20in%20medical%20imaging%20are%20predominantly%20diverging%20from%20computer%20vision%2C%20where%20voxel%20grids%2C%20meshes%2C%20point%20clouds%2C%20and%20implicit%20surface%20models%20are%20used.%20This%20is%20seen%20from%20numerous%20shape-related%20publications%20in%20premier%20vision%20conferences%20as%20well%20as%20the%20growing%20popularity%20of%20ShapeNet%20%28about%2051%2C300%20models%29%20and%20Princeton%20ModelNet%20%28127%2C915%20models%29.%20For%20the%20medical%20domain%2C%20we%20present%20a%20large%20collection%20of%20anatomical%20shapes%20%28e.g.%2C%20bones%2C%20organs%2C%20vessels%29%20and%203D%20models%20of%20surgical%20instrument%2C%20called%20MedShapeNet%2C%20created%20to%20facilitate%20the%20translation%20of%20data-driven%20vision%20algorithms%20to%20medical%20applications%20and%20to%20adapt%20SOTA%20vision%20algorithms%20to%20medical%20problems.%20As%20a%20unique%20feature%2C%20we%20directly%20model%20the%20majority%20
of%20shapes%20on%20the%20imaging%20data%20of%20real%20patients.%20As%20of%20today%2C%20MedShapeNet%20includes%2023%20dataset%20with%20more%20than%20100%2C000%20shapes%20that%20are%20paired%20with%20annotations%20%28ground%20truth%29.%20Our%20data%20is%20freely%20accessible%20via%20a%20web%20interface%20and%20a%20Python%20application%20programming%20interface%20%28API%29%20and%20can%20be%20used%20for%20discriminative%2C%20reconstructive%2C%20and%20variational%20benchmarks%20as%20well%20as%20various%20applications%20in%20virtual%2C%20augmented%2C%20or%20mixed%20reality%2C%20and%203D%20printing.%20Exemplary%2C%20we%20present%20use%20cases%20in%20the%20fields%20of%20classification%20of%20brain%20tumors%2C%20facial%20and%20skull%20reconstructions%2C%20multi-class%20anatomy%20completion%2C%20education%2C%20and%203D%20printing.%20In%20future%2C%20we%20will%20extend%20the%20data%20and%20improve%20the%20interfaces.%20The%20project%20pages%20are%3A%20https%3A%5C%2F%5C%2Fmedshapenet.ikim.nrw%5C%2F%20and%20https%3A%5C%2F%5C%2Fgithub.com%5C%2FJianningli%5C%2Fmedshapenet-feedback%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2308.16139%22%2C%22date%22%3A%222023-12-12%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2308.16139%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2308.16139%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-03T12%3A20%3A08Z%22%7D%7D%2C%7B%22key%22%3A%22PNZTU9BB%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22L%5Cu00fcth%20et%20al.%22%2C%22parsedDate%22%3A%222023-11-03%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BL%26%23xFC%3Bth%2C%20C.%20T.%2C%20Bungert%2C%20T
.%20J.%2C%20Klein%2C%20L.%2C%20%26amp%3B%20Jaeger%2C%20P.%20F.%20%282023%29.%20%26lt%3Bi%26gt%3B%26lt%3Bb%26gt%3B%26lt%3Bspan%20style%3D%26quot%3Bfont-style%3Anormal%3B%26quot%3B%26gt%3BNavigating%20the%20Pitfalls%20of%20Active%20Learning%20Evaluation%3A%20A%20Systematic%20Framework%20for%20Meaningful%20Performance%20Assessment%26lt%3B%5C%2Fspan%26gt%3B%26lt%3B%5C%2Fb%26gt%3B%26lt%3B%5C%2Fi%26gt%3B%20%28arXiv%3A2301.10625%29.%20arXiv.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2301.10625%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.2301.10625%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Navigating%20the%20Pitfalls%20of%20Active%20Learning%20Evaluation%3A%20A%20Systematic%20Framework%20for%20Meaningful%20Performance%20Assessment%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carsten%20T.%22%2C%22lastName%22%3A%22L%5Cu00fcth%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Till%20J.%22%2C%22lastName%22%3A%22Bungert%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lukas%22%2C%22lastName%22%3A%22Klein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20F.%22%2C%22lastName%22%3A%22Jaeger%22%7D%5D%2C%22abstractNote%22%3A%22Active%20Learning%20%28AL%29%20aims%20to%20reduce%20the%20labeling%20burden%20by%20interactively%20selecting%20the%20most%20informative%20samples%20from%20a%20pool%20of%20unlabeled%20data.%20While%20there%20has%20been%20extensive%20research%20on%20improving%20AL%20query%20methods%20in%20recent%20years%2C%20some%20studies%20have%20questioned%20the%20effectiveness%20of%20AL%20compared%20to%20emerging%20paradigms%20such%20as%20semi-supervised%20%28Semi-SL%29%20and%20self-supervised%20learning%20%28Self-SL%29%2C%20or%20a%20simple%20optimization%20of%
20classifier%20configurations.%20Thus%2C%20today%26%23039%3Bs%20AL%20literature%20presents%20an%20inconsistent%20and%20contradictory%20landscape%2C%20leaving%20practitioners%20uncertain%20about%20whether%20and%20how%20to%20use%20AL%20in%20their%20tasks.%20In%20this%20work%2C%20we%20make%20the%20case%20that%20this%20inconsistency%20arises%20from%20a%20lack%20of%20systematic%20and%20realistic%20evaluation%20of%20AL%20methods.%20Specifically%2C%20we%20identify%20five%20key%20pitfalls%20in%20the%20current%20literature%20that%20reflect%20the%20delicate%20considerations%20required%20for%20AL%20evaluation.%20Further%2C%20we%20present%20an%20evaluation%20framework%20that%20overcomes%20these%20pitfalls%20and%20thus%20enables%20meaningful%20statements%20about%20the%20performance%20of%20AL%20methods.%20To%20demonstrate%20the%20relevance%20of%20our%20protocol%2C%20we%20present%20a%20large-scale%20empirical%20study%20and%20benchmark%20for%20image%20classification%20spanning%20various%20data%20sets%2C%20query%20methods%2C%20AL%20settings%2C%20and%20training%20paradigms.%20Our%20findings%20clarify%20the%20inconsistent%20picture%20in%20the%20literature%20and%20enable%20us%20to%20give%20hands-on%20recommendations%20for%20practitioners.%20The%20benchmark%20is%20hosted%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2FIML-DKFZ%5C%2Frealistic-al%20.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2301.10625%22%2C%22date%22%3A%222023-11-03%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2301.10625%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2301.10625%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-04T14%3A20%3A14Z%22%7D%7D%2C%7B%22key%22%3A%224JYLYRHF%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Brandenburg%20et%20al.%22%2C%22parsedDate%22%3A%222023-11%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26q
uot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBrandenburg%2C%20J.%20M.%2C%20Jenke%2C%20A.%20C.%2C%20Stern%2C%20A.%2C%20Daum%2C%20M.%20T.%20J.%2C%20Schulze%2C%20A.%2C%20Younis%2C%20R.%2C%20Petrynowski%2C%20P.%2C%20Davitashvili%2C%20T.%2C%20Vanat%2C%20V.%2C%20Bhasker%2C%20N.%2C%20Schneider%2C%20S.%2C%20M%26%23xFC%3Bndermann%2C%20L.%2C%20Reinke%2C%20A.%2C%20Kolbinger%2C%20F.%20R.%2C%20J%26%23xF6%3Brns%2C%20V.%2C%20Fritz-Kebede%2C%20F.%2C%20Dugas%2C%20M.%2C%20Maier-Hein%2C%20L.%2C%20Klotz%2C%20R.%2C%20%26%23x2026%3B%20Wagner%2C%20M.%20%282023%29.%20%26lt%3Bb%26gt%3BActive%20learning%20for%20extracting%20surgomic%20features%20in%20robot-assisted%20minimally%20invasive%20esophagectomy%3A%20a%20prospective%20annotation%20study%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BSurgical%20Endoscopy%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B37%26lt%3B%5C%2Fi%26gt%3B%2811%29%2C%208577%26%23x2013%3B8593.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00464-023-10447-6%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00464-023-10447-6%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Active%20learning%20for%20extracting%20surgomic%20features%20in%20robot-assisted%20minimally%20invasive%20esophagectomy%3A%20a%20prospective%20annotation%20study%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johanna%20M.%22%2C%22lastName%22%3A%22Brandenburg%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexander%20C.%22%2C%22lastName%22%3A%22Jenke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Antonia%22%2C%22lastName%22%3A%22Stern%22%7D%2C%7B%22creatorType%22%3A%22aut
hor%22%2C%22firstName%22%3A%22Marie%20T.%20J.%22%2C%22lastName%22%3A%22Daum%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andr%5Cu00e9%22%2C%22lastName%22%3A%22Schulze%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rayan%22%2C%22lastName%22%3A%22Younis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Philipp%22%2C%22lastName%22%3A%22Petrynowski%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tornike%22%2C%22lastName%22%3A%22Davitashvili%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vincent%22%2C%22lastName%22%3A%22Vanat%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nithya%22%2C%22lastName%22%3A%22Bhasker%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sophia%22%2C%22lastName%22%3A%22Schneider%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lars%22%2C%22lastName%22%3A%22M%5Cu00fcndermann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annika%22%2C%22lastName%22%3A%22Reinke%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fiona%20R.%22%2C%22lastName%22%3A%22Kolbinger%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vanessa%22%2C%22lastName%22%3A%22J%5Cu00f6rns%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fleur%22%2C%22lastName%22%3A%22Fritz-Kebede%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Dugas%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lena%22%2C%22lastName%22%3A%22Maier-Hein%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rosa%22%2C%22lastName%22%3A%22Klotz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marius%22%2C%22lastName%22%3A%22Distler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J%5Cu00fcrgen%22%2C%22lastName%22%3A%22Weitz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%2
2Beat%20P.%22%2C%22lastName%22%3A%22M%5Cu00fcller-Stich%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefanie%22%2C%22lastName%22%3A%22Speidel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sebastian%22%2C%22lastName%22%3A%22Bodenstedt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Wagner%22%7D%5D%2C%22abstractNote%22%3A%22BACKGROUND%3A%20With%20Surgomics%2C%20we%20aim%20for%20personalized%20prediction%20of%20the%20patient%26%23039%3Bs%20surgical%20outcome%20using%20machine-learning%20%28ML%29%20on%20multimodal%20intraoperative%20data%20to%20extract%20surgomic%20features%20as%20surgical%20process%20characteristics.%20As%20high-quality%20annotations%20by%20medical%20experts%20are%20crucial%2C%20but%20still%20a%20bottleneck%2C%20we%20prospectively%20investigate%20active%20learning%20%28AL%29%20to%20reduce%20annotation%20effort%20and%20present%20automatic%20recognition%20of%20surgomic%20features.%5CnMETHODS%3A%20To%20establish%20a%20process%20for%20development%20of%20surgomic%20features%2C%20ten%20video-based%20features%20related%20to%20bleeding%2C%20as%20highly%20relevant%20intraoperative%20complication%2C%20were%20chosen.%20They%20comprise%20the%20amount%20of%20blood%20and%20smoke%20in%20the%20surgical%20field%2C%20six%20instruments%2C%20and%20two%20anatomic%20structures.%20Annotation%20of%20selected%20frames%20from%20robot-assisted%20minimally%20invasive%20esophagectomies%20was%20performed%20by%20at%20least%20three%20independent%20medical%20experts.%20To%20test%20whether%20AL%20reduces%20annotation%20effort%2C%20we%20performed%20a%20prospective%20annotation%20study%20comparing%20AL%20with%20equidistant%20sampling%20%28EQS%29%20for%20frame%20selection.%20Multiple%20Bayesian%20ResNet18%20architectures%20were%20trained%20on%20a%20multicentric%20dataset%2C%20consisting%20of%2022%20videos%20from%20two%20centers.%5CnRESULTS%3A%20In%20total%2C%2014%2C004%20frames%20were%20tag%20an
notated.%20A%20mean%20F1-score%20of%200.75%5Cu2009%5Cu00b1%5Cu20090.16%20was%20achieved%20for%20all%20features.%20The%20highest%20F1-score%20was%20achieved%20for%20the%20instruments%20%28mean%200.80%5Cu2009%5Cu00b1%5Cu20090.17%29.%20This%20result%20is%20also%20reflected%20in%20the%20inter-rater-agreement%20%281-rater-kappa%5Cu2009%26gt%3B%5Cu20090.82%29.%20Compared%20to%20EQS%2C%20AL%20showed%20better%20recognition%20results%20for%20the%20instruments%20with%20a%20significant%20difference%20in%20the%20McNemar%20test%20comparing%20correctness%20of%20predictions.%20Moreover%2C%20in%20contrast%20to%20EQS%2C%20AL%20selected%20more%20frames%20of%20the%20four%20less%20common%20instruments%20%281512%20vs.%20607%20frames%29%20and%20achieved%20higher%20F1-scores%20for%20common%20instruments%20while%20requiring%20less%20training%20frames.%5CnCONCLUSION%3A%20We%20presented%20ten%20surgomic%20features%20relevant%20for%20bleeding%20events%20in%20esophageal%20surgery%20automatically%20extracted%20from%20surgical%20video%20using%20ML.%20AL%20showed%20the%20potential%20to%20reduce%20annotation%20effort%20while%20keeping%20ML%20performance%20high%20for%20selected%20features.%20The%20source%20code%20and%20the%20trained%20models%20are%20published%5Cu00a0open%20source.%22%2C%22date%22%3A%222023-11%22%2C%22language%22%3A%22eng%22%2C%22DOI%22%3A%2210.1007%5C%2Fs00464-023-10447-6%22%2C%22ISSN%22%3A%221432-2218%22%2C%22url%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-04T13%3A52%3A11Z%22%7D%7D%2C%7B%22key%22%3A%226JKKIDVQ%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Klein%20et%20al.%22%2C%22parsedDate%22%3A%222023-10-30%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKlein%2C%20L.%2C%20
Godau, P., Kalinowski, P., Christodoulou, E., Reinke, A., Tizabi, M., Ferrer, L., Jäger, P., & Maier-Hein, L. (2025). Navigating prevalence shifts in image analysis algorithm deployment. Medical Image Analysis, 102, 103504. https://doi.org/10.1016/j.media.2025.103504
Fischer, M., Neher, P., Schüffler, P., Ziegler, S., Xiao, S., Peretzke, R., Clunie, D., Ulrich, C., Baumgartner, M., Muckenhuber, A., Almeida, S. D., Götz, M., Kleesiek, J., Nolden, M., Braren, R., & Maier-Hein, K. (2025). Unlocking the potential of digital pathology: Novel baselines for compression. Journal of Pathology Informatics, 100421. https://doi.org/10.1016/j.jpi.2025.100421
Bassi, P. R. A. S., Li, W., Tang, Y., Isensee, F., Wang, Z., Chen, J., Chou, Y.-C., Kirchhoff, Y., Rokuss, M., Huang, Z., Ye, J., He, J., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K. H., Jaeger, P., Ye, Y., … Zhou, Z. (2025). Touchstone Benchmark: Are We on the Right Way for Evaluating AI Algorithms for Medical Segmentation? (arXiv:2411.03670). arXiv. https://doi.org/10.48550/arXiv.2411.03670
Klein, L., Lüth, C. T., Schlegel, U., Bungert, T. J., El-Assady, M., & Jäger, P. F. (2025). Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics (arXiv:2409.16756). arXiv. https://doi.org/10.48550/arXiv.2409.16756
Adler, T. J., Nölke, J.-H., Reinke, A., Tizabi, M. D., Gruber, S., Trofimova, D., Ardizzone, L., Jaeger, P. F., Buettner, F., Köthe, U., & Maier-Hein, L. (2025). Application-driven validation of posteriors in inverse problems. Medical Image Analysis, 101, 103474. https://doi.org/10.1016/j.media.2025.103474
Zimmerer, D., & Maier-Hein, K. (2025). Beyond Heatmaps: A Comparative Analysis of Metrics for Anomaly Localization in Medical Images. In C. H. Sudre, R. Mehta, C. Ouyang, C. Qin, M. Rakic, & W. M. Wells (Eds.), Uncertainty for Safe Utilization of Machine Learning in Medical Imaging (pp. 138–148). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-73158-7_13
Wald, T., Ulrich, C., Suprijadi, J., Nohel, M., Peretzke, R., & Maier-Hein, K. H. (2024). An OpenMind for 3D medical vision self-supervised learning (arXiv:2412.17041). arXiv. https://doi.org/10.48550/arXiv.2412.17041
Wald, T., Ulrich, C., Lukyanenko, S., Goncharov, A., Paderno, A., Maerkisch, L., Jäger, P. F., & Maier-Hein, K. (2024). Revisiting MAE pre-training for 3D medical image segmentation (arXiv:2410.23132). arXiv. https://doi.org/10.48550/arXiv.2410.23132
Ulrich, C., Wald, T., Tempus, E., Rokuss, M., Jaeger, P. F., & Maier-Hein, K. (2024). RadioActive: 3D Radiological Interactive Segmentation Benchmark (arXiv:2411.07885). arXiv. https://doi.org/10.48550/arXiv.2411.07885
Kahl, K.-C., Erkan, S., Traub, J., Lüth, C. T., Maier-Hein, K., Maier-Hein, L., & Jaeger, P. F. (2024). SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks (arXiv:2411.19688). arXiv. https://doi.org/10.48550/arXiv.2411.19688
Cimini, B. A., Bankhead, P., D’Antuono, R., Fazeli, E., Fernandez-Rodriguez, J., Fuster-Barceló, C., Haase, R., Jambor, H. K., Jones, M. L., Jug, F., Klemm, A. H., Kreshuk, A., Marcotti, S., Martins, G. G., McArdle, S., Miura, K., Muñoz-Barrutia, A., Murphy, L. C., Nelson, M. S., … Eliceiri, K. W. (2024). The crucial role of bioimage analysts in scientific research and publication. Journal of Cell Science, 137(20), jcs262322. https://doi.org/10.1242/jcs.262322
Klein, L., Ziegler, S., Gerst, F., Morgenroth, Y., Gotkowski, K., Schöniger, E., Heni, M., Kipke, N., Friedland, D., Seiler, A., Geibelt, E., Yamazaki, H., Häring, H. U., Wagner, S., Nadalin, S., Königsrainer, A., Mihaljević, A. L., Hartmann, D., Fend, F., … Wagner, R. (2024). Explainable AI-based analysis of human pancreas sections identifies traits of type 2 diabetes. medRxiv. https://doi.org/10.1101/2024.10.23.24315937
Klein, L., Amara, K., Lüth, C. T., Strobelt, H., El-Assady, M., & Jaeger, P. F. (2024, October 12). Interactive Semantic Interventions for VLMs: A Human-in-the-Loop Investigation of VLM Failure. NeurIPS Safe Generative AI Workshop 2024. https://openreview.net/forum?id=3kMucCYhYN
Mahmutoglu, M. A., Rastogi, A., Schell, M., Foltyn-Dumitru, M., Baumgartner, M., Maier-Hein, K. H., Deike-Hofmann, K., Radbruch, A., Bendszus, M., Brugnara, G., & Vollmuth, P. (2024). Deep learning-based defacing tool for CT angiography: CTA-DEFACE. European Radiology Experimental, 8(1), 111. https://doi.org/10.1186/s41747-024-00510-9
Amara, K., Klein, L., Lüth, C., Jäger, P., Strobelt, H., & El-Assady, M. (2024). Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities (arXiv:2410.01690). arXiv. https://doi.org/10.48550/arXiv.2410.01690
Christodoulou, E., Reinke, A., Houhou, R., Kalinowski, P., Erkan, S., Sudre, C. H., Burgos, N., Boutaj, S., Loizillon, S., Solal, M., Rieke, N., Cheplygina, V., Antonelli, M., Mayer, L. D., Tizabi, M. D., Cardoso, M. J., Simpson, A., Jäger, P. F., Kopp-Schneider, A., … Maier-Hein, L. (2024). Confidence intervals uncovered: Are we ready for real-world medical imaging AI? (arXiv:2409.17763). arXiv. https://doi.org/10.48550/arXiv.2409.17763
Denner, S., Bujotzek, M., Bounias, D., Zimmerer, D., Stock, R., Jäger, P. F., & Maier-Hein, K. (2024). Visual Prompt Engineering for Medical Vision Language Models in Radiology (arXiv:2408.15802). arXiv. https://doi.org/10.48550/arXiv.2408.15802
Dorent, R., Khajavi, R., Idris, T., Ziegler, E., Somarouthu, B., Jacene, H., LaCasce, A., Deissler, J., Ehrhardt, J., Engelson, S., Fischer, S. M., Gu, Y., Handels, H., Kasai, S., Kondo, S., Maier-Hein, K., Schnabel, J. A., Wang, G., Wang, L., … Kapur, T. (2024). LNQ 2023 challenge: Benchmark of weakly-supervised techniques for mediastinal lymph node quantification (arXiv:2408.10069). arXiv. https://doi.org/10.48550/arXiv.2408.10069
Almeida, S. D., Norajitra, T., Lüth, C. T., Wald, T., Weru, V., Nolden, M., Jäger, P. F., von Stackelberg, O., Heußel, C. P., Weinheimer, O., Biederer, J., Kauczor, H.-U., & Maier-Hein, K. (2024). How do deep-learning models generalize across populations? Cross-ethnicity generalization of COPD detection. Insights into Imaging, 15(1), 198. https://doi.org/10.1186/s13244-024-01781-x
Klabunde, M., Wald, T., Schumacher, T., Maier-Hein, K., Strohmaier, M., & Lemmerich, F. (2024). ReSi: A Comprehensive Benchmark for Representational Similarity Measures (arXiv:2408.00531). arXiv. https://doi.org/10.48550/arXiv.2408.00531
Ozkan, S., Selver, M. A., Baydar, B., Kavur, A. E., Candemir, C., & Akar, G. B. (2024). Cross-Modal Learning via Adversarial Loss and Covariate Shift for Enhanced Liver Segmentation. IEEE Transactions on Emerging Topics in Computational Intelligence, 8(4), 2723–2735. https://doi.org/10.1109/TETCI.2024.3369868
Rädsch, T., Reinke, A., Weru, V., Tizabi, M. D., Heller, N., Isensee, F., Kopp-Schneider, A., & Maier-Hein, L. (2024). Quality Assured: Rethinking Annotation Strategies in Imaging AI (arXiv:2407.17596). arXiv. https://doi.org/10.48550/arXiv.2407.17596
Isensee, F., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K., & Jaeger, P. F. (2024). nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation (arXiv:2404.09556). arXiv. https://doi.org/10.48550/arXiv.2404.09556
Almeida, S. D., Norajitra, T., Lüth, C. T., Wald, T., Weru, V., Nolden, M., Jäger, P. F., von Stackelberg, O., Heußel, C. P., Weinheimer, O., Biederer, J., Kauczor, H.-U., & Maier-Hein, K. (2024). Prediction of disease severity in COPD: a deep learning approach for anomaly-based quantitative assessment of chest CT. European Radiology, 34(7), 4379–4392. https://doi.org/10.1007/s00330-023-10540-3
Fischer, M., Neher, P., Wald, T., Almeida, S. D., Xiao, S., Schüffler, P., Braren, R., Götz, M., Muckenhuber, A., Kleesiek, J., Nolden, M., & Maier-Hein, K. (2024). Learned Image Compression for HE-stained Histopathological Images via Stain Deconvolution (arXiv:2406.12623). arXiv. https://doi.org/10.48550/arXiv.2406.12623
Floca, R., Bohn, J., Haux, C., Wiestler, B., Zöllner, F. G., Reinke, A., Weiß, J., Nolden, M., Albert, S., Persigehl, T., Norajitra, T., Baeßler, B., Dewey, M., Braren, R., Büchert, M., Fallenberg, E. M., Galldiks, N., Gerken, A., Götz, M., … Bamberg, F. (2024). Radiomics workflow definition & challenges - German priority program 2177 consensus statement on clinically applied radiomics. Insights into Imaging, 15(1), 124. https://doi.org/10.1186/s13244-024-01704-w
Mais, L., Hirsch, P., Managan, C., Kandarpa, R., Rumberger, J. L., Reinke, A., Maier-Hein, L., Ihrke, G., & Kainmueller, D. (2024). FISBe: A Real-World Benchmark Dataset for Instance Segmentation of Long-Range thin Filamentous Structures. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22249–22259. https://doi.org/10.1109/CVPR52733.2024.02100
Yang, L., Kumar, P., Liu, Q., Chen, P., Li, C., Lüth, C., Krämer, L., Gabriel, C., Jeridi, A., Piraud, M., Stoeger, T., Staab-Weijnitz, C. A., Jäger, P., Rehberg, M., Isensee, F., & Schmid, O. (2024). Fresh Perspectives on Lung Morphometry and Pulmonary Drug Delivery: AI-powered 3D Imaging in Healthy and Fibrotic Murine Lungs. In B30. SCARRED FOR LIFE: TRANSLATIONAL RESEARCH IN INTERSTITIAL ABNORMALITIES AND LUNG FIBROSIS (Vol. 1–299, pp. A3242–A3242). American Thoracic Society. https://doi.org/10.1164/ajrccm-conference.2024.209.1_MeetingAbstracts.A3242
Zimmerer, D., & Maier-Hein, K. (2024, April 27). Revisiting Anomaly Localization Metrics. Medical Imaging with Deep Learning. https://openreview.net/forum?id=wGEDqSex3q&noteId=wGEDqSex3q
Denner, S., Zimmerer, D., Bounias, D., Bujotzek, M., Xiao, S., Kausch, L., Schader, P., Penzkofer, T., Jäger, P. F., & Maier-Hein, K. (2024). Leveraging Foundation Models for Content-Based Medical Image Retrieval in Radiology (arXiv:2403.06567). arXiv. https://doi.org/10.48550/arXiv.2403.06567
Gotkowski, K., Lüth, C., Jäger, P. F., Ziegler, S., Krämer, L., Denner, S., Xiao, S., Disch, N., Maier-Hein, K. H., & Isensee, F. (2024). Embarrassingly Simple Scribble Supervision for 3D Medical Segmentation (arXiv:2403.12834). arXiv. https://doi.org/10.48550/arXiv.2403.12834
Reinke, A., Tizabi, M. D., Baumgartner, M., Eisenmann, M., Heckmann-Nötzel, D., Kavur, A. E., Rädsch, T., Sudre, C. H., Acion, L., Antonelli, M., Arbel, T., Bakas, S., Benis, A., Buettner, F., Cardoso, M. J., Cheplygina, V., Chen, J., Christodoulou, E., Cimini, B. A., … Maier-Hein, L. (2024). Understanding metric-related pitfalls in image analysis validation. Nature Methods, 1–13. https://doi.org/10.1038/s41592-023-02150-0
Marinov, Z., Jäger, P. F., Egger, J., Kleesiek, J., & Stiefelhagen, R. (2024). Deep Interactive Segmentation of Medical Images: A Systematic Review and Taxonomy (arXiv:2311.13964). arXiv. https://doi.org/10.48550/arXiv.2311.13964
Lamm, L., Zufferey, S., Righetto, R. D., Wietrzynski, W., Yamauchi, K. A., Burt, A., Liu, Y., Zhang, H., Martinez-Sanchez, A., Ziegler, S., Isensee, F., Schnabel, J. A., Engel, B. D., & Peng, T. (2024). MemBrain v2: an end-to-end tool for the analysis of membranes in cryo-electron tomography. bioRxiv. https://doi.org/10.1101/2024.01.05.574336
Gotkowski, K., Gupta, S., Godinho, J. R. A., Tochtrop, C. G. S., Maier-Hein, K. H., & Isensee, F. (2024). ParticleSeg3D: A scalable out-of-the-box deep learning segmentation solution for individual particle characterization from micro CT images in mineral processing and recycling. Powder Technology, 434, 119286. https://doi.org/10.1016/j.powtec.2023.119286
Bounias, D., Baumgartner, M., Neher, P., Kovacs, B., Floca, R., Jaeger, P. F., Kapsner, L. A., Eberle, J., Hadler, D., Laun, F., Ohlmeyer, S., Maier-Hein, K. H., & Bickelhaupt, S. (2024). Abstract: Object Detection for Breast Diffusion-weighted Imaging. In A. Maier, T. M. Deserno, H. Handels, K. Maier-Hein, C. Palm, & T. Tolxdorff (Eds.), Bildverarbeitung für die Medizin 2024 (pp. 334–334). Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-658-44037-4_84
Almeida, S. D., Norajitra, T., Lüth, C. T., Wald, T., Weru, V., Nolden, M., Jäger, P. F., von Stackelberg, O., Heußel, C. P., Weinheimer, O., Biederer, J., Kauczor, H.-U., & Maier-Hein, K. (2024). Capturing COPD heterogeneity: anomaly detection and parametric response mapping comparison for phenotyping on chest computed tomography. Frontiers in Medicine, 11, 1360706. https://doi.org/10.3389/fmed.2024.1360706
Roy, S., Koehler, G., Baumgartner, M., Ulrich, C., Isensee, F., Jaeger, P. F., & Maier-Hein, K. (2024). Abstract: 3D Medical Image Segmentation with Transformer-based Scaling of ConvNets. In A. Maier, T. M. Deserno, H. Handels, K. Maier-Hein, C. Palm, & T. Tolxdorff (Eds.), Bildverarbeitung für die Medizin 2024 (pp. 79–79). Springer Fachmedien. https://doi.org/10.1007/978-3-658-44037-4_23
Traub, J., Bungert, T. J., Lüth, C. T., Baumgartner, M., Maier-Hein, K. H., Maier-Hein, L., & Jaeger, P. F. (2024). Overcoming Common Flaws in the Evaluation of Selective Classification Systems (arXiv:2407.01032). arXiv. https://doi.org/10.48550/arXiv.2407.01032
Koehler, G., Wald, T., Ulrich, C., Zimmerer, D., Jaeger, P. F., Franke, J. K., Kohl, S., Isensee, F., & Maier-Hein, K. H. (2024). RecycleNet: Latent Feature Recycling Leads to Iterative Decision Refinement. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 810–818. https://openaccess.thecvf.com/content/WACV2024/html/Kohler_RecycleNet_Latent_Feature_Recycling_Leads_to_Iterative_Decision_Refinement_WACV_2024_paper.html
Kahl, K.-C., Lüth, C. T., Zenk, M., Maier-Hein, K., & Jaeger, P. F. (2024). ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation (arXiv:2401.08501). arXiv. https://doi.org/10.48550/arXiv.2401.08501
Li, J., Zhou, Z., Yang, J., Pepe, A., Gsaxner, C., Luijten, G., Qu, C., Zhang, T., Chen, X., Li, W., Wodzinski, M., Friedrich, P., Xie, K., Jin, Y., Ambigapathy, N., Nasca, E., Solak, N., Melito, G. M., Vu, V. D., … Egger, J. (2023). MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision (arXiv:2308.16139). arXiv. https://doi.org/10.48550/arXiv.2308.16139
Lüth, C. T., Bungert, T. J., Klein, L., & Jaeger, P. F. (2023). Navigating the Pitfalls of Active Learning Evaluation: A Systematic Framework for Meaningful Performance Assessment (arXiv:2301.10625). arXiv. https://doi.org/10.48550/arXiv.2301.10625
Brandenburg, J. M., Jenke, A. C., Stern, A., Daum, M. T. J., Schulze, A., Younis, R., Petrynowski, P., Davitashvili, T., Vanat, V., Bhasker, N., Schneider, S., Mündermann, L., Reinke, A., Kolbinger, F. R., Jörns, V., Fritz-Kebede, F., Dugas, M., Maier-Hein, L., Klotz, R., … Wagner, M. (2023). Active learning for extracting surgomic features in robot-assisted minimally invasive esophagectomy: a prospective annotation study. Surgical Endoscopy, 37(11), 8577–8593. https://doi.org/10.1007/s00464-023-10447-6
Klein, L., Ziegler, S., Laufer, F., Debus, C., Götz, M., Maier‐Hein, K., Paetzold, U. W., Isensee, F., & Jäger, P. F. (2023). Discovering Process Dynamics for Scalable Perovskite Solar Cell Manufacturing with Explainable AI. Advanced Materials, 2307160. https://doi.org/10.1002/adma.202307160
Brugnara, G., Baumgartner, M., Scholze, E. D., Deike-Hofmann, K., Kades, K., Scherer, J., Denner, S., Meredig, H., Rastogi, A., Mahmutoglu, M. A., Ulfert, C., Neuberger, U., Schönenberger, S., Schlamp, K., Bendella, Z., Pinetz, T., Schmeel, C., Wick, W., Ringleb, P. A., … Vollmuth, P. (2023). Deep-learning based detection of vessel occlusions on CT-angiography in patients with suspected acute ischemic stroke. Nature Communications, 14(1), 4938. https://doi.org/10.1038/s41467-023-40564-8
Gutsche, R., Lowis, C., Ziemons, K., Kocher, M., Ceccon, G., Brambilla, C. R., Shah, N. J., Langen, K.-J., Galldiks, N., Isensee, F., & Lohmann, P. (2023). Automated Brain Tumor Detection and Segmentation for Treatment Response Assessment Using Amino Acid PET. Journal of Nuclear Medicine: Official Publication, Society of Nuclear Medicine, jnumed.123.265725. https://doi.org/10.2967/jnumed.123.265725
Ma, J., Zhang, Y., Gu, S., Ge, C., Ma, S., Young, A., Zhu, C., Meng, K., Yang, X., Huang, Z., Zhang, F., Liu, W., Pan, Y., Huang, S., Wang, J., Sun, M., Xu, W., Jia, D., Choi, J. W., … Wang, B. (2023). Unleashing the Strengths of Unlabeled Data in Pan-cancer Abdominal Organ Quantification: the FLARE22 Challenge (arXiv:2308.05862). arXiv. https://doi.org/10.48550/arXiv.2308.05862
Bounias, D., Baumgartner, M., Neher, P., Kovacs, B., Floca, R., Jaeger, P. F., Kapsner, L., Eberle, J., Hadler, D., Laun, F., Ohlmeyer, S., Maier-Hein, K., & Bickelhaupt, S. (2023, July 19). Risk-adjusted Training and Evaluation for Medical Object Detection in Breast Cancer MRI. ICML 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH). https://openreview.net/forum?id=WwceaG9wOU#all



Projects

Helmholtz Imaging Projects are granted to cross-disciplinary research teams that identify innovative research topics at the intersection of imaging and information & data science, initiate cross-cutting research collaborations, and thus underpin the growth of the Helmholtz Imaging network. These annual calls are based on the general concept for Helmholtz Imaging and are in line with the future topics of the Initiative and Networking Fund (INF).

Research Unit DESY

The Research Unit at DESY focuses on the early stages of the imaging pipeline, developing methods for advanced image reconstruction, including the optimization of measurements and the combination of classical methods with data-driven approaches.

Our goal is to extract the maximum amount of (quantitative) information from given or designed measurements.
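To give a flavor of what combining classical reconstruction with data-driven components can look like, here is a minimal plug-and-play-style sketch in Python. It is purely illustrative and not the unit's actual method: the measurement operator `A`, the `denoise` stand-in (a simple moving average in place of a trained network), and all parameters are assumptions chosen for the example.

```python
import numpy as np

def denoise(x):
    # Placeholder for a data-driven prior: a 3-tap moving average.
    # In a real pipeline this would be a trained denoising network.
    padded = np.pad(x, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def reconstruct(A, y, n_iter=50, step=0.1):
    """Recover x from noisy measurements y ≈ A @ x by alternating a
    classical data-fidelity gradient step with a learned-prior step."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)  # classical gradient step
        x = denoise(x)                    # data-driven regularization
    return x

rng = np.random.default_rng(0)
# Smooth ground-truth signal and a random measurement operator.
x_true = np.convolve(rng.standard_normal(32), np.ones(5) / 5, mode="same")
A = rng.standard_normal((64, 32)) / 8.0
y = A @ x_true + 0.01 * rng.standard_normal(64)  # noisy measurements

x_hat = reconstruct(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The alternation mirrors the idea in the text: the gradient step enforces consistency with the physical measurement model, while the denoising step injects prior knowledge that, in data-driven variants, comes from training data.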

Research Unit MDC

The Research Unit at MDC focuses on integrating heterogeneous imaging data across modalities, scales, and time. We develop concepts and algorithms for generic processing, stitching, fusion, and visualization of large, high-dimensional datasets.

Our aim is to enable seamless analysis of complex imaging data without restrictions on the underlying modalities.
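As a toy illustration of the stitching and fusion building blocks mentioned above, the sketch below blends two horizontally overlapping image tiles by linear distance weighting. The `fuse_tiles` helper and its parameters are hypothetical and vastly simplified compared to the large, high-dimensional, multi-modal datasets the unit actually targets.

```python
import numpy as np

def fuse_tiles(left, right, overlap):
    """Stitch two equally sized tiles that share `overlap` columns,
    blending the shared region with linear weights."""
    h, w = left.shape
    out_w = 2 * w - overlap
    weights = np.linspace(1.0, 0.0, overlap)  # left tile fades out
    fused = np.zeros((h, out_w))
    fused[:, : w - overlap] = left[:, : w - overlap]   # left-only part
    fused[:, w:] = right[:, overlap:]                  # right-only part
    fused[:, w - overlap : w] = (
        left[:, -overlap:] * weights + right[:, :overlap] * (1 - weights)
    )
    return fused

tile_a = np.full((4, 6), 1.0)
tile_b = np.full((4, 6), 3.0)
mosaic = fuse_tiles(tile_a, tile_b, overlap=2)
print(mosaic.shape)  # (4, 10)
```

Real stitching additionally has to estimate the tile offsets (registration), handle more than two tiles in 2D/3D grids, and blend across modalities; the weighted-average overlap shown here is only the final fusion step in miniature.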

Publications

Helmholtz Imaging captures the world of science. Discover unique data sets, ready-to-use software tools, and top-level research papers. The platform’s output originates from our research groups as well as from projects funded by us, theses supervised by us, and collaborations initiated through us. Together, this showcases the full diversity of Helmholtz Imaging.