Description
Investigators manipulated images from the NYU Breast Cancer Screening Dataset to identify differences in the perceptual features used in diagnosis by radiologists versus deep neural networks (DNNs). Two studies were conducted. In the reader study, a set of 720 exams was processed with Gaussian low-pass filtering at varying severity levels, and ten radiologists and five DNNs (trained on unperturbed data) provided binary predictions on whether a malignant lesion was present in each breast (yes or no). In the annotation study, a subset of 120 exams containing malignancies was presented to seven radiologists, who each annotated up to three regions of interest (ROIs) containing suspicious features. Low-pass filtering was then applied to the interior of the ROIs, the exterior of the ROIs, or the entire image before the images were presented to DNNs (trained on unperturbed data). The resulting dataset contains radiologist and DNN reader predictions and radiologist annotations from both studies.
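The perturbation procedure can be illustrated with a short sketch. The code below assumes the low-pass filter is a Gaussian blur parameterized by its standard deviation (sigma), with larger sigma corresponding to greater severity, and that each ROI is supplied as a boolean mask; the function names, mask, and image dimensions are illustrative assumptions, not drawn from the study's released code.

```python
# Minimal sketch of the image perturbations described above, assuming a
# Gaussian blur as the low-pass filter and boolean masks for the ROIs.
# All names here are hypothetical, not taken from the study's repository.
import numpy as np
from scipy.ndimage import gaussian_filter

def low_pass(image: np.ndarray, sigma: float) -> np.ndarray:
    """Apply Gaussian low-pass filtering; larger sigma = more severe."""
    return gaussian_filter(image, sigma=sigma)

def filter_outside_roi(image: np.ndarray, roi_mask: np.ndarray, sigma: float) -> np.ndarray:
    """Blur the exterior of the ROI, leaving the annotated region intact."""
    blurred = low_pass(image, sigma)
    return np.where(roi_mask, image, blurred)

def filter_inside_roi(image: np.ndarray, roi_mask: np.ndarray, sigma: float) -> np.ndarray:
    """Blur the interior of the ROI, leaving the surroundings intact."""
    blurred = low_pass(image, sigma)
    return np.where(roi_mask, blurred, image)

# Example: a synthetic image with one rectangular ROI (both hypothetical).
image = np.random.rand(1024, 832).astype(np.float32)
roi_mask = np.zeros(image.shape, dtype=bool)
roi_mask[400:520, 300:420] = True  # stand-in for a radiologist annotation
perturbed = filter_inside_roi(image, roi_mask, sigma=8.0)
```

In this framing, filtering the full image at increasing sigma corresponds to the severity levels of the reader study, while the interior/exterior variants correspond to the ROI-based conditions of the annotation study.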
Geographic Coverage
New York (State) - New York City
Subject of Study
Subject Domain
Keywords

Access

Restrictions
Free to All
Instructions
Reader data and the code underlying the probabilistic modeling described in the associated publication may be downloaded from GitHub for use under the terms of the GNU AGPLv3 license.
Access via GitHub

Data and code

Associated Publications
Software Used
Deep Multi-view
Globally-Aware Multiple Instance Classifier
PyStan
PyTorch
Study Type
Observational
Grant Support
HDR-1922658/NSF
9683/Gordon and Betty Moore Foundation