Deep Vision for Breast Cancer Classification and Segmentation




Fulton, Lawrence V.
McLeod, Alexander
Dolezel, Diane
Bastian, Nathaniel
Fulton, Christopher P.


Multidisciplinary Digital Publishing Institute


(1) Background: The lifetime odds of a female breast cancer diagnosis have risen from 1 in 11 in 1975 to 1 in 8 today. Mammography false positive rates (FPR) are associated with overdiagnosis and overtreatment, while false negative rates (FNR) increase morbidity and mortality. (2) Methods: Deep vision supervised learning classifies 299 × 299 pixel de-noised mammography images as negative or non-negative, using models built on 55,890 pre-processed training images and applied to 15,364 unseen test images. A small image representation from the fitted training model is returned to evaluate the portion of the loss-function gradient, taken with respect to the image, that maximizes the classification probability. This gradient is then mapped back onto the original image, highlighting the areas most influential for the classification (perhaps masses or boundary areas). (3) Results: Initial classification results were 97% accurate, 99% specific, and 83% sensitive. Gradient techniques for unsupervised region-of-interest mapping clearly identified the areas most associated with the classification on positive mammograms and might be used to support clinician analysis. (4) Conclusions: Deep vision techniques hold promise for addressing overdiagnosis and overtreatment, reducing underdiagnosis, and automating region-of-interest identification in mammography.



deep vision, breast cancer, machine learning, region of interest detection


Fulton, L., McLeod, A., Dolezel, D., Bastian, N., & Fulton, C. P. (2021). Deep vision for breast cancer classification and segmentation. Cancers, 13(21), 5384.


Rights License

This work is licensed under a Creative Commons Attribution 4.0 International License.
