Cancer Detection Convolutional Neural Networks

    Radiologists use medical scans to help diagnose and treat patients. For this project, I had access to a dataset of cell-tissue scans, some cancerous and many not. In about 85% of the samples, a radiologist had determined that no cancer was present. My goal was to use these images to train a Convolutional Neural Network to detect cancer clusters in each image – essentially building a binary classifier for each pixel of the image.

Network Architecture

The U-Net

This model served only as a point of reference. My images were sized 512×512, and I added padding and used a stride of 1 so that the spatial size of the image stayed the same through each convolution. During training and testing, however, I resized the input images down to 400×400 to speed up training of the model.
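A minimal sketch of the "same-size" convolution idea described above (the layer widths and block structure here are assumptions, not the exact architecture): 3×3 convolutions with padding=1 and stride=1 leave the height and width unchanged.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """U-Net-style block: two 3x3 convs that preserve spatial size
    (padding=1, stride=1), so only the channel count changes."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 3, 400, 400)  # one RGB tissue scan at the reduced size
y = DoubleConv(3, 16)(x)
print(y.shape)  # torch.Size([1, 16, 400, 400]) – spatial size preserved
```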


Due to limited computing resources, I trained my model on Google Colab for only about 90 minutes. With the reduced input size (400×400), this still allowed for 5 epochs of training. Due to memory restrictions, batch sizes had to be quite small. You’ll notice two prominent spikes in the cross-entropy loss, likely due to the large class imbalance causing overfitting on small batches. It’s important to recognize that a model that simply guessed every pixel was non-cancerous would achieve an accuracy of 85%. Our model improves on that baseline by about 10 percentage points, for a total accuracy of about 95%.
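The 85% baseline comes directly from the class ratio: if 85% of pixels are non-cancerous, an all-negative prediction is right 85% of the time. The snippet below illustrates that arithmetic, plus one common mitigation for the imbalance (class-weighted cross-entropy, shown here as an assumption, not necessarily what this project used):

```python
import torch
import torch.nn as nn

# Build a synthetic label mask with ~15% "cancerous" (class 1) pixels.
labels = torch.zeros(400 * 400, dtype=torch.long)
labels[: int(0.15 * labels.numel())] = 1

# A model that predicts "non-cancerous" (class 0) everywhere:
all_negative = torch.zeros_like(labels)
baseline_acc = (all_negative == labels).float().mean().item()
print(f"baseline accuracy: {baseline_acc:.2f}")  # baseline accuracy: 0.85

# One standard way to counter the imbalance: up-weight the rare class
# in the loss (weights here are illustrative, not tuned).
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.15, 0.85]))
```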

Final Prediction

When all was said and done, our trained model was saved via pickle. Its input is a single 3×512×512 image, and its output is a binary 512×512 mask indicating where cancerous cells were detected. Technically, due to the network architecture, inputs can be of size (8n)×(8n) for any natural number n; however, since the model was trained on smaller images, feeding it sizes larger than those it was trained on would hurt accuracy. All in all, it was a fun project that helped sharpen my PyTorch skills.
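The (8n)×(8n) constraint follows from the downsampling path (assuming three 2×2 pooling stages, which is consistent with the factor of 8): each pool halves the sides, and each upsample doubles them back, so the round trip is only exact when the sides are multiples of 8. A quick sketch:

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2)              # halves height and width
up = nn.Upsample(scale_factor=2)    # doubles height and width

x = torch.randn(1, 1, 512, 512)
down = pool(pool(pool(x)))          # three pools: 512 -> 256 -> 128 -> 64
print(down.shape[-1])               # 64
restored = up(up(up(down)))         # three upsamples: 64 -> 512
print(restored.shape[-1])           # 512

# A side of 500 would not round-trip: 500 -> 250 -> 125 -> 62 (floor),
# then 62 -> 496 on the way back up, losing 4 pixels per side.
```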