End to end detection and segmentation of the nuclei in divergent images
- neovijayk
- Jul 6, 2020
- 2 min read
In this article we will briefly look at how U-Net works for segmentation. I have included the key information about the model, which I trained on only 30 input samples as an experiment to see and learn how U-Net works. The details of the implementation are as follows:
Purpose
Find the nuclei in divergent images to advance medical discovery
Segment cell nuclei from the input image; the output is the segmented nuclei
Segmentation done using: U-Net model
Library used: Keras
Sample Input :
Input data is taken from the 2018 Data Science Bowl (https://www.kaggle.com/c/data-science-bowl-2018/data) competition.
The data set contains a large number of segmented nuclei images (in .png format).
The training set consists of images and their annotated masks.

Output:
The model predicts segmentation masks for the test-set images (the test set contains images only).
The output has one class, i.e. the nucleus (segmented), against a black background.
Results: loss 0.1084, accuracy 0.9569

Model information:

U-Net model
Broadly, the architecture can be divided into:
Downsampling path: a contracting path (left side)
Bottleneck
Upsampling path: an expansive path (right side).
Total of 32 layers (23 of them convolutional)
Trainable params: 1,962,625
Optimizer: “adam”
Loss (error) function: “binary_crossentropy”
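As a minimal Keras sketch of the compilation settings above (the tiny two-layer model here is only a placeholder for the real U-Net architecture, and the 128×128×3 input shape is an assumption):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Toy stand-in for the real U-Net: one conv layer plus a 1x1 output head.
inputs = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)

# The settings listed above: "adam" optimizer, "binary_crossentropy" loss.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```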
Basic terminologies used in the model
Downsampling Path: The contracting path follows the typical architecture of a convolutional network. It consists of:
the repeated application of two 3×3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2×2 max pooling operation with stride 2 for downsampling.
At each downsampling step we double the number of feature channels.
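One step of the contracting path can be sketched in Keras as below. Note a common deviation: the original paper uses unpadded convolutions, while many Keras implementations (including this sketch) use `padding="same"` so that no cropping is needed later; the shapes in the comments assume a 128×128×1 input.

```python
import tensorflow as tf
from tensorflow.keras import layers

def contracting_block(x, n_filters):
    # Two 3x3 convolutions, each followed by a ReLU...
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
    skip = x  # kept for the skip connection to the expansive path
    # ...then a 2x2 max pooling with stride 2 halves the spatial size.
    x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    return x, skip

inp = layers.Input(shape=(128, 128, 1))
x, s1 = contracting_block(inp, 16)  # 128x128x16 -> 64x64x16
x, s2 = contracting_block(x, 32)    # feature channels doubled at each step
```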
Upsampling path:
Every step in the expansive path consists of an upsampling of the feature map followed by a 2×2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3×3 convolutions, each followed by a ReLU.
The cropping is necessary due to the loss of border pixels in every convolution.
Skip connections: The feature maps from the downsampling path are concatenated with the feature maps in the upsampling path. These skip connections supply local (high-resolution) information to complement the global information during upsampling.
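A single expansive step with its skip connection might look like this in Keras (the feature-map shapes are assumed toy sizes; `padding="same"` avoids the cropping the paper needs for unpadded convolutions):

```python
import tensorflow as tf
from tensorflow.keras import layers

bottleneck = layers.Input(shape=(32, 32, 64))  # assumed bottleneck features
skip = layers.Input(shape=(64, 64, 32))        # matching contracting-path map

# 2x2 up-convolution doubles the spatial size and halves the channels.
x = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(bottleneck)
# Skip connection: concatenate the contracting-path feature map.
x = layers.concatenate([x, skip])              # -> 64x64x64
# Two 3x3 convolutions, each followed by a ReLU.
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
```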
Final Layer:
At the final layer a 1×1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.
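For this single-class (nucleus vs. background) task, the final 1×1 convolution can be sketched as a one-channel sigmoid output (the 128×128×64 feature-map size is an assumption):

```python
import tensorflow as tf
from tensorflow.keras import layers

features = layers.Input(shape=(128, 128, 64))  # assumed 64-channel feature map
# The 1x1 convolution maps each 64-component feature vector to a class
# score; a sigmoid turns it into a per-pixel nucleus probability.
mask = layers.Conv2D(1, 1, activation="sigmoid")(features)
```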
For the complete code implementation and the data used, please refer to my GitHub repository. If you have any questions, feel free to ask in the comments section below. Also, please like and subscribe to my blog.