DenseNet | Densely Connected Convolutional Networks

Code With Aarohi
DenseNet is an image classification model. It overcomes the vanishing gradient problem and achieves higher accuracy than other deep convolutional neural networks by simply connecting every layer directly with every other layer.



For queries: you can comment in the comment section or mail at [email protected]

Topics Covered:
• What is DenseNet?
• Architecture of DenseNet
• Advantages of DenseNet over other Image Classification Models




A DenseNet consists of dense blocks; the architecture described here has 4 dense blocks. Each dense block consists of convolution layers, and after each dense block a transition layer is added.
Dense Block:
Every layer in a dense block is directly connected to all subsequent layers, and each layer receives the feature maps of all preceding layers. In other words, the input of a layer inside a DenseNet is the concatenation of the feature maps from all previous layers.
We cannot concatenate feature maps if their sizes differ. So, to be able to perform the concatenation operation, we need to make sure that the feature maps being concatenated have the same spatial size.
But we can't just keep the feature maps the same size throughout the network: an essential part of convolutional networks is down-sampling layers that change the size of feature maps. That is the job of the transition layers described below.
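As a rough sketch of this connectivity pattern (PyTorch is assumed here; the channel counts and the `make_layer` placeholder are illustrative, not taken from the video):

```python
import torch
import torch.nn as nn

def make_layer(in_channels, growth_rate):
    # Placeholder for the BN -> ReLU -> 3x3 Conv composite described below.
    return nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

class DenseBlock(nn.Module):
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        # Layer i sees the block's input plus the i outputs produced before it.
        self.layers = nn.ModuleList(
            make_layer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate every earlier feature map along the channel axis.
            # This only works because all maps in a block share one spatial size.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```

Each layer adds only `growth_rate` new feature maps, so the channel count grows linearly through the block rather than being re-computed at every layer.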


Convolutional Layer:
Each convolution layer consists of three consecutive operations: batch normalization (BN), followed by a rectified linear unit (ReLU) and a 3 × 3 convolution (Conv). Dropout can also be added, depending on your architecture requirements.
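A minimal sketch of that composite operation (again PyTorch; the dropout rate is an example value, not something fixed by the video):

```python
import torch.nn as nn

def composite_fn(in_channels, growth_rate, drop_rate=0.0):
    # BN -> ReLU -> 3x3 Conv, with dropout optionally appended.
    layers = [
        nn.BatchNorm2d(in_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
    ]
    if drop_rate > 0:
        layers.append(nn.Dropout2d(drop_rate))
    return nn.Sequential(*layers)
```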

Transition Layer:
A 1×1 convolution followed by 2×2 average pooling is used as the transition layer between two contiguous dense blocks.
Feature map sizes stay the same within a dense block so that the maps can be concatenated easily; the transition layers perform the down-sampling between blocks.
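A matching sketch (PyTorch; the output channel count is left as a parameter because the amount of channel compression is a design choice, not stated here):

```python
import torch.nn as nn

def transition(in_channels, out_channels):
    # 1x1 conv shrinks the channel count accumulated by concatenation;
    # 2x2 average pooling halves the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
        nn.AvgPool2d(kernel_size=2, stride=2),
    )
```

A full network would then alternate `DenseBlock` and `transition` four times, as described above, before a final pooling and classification layer.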



Advantages of DenseNets:

1. Strengthen feature propagation
a. i.e., features learned by layer 1 are directly accessible by layer 4

2. Encourage feature reuse
a. i.e., layer 4 doesn't have to relearn a feature learned by layer 1, because it can access that information directly via concatenation

3. Reduce the number of parameters
a. The number of filters per layer (the new feature maps each layer has to produce for the next one) is smaller in DenseNet than in architectures without skip connections. Without skips, each layer can only "talk" to the very next layer through its own outputs, so it must re-encode everything the network has learned so far. When information "skips" intermediate layers via concatenation, that extra filter depth is no longer required, so we don't have to keep track of as many convolutional parameters.
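A back-of-the-envelope illustration (the channel counts below are assumptions chosen only to show the effect; a 3×3 convolution has in_channels × out_channels × 9 weights):

```python
def conv3x3_params(in_ch, out_ch):
    # Weight count of a 3x3 convolution, ignoring any bias terms.
    return in_ch * out_ch * 3 * 3

# A conventional layer that must re-encode everything for the next layer:
print(conv3x3_params(256, 256))  # 589824
# A DenseNet layer that only adds k = 32 new maps on top of 256 reused ones:
print(conv3x3_params(256, 32))   # 73728
```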