Pyramid Encoder
- class vformer.encoder.pyramid.PVTEncoder(dim, num_heads, mlp_ratio, depth, qkv_bias, qk_scale, p_dropout, attn_dropout, drop_path, activation, use_dwconv, sr_ratio, linear=False, drop_path_mode='batch')[source]
- Parameters
dim (int) – Dimension of the input tensor
num_heads (int) – Number of attention heads
mlp_ratio – Ratio of MLP hidden dimension to embedding dimension
depth (int) – Number of attention layers in the encoder
qkv_bias (bool) – Whether to add a bias vector to the q, k, and v matrices
qk_scale (float, optional) – Override default qk scale of head_dim ** -0.5 in Spatial Attention if set
p_dropout (float) – Dropout probability
attn_dropout (float) – Dropout probability for the attention weights
drop_path (tuple(float)) – Stochastic depth drop rates, one per attention layer
activation (nn.Module) – Activation layer
use_dwconv (bool) – Whether to use depth-wise convolutions in overlap-patch embedding
sr_ratio (float) – Spatial Reduction ratio
linear (bool) – Whether to use linear Spatial attention, default is False
drop_path_mode (str) – Mode for StochasticDepth (https://pytorch.org/vision/main/generated/torchvision.ops.stochastic_depth.html), must be one of "batch" or "row", default is "batch"
- forward(x, **kwargs)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
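Since `drop_path` expects one stochastic depth rate per layer, a common convention (used by PVT-style models, though not necessarily vformer's exact scheme) is to linearly increase the rate from 0 up to a maximum across all layers in the network. A minimal sketch, assuming that linear rule and hypothetical stage depths:

```python
# Hypothetical example: linearly spaced stochastic-depth rates across all
# layers of a 4-stage pyramid model (depths and max rate are assumptions,
# not values taken from vformer).
depths = [3, 4, 6, 3]          # number of encoder layers per stage
max_drop_path = 0.1            # rate for the deepest layer
total = sum(depths)

# Rate grows linearly from 0.0 (first layer) to max_drop_path (last layer).
dpr = [max_drop_path * i / (total - 1) for i in range(total)]

# Split the flat list into one drop_path tuple per stage, as an encoder
# like PVTEncoder would consume it.
per_stage, start = [], 0
for d in depths:
    per_stage.append(tuple(dpr[start:start + d]))
    start += d
```

Each stage's tuple can then be passed as the `drop_path` argument of the corresponding encoder.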
- class vformer.encoder.pyramid.PVTFeedForward(dim, hidden_dim=None, out_dim=None, activation=<class 'torch.nn.modules.activation.GELU'>, p_dropout=0.0, linear=False, use_dwconv=False, **kwargs)[source]
- Parameters
dim (int) – Dimension of the input tensor
hidden_dim (int, optional) – Dimension of hidden layer
out_dim (int, optional) – Dimension of output tensor
activation (nn.Module) – Activation layer, default is nn.GELU
p_dropout (float) – Dropout probability, default is 0.0
linear (bool) – Whether to use linear Spatial attention, default is False
use_dwconv (bool) – Whether to use depth-wise convolutions, default is False
kernel_size_dwconv (int) – Kernel size for the 2D depth-wise convolution
stride_dwconv (int) – Stride for the 2D depth-wise convolution
padding_dwconv (int) – Padding for the 2D depth-wise convolution
bias_dwconv (bool) – Whether the 2D depth-wise convolution uses a bias term
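The combination of linear layers with an optional depth-wise convolution is the characteristic PVT feed-forward pattern: tokens are reshaped back to their spatial layout so a grouped convolution can mix local neighborhoods. A minimal sketch of that pattern in plain PyTorch (not vformer's exact implementation; class name and defaults here are illustrative):

```python
import torch
import torch.nn as nn

class FeedForwardSketch(nn.Module):
    """Hypothetical sketch of a PVT-style feed-forward block."""

    def __init__(self, dim, hidden_dim=None, out_dim=None, p_dropout=0.0,
                 use_dwconv=False, kernel_size_dwconv=3, stride_dwconv=1,
                 padding_dwconv=1, bias_dwconv=True):
        super().__init__()
        hidden_dim = hidden_dim or dim
        out_dim = out_dim or dim
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.use_dwconv = use_dwconv
        if use_dwconv:
            # groups=hidden_dim makes this a depth-wise convolution:
            # each channel is convolved independently.
            self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size_dwconv,
                                    stride_dwconv, padding_dwconv,
                                    groups=hidden_dim, bias=bias_dwconv)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, out_dim)
        self.drop = nn.Dropout(p_dropout)

    def forward(self, x, H, W):
        # x: (batch, N, dim) patch tokens with N == H * W
        x = self.fc1(x)
        if self.use_dwconv:
            B, N, C = x.shape
            x = x.transpose(1, 2).reshape(B, C, H, W)  # tokens -> image layout
            x = self.dwconv(x)
            x = x.flatten(2).transpose(1, 2)           # image -> token layout
        x = self.drop(self.act(x))
        return self.drop(self.fc2(x))
```

Note that the forward pass needs the spatial extent `(H, W)` of the token grid so the depth-wise convolution can operate on the 2D layout; with `stride_dwconv=1` and `padding_dwconv=1` for a 3x3 kernel, the token count is preserved.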