Blocks

class vformer.common.blocks.DWConv(dim, kernel_size_dwconv=3, stride_dwconv=1, padding_dwconv=1, bias_dwconv=True)[source]

Depth-wise Convolution

Parameters
  • dim (int) – Dimension of the input tensor

  • kernel_size_dwconv (int, optional) – Size of the convolution kernel, default is 3

  • stride_dwconv (int, optional) – Stride of the convolution, default is 1

  • padding_dwconv (int or tuple or str, optional) – Padding added to all sides of the input, default is 1

  • bias_dwconv (bool, optional) – Whether to add a learnable bias to the output, default is True.
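
For orientation, below is a minimal sketch of how a depth-wise convolution block with these parameters is commonly assembled in PyTorch, using nn.Conv2d with groups=dim so that each channel is convolved with its own filter. This is an illustrative assumption, not the vformer source; the class name DWConvSketch and the assumed (batch, H*W, dim) token layout are hypothetical.

    import torch
    import torch.nn as nn


    class DWConvSketch(nn.Module):
        """Illustrative depth-wise convolution block; not the vformer implementation."""

        def __init__(self, dim, kernel_size_dwconv=3, stride_dwconv=1,
                     padding_dwconv=1, bias_dwconv=True):
            super().__init__()
            # groups=dim gives one filter per input channel, i.e. a depth-wise convolution.
            self.dwconv = nn.Conv2d(
                dim, dim,
                kernel_size=kernel_size_dwconv,
                stride=stride_dwconv,
                padding=padding_dwconv,
                bias=bias_dwconv,
                groups=dim,
            )

        def forward(self, x, H, W):
            # Assumed token layout: (batch, H*W, dim). Reshape to an H x W grid,
            # convolve channel-wise, then flatten back to a token sequence.
            B, N, C = x.shape
            x = x.transpose(1, 2).reshape(B, C, H, W)
            x = self.dwconv(x)
            x = x.flatten(2).transpose(1, 2)
            return x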

forward(x, H, W)[source]
Parameters
  • x (torch.Tensor) – Input tensor

  • H (int) – Height of image patch

  • W (int) – Width of image patch

Returns

Output tensor after applying the depth-wise convolution

Return type

torch.Tensor
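
A hedged usage example follows, assuming the input is a patch-token sequence of shape (batch, H*W, dim) so that it can be rearranged into an H x W grid internally; the documentation above only states that x is a torch.Tensor, so this layout is an assumption. The output shape in the comment assumes the default kernel size, stride, and padding, which preserve the spatial resolution.

    import torch
    from vformer.common.blocks import DWConv

    H, W, dim = 14, 14, 64
    block = DWConv(dim=dim, kernel_size_dwconv=3, stride_dwconv=1,
                   padding_dwconv=1, bias_dwconv=True)

    # Assumption: x holds H*W patch tokens of dimension `dim` per batch element.
    x = torch.randn(2, H * W, dim)
    out = block(x, H, W)
    print(out.shape)  # expected: torch.Size([2, 196, 64]) with the default settings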