Compact Vision Transformer

class vformer.models.classification.cvt.CVT(img_size=224, patch_size=4, in_channels=3, seq_pool=True, embedding_dim=768, head_dim=96, num_layers=1, num_heads=1, mlp_ratio=4.0, n_classes=1000, p_dropout=0.1, attn_dropout=0.1, drop_path=0.1, positional_embedding='learnable', decoder_config=(768, 1024))

Implementation of Escaping the Big Data Paradigm with Compact Transformers: https://arxiv.org/abs/2104.05704

img_size: int

Size of the image, default is 224

patch_size: int

Size of a single patch in the image, default is 4

in_channels: int

Number of input channels in the image, default is 3

seq_pool: bool

Whether to use sequence pooling, default is True

embedding_dim: int

Patch embedding dimension, default is 768

head_dim: int

Dimension of each attention head, default is 96

num_layers: int

Number of encoder layers in the encoder block, default is 1

num_heads: int

Number of heads in each transformer layer, default is 1

mlp_ratio: float

Ratio of the MLP hidden dimension to the embedding dimension, default is 4.0

n_classes: int

Number of classes for classification, default is 1000

p_dropout: float

Dropout probability, default is 0.1

attn_dropout: float

Dropout probability in the attention layers, default is 0.1

drop_path: float

Stochastic depth rate, default is 0.1

positional_embedding: str

One of the string values {'learnable', 'sine', 'None'}, default is 'learnable'

decoder_config: tuple(int) or int

Configuration of the decoder. If None, the default configuration is used.
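A minimal construction sketch, assuming vformer is installed and the import path matches the class shown above (the 32x32 CIFAR-style values are illustrative, not recommended settings):

>>> from vformer.models.classification.cvt import CVT
>>> # Default configuration, as given in the signature above
>>> model = CVT()
>>> # Override selected arguments, keeping the rest at their defaults
>>> cifar_model = CVT(img_size=32, patch_size=4, n_classes=10)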

forward(x)
Parameters

x (torch.Tensor) – Input tensor

Returns

Returns a tensor of size n_classes

Return type

torch.Tensor
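
A hedged forward-pass sketch following the parameters documented above; the batch size of 2 is arbitrary, and the expected output shape assumes one score per class for each image in the batch:

>>> import torch
>>> from vformer.models.classification.cvt import CVT
>>> model = CVT(n_classes=10)        # all other arguments keep their defaults
>>> x = torch.randn(2, 3, 224, 224)  # dummy batch: (batch_size, in_channels, img_size, img_size)
>>> out = model(x)                   # out should have shape (2, 10)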