Perceiver IO
- class vformer.models.classification.perceiver_io.PerceiverIO(dim=32, depth=6, latent_dim=512, num_latents=512, num_cross_heads=1, num_latent_heads=8, cross_head_dim=64, latent_head_dim=64, queries_dim=32, logits_dim=None, decoder_ff=False)[source]
Bases: Module
Implementation of ‘Perceiver IO: A General Architecture for Structured Inputs & Outputs’ https://arxiv.org/abs/2107.14795
Code implementation based on: https://github.com/lucidrains/perceiver-pytorch
- Parameters
dim (int) – Dimension of the input sequence to be encoded
depth (int) – Depth of latent attention blocks
latent_dim (int) – Dimension of latent array
num_latents (int) – Number of latent arrays
num_cross_heads (int) – Number of heads for cross attention
num_latent_heads (int) – Number of heads for latent attention
cross_head_dim (int) – Dimension of cross attention head
latent_head_dim (int) – Dimension of latent attention head
queries_dim (int) – Dimension of queries array
logits_dim (int, optional) – Dimension of output logits
decoder_ff (bool) – Whether to include a feed-forward layer after the decoder attention block
- forward(x, queries)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
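To make the encode–process–decode structure behind this class concrete, the following is a minimal shape-level sketch of the Perceiver IO data flow in NumPy: a latent array cross-attends to the input, latent self-attention is applied (repeated `depth` times in the real model), and output queries cross-attend to the latents. All sizes and the random projection matrices standing in for learned weights are hypothetical and chosen smaller than the constructor defaults for brevity; this is not the vformer implementation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # scaled dot-product attention
    # q: (b, m, d), k: (b, n, d), v: (b, n, dv) -> (b, m, dv)
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# hypothetical sizes echoing the constructor arguments
b, n, dim = 2, 64, 32              # batch, input length, dim
num_latents, latent_dim = 16, 64   # latent array
num_queries, queries_dim = 10, 32  # queries array
head_dim = 64                      # single head for simplicity

x = rng.standard_normal((b, n, dim))
latents = rng.standard_normal((num_latents, latent_dim))  # learned in the real model
queries = rng.standard_normal((b, num_queries, queries_dim))

# 1) encode: latents cross-attend to the input -> (b, num_latents, latent_dim)
Wq_enc = rng.standard_normal((latent_dim, head_dim))
Wk_enc = rng.standard_normal((dim, head_dim))
Wv_enc = rng.standard_normal((dim, latent_dim))
lat = np.broadcast_to(latents, (b, num_latents, latent_dim))
lat = attend(lat @ Wq_enc, x @ Wk_enc, x @ Wv_enc)

# 2) process: latent self-attention (one block here; `depth` blocks in the model)
Wq_l = rng.standard_normal((latent_dim, head_dim))
Wk_l = rng.standard_normal((latent_dim, head_dim))
Wv_l = rng.standard_normal((latent_dim, latent_dim))
lat = attend(lat @ Wq_l, lat @ Wk_l, lat @ Wv_l)

# 3) decode: output queries cross-attend to the latents
Wq_dec = rng.standard_normal((queries_dim, head_dim))
Wk_dec = rng.standard_normal((latent_dim, head_dim))
Wv_dec = rng.standard_normal((latent_dim, queries_dim))
out = attend(queries @ Wq_dec, lat @ Wk_dec, lat @ Wv_dec)

print(out.shape)  # (2, 10, 32)
```

Note how the cost of the latent self-attention blocks depends only on `num_latents`, not on the input length `n`; this latent bottleneck is what lets Perceiver IO scale to large, structured inputs and outputs.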