mlp

class MLP(in_channels: int, hidden_channels: List[int], norm_layer: Callable[..., torch.nn.Module] | None = None, activation_layer: Callable[..., torch.nn.Module] | None = torch.nn.ReLU, inplace: bool | None = None, bias: bool = True, dropout: float = 0.0, disable_norm_last_layer: bool = False, disable_activation_last_layer: bool = False, disable_dropout_last_layer: bool = False)[source]

This block implements the multi-layer perceptron (MLP) module.

Note

Adapted from torchvision.ops.MLP, with two differences: the norm, activation, and dropout layers after the last linear layer can each be disabled via the disable_*_last_layer flags, and no dropout layers are added at all if the dropout probability is 0.0.
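
For example, leaving dropout at its default of 0.0 should produce a module containing no Dropout submodules at all (a minimal sketch; it assumes MLP has been imported from this module):

    import torch.nn as nn

    mlp = MLP(in_channels=16, hidden_channels=[32, 8])  # dropout defaults to 0.0
    # Unlike torchvision.ops.MLP, no nn.Dropout layer is registered when p == 0.0.
    assert not any(isinstance(m, nn.Dropout) for m in mlp.modules())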

Parameters:
  • in_channels (int) – Number of channels of the input

  • hidden_channels (List[int]) – List of the hidden channel dimensions

  • norm_layer (Callable[..., torch.nn.Module], optional) – Norm layer that will be stacked on top of the linear layer. If None this layer won’t be used. Default: None

  • activation_layer (Callable[..., torch.nn.Module], optional) – Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the linear layer. If None this layer won’t be used. Default: torch.nn.ReLU

  • inplace (bool, optional) – Parameter for the activation layer, which can optionally do the operation in-place. Default is None, which uses the respective default values of the activation_layer and Dropout layer.

  • bias (bool) – Whether to use bias in the linear layer. Default: True

  • dropout (float) – The probability for the dropout layer. Default: 0.0

  • disable_norm_last_layer (bool) – Whether to disable the norm layer after the last linear layer. Default: False

  • disable_activation_last_layer (bool) – Whether to disable the activation layer after the last linear layer. Default: False

  • disable_dropout_last_layer (bool) – Whether to disable the dropout layer after the last linear layer. Default: False
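
Example

A minimal construction sketch. The commented import path is a placeholder (assume MLP is importable from this module); everything else follows the signature above:

    from torch import nn
    # from your_package.mlp import MLP  # placeholder import path

    mlp = MLP(
        in_channels=32,
        hidden_channels=[64, 64, 10],  # two hidden layers, then a 10-dim output
        norm_layer=nn.BatchNorm1d,
        activation_layer=nn.ReLU,
        dropout=0.1,
        disable_norm_last_layer=True,        # keep the output layer a plain
        disable_activation_last_layer=True,  # linear map: no norm, activation,
        disable_dropout_last_layer=True,     # or dropout after it
    )

Following torchvision.ops.MLP, the last entry of hidden_channels acts as the output dimension.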

__init__(in_channels: int, hidden_channels: List[int], norm_layer: Callable[..., torch.nn.Module] | None = None, activation_layer: Callable[..., torch.nn.Module] | None = torch.nn.ReLU, inplace: bool | None = None, bias: bool = True, dropout: float = 0.0, disable_norm_last_layer: bool = False, disable_activation_last_layer: bool = False, disable_dropout_last_layer: bool = False)[source]

Initializes the MLP module.

forward(x: Tensor, batch: Tensor = None) → Tensor[source]

Calculates the forward pass of the module.

Parameters:
  • x (Tensor) – The input tensor.

  • batch (Tensor, optional) – The optional batch tensor. Default: None

Returns:

The output tensor.

Return type:

torch.Tensor
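
Example

A forward-pass sketch, reusing the assumed import from above; batch is left at its default of None:

    import torch

    mlp = MLP(in_channels=32, hidden_channels=[64, 10])
    x = torch.randn(8, 32)  # batch of 8 samples with 32 input channels
    out = mlp(x)            # torch.Tensor of shape (8, 10)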