mlp
- class MLP(in_channels: int, hidden_channels: List[int], norm_layer: Callable[..., torch.nn.Module] | None = None, activation_layer: Callable[..., torch.nn.Module] | None = torch.nn.ReLU, inplace: bool | None = None, bias: bool = True, dropout: float = 0.0, disable_norm_last_layer: bool = False, disable_activation_last_layer: bool = False, disable_dropout_last_layer: bool = False)[source]
This block implements the multi-layer perceptron (MLP) module.
Note
Adapted from torchvision.ops.MLP. The differences are the options to disable the norm, activation, and dropout layers after the last linear layer, and the fact that no dropout layers are added if the dropout probability is 0.0.
- Parameters:
  - in_channels (int) – Number of channels of the input.
  - hidden_channels (List[int]) – List of the hidden channel dimensions.
  - norm_layer (Callable[..., torch.nn.Module], optional) – Norm layer that will be stacked on top of the linear layer. If None, this layer won't be used. Default: None
  - activation_layer (Callable[..., torch.nn.Module], optional) – Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the linear layer. If None, this layer won't be used. Default: torch.nn.ReLU
  - inplace (bool, optional) – Parameter for the activation layer, which can optionally do the operation in-place. Default is None, which uses the respective default values of the activation_layer and the dropout layer.
  - bias (bool) – Whether to use bias in the linear layer. Default: True
  - dropout (float) – The probability for the dropout layer. Default: 0.0
  - disable_norm_last_layer (bool) – Whether to disable the norm layer after the last linear layer. Default: False
  - disable_activation_last_layer (bool) – Whether to disable the activation layer after the last linear layer. Default: False
  - disable_dropout_last_layer (bool) – Whether to disable the dropout layer after the last linear layer. Default: False
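A minimal construction sketch based on the signature above. The import path is an assumption, not given by these docs; adjust it to wherever this MLP class lives in your package.

```python
import torch
from torch import nn
# from your_package.models import MLP  # hypothetical import path; replace with the real one

mlp = MLP(
    in_channels=16,
    hidden_channels=[64, 64, 8],         # three linear layers: 16 -> 64 -> 64 -> 8
    norm_layer=nn.BatchNorm1d,           # set to None to skip normalization entirely
    activation_layer=nn.ReLU,
    dropout=0.1,                         # dropout layers are only added because this is > 0.0
    disable_norm_last_layer=True,        # no norm after the final linear layer
    disable_activation_last_layer=True,  # final layer emits raw (pre-activation) outputs
    disable_dropout_last_layer=True,     # no dropout on the output
)
```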
- __init__(in_channels: int, hidden_channels: List[int], norm_layer: Callable[..., torch.nn.Module] | None = None, activation_layer: Callable[..., torch.nn.Module] | None = torch.nn.ReLU, inplace: bool | None = None, bias: bool = True, dropout: float = 0.0, disable_norm_last_layer: bool = False, disable_activation_last_layer: bool = False, disable_dropout_last_layer: bool = False)[source]
Initializes the MLP module.
- forward(x: Tensor, batch: Tensor = None) → Tensor[source]
Calculates the forward pass of the module.
- Parameters:
  - x (torch.Tensor) – The input tensor.
  - batch (torch.Tensor, optional) – Batch tensor. Default: None
- Returns:
  The output tensor.
- Return type:
  torch.Tensor
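A short forward-pass sketch, reusing the `mlp` instance from the construction example above (the input width 16 and output width 8 come from that sketch and are assumptions, not values prescribed by the class).

```python
import torch

x = torch.randn(32, 16)  # 32 samples with 16 input channels each
out = mlp(x)             # `batch` defaults to None; expected shape: (32, 8)
```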