NeuroTorch

neurotorch.modules.layers package

Submodules

neurotorch.modules.layers.base module

class neurotorch.modules.layers.base.BaseLayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, device: device | None = None, **kwargs)

Bases: SizedModule

Base class for all layers.

Attributes:
  • input_size (Optional[Dimension]): The input size of the layer.

  • output_size (Optional[Dimension]): The output size of the layer.

  • name (str): The name of the layer.

  • kwargs (dict): Additional keyword arguments.

__call__(inputs: Tensor, state: Tensor | None = None, *args, **kwargs)

Call the forward method of the layer. If the layer is not built, it will be built automatically. In addition, if kwargs['regularize'] is set to True, the update_regularization_loss method will be called.

Parameters:
  • inputs (torch.Tensor) – The inputs to the layer.

  • state (Optional[torch.Tensor]) – The initial state of the layer. Defaults to None.

  • args – The positional arguments to the forward method.

  • kwargs – The keyword arguments to the forward method.

Returns:

The output of the layer.
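
For instance, a minimal usage sketch (the concrete subclass, sizes and tensor shapes below are illustrative assumptions):

    import torch
    from neurotorch.modules.layers.spiking import LIFLayer

    # Hypothetical sizes; the layer is built automatically on the first call.
    layer = LIFLayer(input_size=10, output_size=32)
    x = torch.rand(8, 10)  # (batch_size, input_size)
    out = layer(x)         # builds the layer if needed, then runs forward()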

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, device: device | None = None, **kwargs)

Constructor of the BaseLayer class.

Parameters:
  • input_size (Optional[SizeTypes]) – The input size of the layer.

  • output_size (Optional[SizeTypes]) – The output size of the layer.

  • name (Optional[str]) – The name of the layer.

  • learning_type (LearningType) – The learning type of the layer. Deprecated: use freeze_weights instead.

  • device (Optional[torch.device]) – The device of the layer. Defaults to the current available device.

  • kwargs – Additional keyword arguments.

Keyword Arguments:
  • regularize (bool) – Whether to regularize the layer. If True, the method update_regularization_loss will be called after each forward pass. Defaults to False.

  • freeze_weights (bool) – Whether to freeze the weights of the layer. Defaults to False.

build() BaseLayer

Build the layer. This method must be called after the layer is initialized to make sure that the layer is ready to be used, e.g. the input and output sizes are set, the weights are initialized, etc.

Returns:

The layer itself.

Return type:

BaseLayer

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]

Create an empty state for the layer. This method must be implemented by the child class.

Parameters:

batch_size (int) – The batch size of the state.

Returns:

The empty state.

Return type:

Tuple[torch.Tensor, …]

property device
forward(inputs: Tensor, state: Tensor | None = None, **kwargs) Tuple[Tensor, Tensor | None]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property freeze_weights: bool
get_and_reset_regularization_loss()

Get and reset the regularization loss for this layer. The regularization loss will be reset by the reset_regularization_loss method after it is returned.

WARNING: If this method is not called after an integration, the update of the regularization loss can cause a memory leak. TODO: fix this.

Returns:

The regularization loss.
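
A sketch of how this might be used in a training loop (the regularize flag, loss and optimizer choices are illustrative assumptions):

    import torch
    from neurotorch.modules.layers.spiking import LIFLayer

    layer = LIFLayer(input_size=10, output_size=32, regularize=True).build()
    optimizer = torch.optim.Adam(layer.parameters(), lr=1e-3)

    x, target = torch.rand(8, 10), torch.rand(8, 32)
    out = layer(x)  # update_regularization_loss is called after the forward pass
    task_loss = torch.nn.functional.mse_loss(out, target)

    # Retrieve the accumulated regularization loss and reset it so it does not
    # keep growing across iterations (see the warning above).
    reg_loss = layer.get_and_reset_regularization_loss()
    (task_loss + reg_loss).backward()
    optimizer.step()
    optimizer.zero_grad()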

get_regularization_loss() Tensor

Get the regularization loss for this layer.

Returns:

The regularization loss.

infer_sizes_from_inputs(inputs: Tensor)

Try to infer the input and output size of the layer from the inputs.

Parameters:

inputs (torch.Tensor) – The inputs to infer the size from.

Returns:

None

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

property is_built: bool
property is_ready_to_build: bool
property requires_grad
reset_regularization_loss()

Reset the regularization loss to zero.

Returns:

None

to(device: device, non_blocking: bool = True, *args, **kwargs)

Move all the parameters of the layer to the specified device.

Parameters:
  • device (torch.device) – The device to move the parameters to.

  • non_blocking (bool) – Whether to move the parameters in a non-blocking way.

  • args – Additional positional arguments.

  • kwargs – Additional keyword arguments.

Returns:

self

training: bool
update_regularization_loss(state: Any | None = None, *args, **kwargs) Tensor

Update the regularization loss for this layer. Each update call increments the regularization loss so at the end the regularization loss will be the sum of all calls to this function. This method is called at the end of each forward call automatically by the BaseLayer class.

Parameters:
  • state (Optional[Any]) – The current state of the layer.

  • args – Other positional arguments.

  • kwargs – Other keyword arguments.

Returns:

The updated regularization loss.

Return type:

torch.Tensor

class neurotorch.modules.layers.base.BaseNeuronsLayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: BaseLayer

A base class for layers that have neurons. This class provides two important parameters: the forward_weights and the recurrent_weights. Child classes must implement the forward method and the create_empty_state method.

Attributes:
  • forward_weights (torch.nn.Parameter): The weights used to compute the output of the layer.

  • recurrent_weights (torch.nn.Parameter): The weights used to compute the hidden state of the layer.

  • dt (float): The time step of the layer.

  • use_rec_eye_mask (torch.Tensor): Whether to use the recurrent eye mask.

  • rec_mask (torch.Tensor): The recurrent eye mask.

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Initialize the layer. See the BaseLayer class for more details.

Parameters:
  • input_size (Optional[SizeTypes]) – The input size of the layer.

  • output_size (Optional[SizeTypes]) – The output size of the layer.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use a recurrent connection. Default is True.

  • use_rec_eye_mask (bool) – Whether to use a recurrent eye mask. Default is False. This mask is used to set the diagonal of the recurrent connection matrix to zero.

  • learning_type (LearningType) – The learning type of the layer. Default is BPTT.

  • dt (float) – The time step of the layer. Default is 1e-3.

  • kwargs – Other keyword arguments.

Keyword Arguments:
  • regularize (bool) – Whether to regularize the layer. If True, the method update_regularization_loss will be called after each forward pass. Defaults to False.

  • hh_init (str) – The initialization method for the hidden state. Defaults to “zeros”.

  • hh_init_mu (float) – The mean of the hidden state initialization when hh_init is random. Defaults to 0.0.

  • hh_init_std (float) – The standard deviation of the hidden state initialization when hh_init is random. Defaults to 1.0.

  • hh_init_seed (int) – The seed of the hidden state initialization when hh_init is random. Defaults to 0.

  • force_dale_law (bool) – Whether to enforce Dale’s law on the layer’s weights. Defaults to False. See the sketch after this list.

  • forward_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the forward_sign vector. If it is a float, the forward_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the forward_sign vector.

  • recurrent_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the recurrent_sign vector. If it is a float, the recurrent_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the recurrent_sign vector.

  • sign_activation (Callable) – The activation function used to compute the sign of the weights i.e. the forward_sign and recurrent_sign vectors. Defaults to torch.nn.Tanh.
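
A construction sketch with Dale’s law enforced (the concrete subclass and the 0.2 ratio are illustrative assumptions):

    from neurotorch.modules.layers.spiking import LIFLayer

    # A float forward_sign/recurrent_sign is interpreted as the ratio of
    # inhibitory neurons; a tensor would be used directly as the sign vector.
    layer = LIFLayer(
        input_size=100,
        output_size=100,
        use_recurrent_connection=True,
        force_dale_law=True,
        forward_sign=0.2,
        recurrent_sign=0.2,
    )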

build() BaseNeuronsLayer

Build the layer. This method must be called after the layer is initialized to make sure that the layer is ready to be used, e.g. the input and output sizes are set, the weights are initialized, etc.

In this method the forward_weights, recurrent_weights and rec_mask are created, and finally the method initialize_weights_() is called.

Returns:

The layer itself.

Return type:

BaseLayer

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]

Create an empty state for the layer. This method must be implemented by the child class.

Parameters:

batch_size (int) – The batch size of the state.

Returns:

The empty state.

Return type:

Tuple[torch.Tensor, …]

property force_dale_law: bool

Get whether Dale’s law is enforced.

Returns:

Whether Dale’s law is enforced.

forward(inputs: Tensor, state: Tensor | None = None, **kwargs) Tuple[Tensor, Tensor | None]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property forward_sign: Parameter | None

Get the forward sign.

Returns:

The forward sign.

property forward_weights: Parameter

Get the forward weights.

Returns:

The forward weights.

get_forward_weights_data() Tensor

Get the forward weights data.

Returns:

The forward weights data.

get_recurrent_weights_data() Tensor

Get the recurrent weights data.

Returns:

The recurrent weights data.

get_sign_parameters() List[Parameter]

Get the sign parameters.

Returns:

The sign parameters.

get_weights_parameters() List[Parameter]

Get the weights parameters.

Returns:

The weights parameters.
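
For example, the weight and sign parameters could be placed in separate optimizer groups (a sketch; the layer choice and learning rates are assumptions):

    import torch
    from neurotorch.modules.layers.spiking import LIFLayer

    layer = LIFLayer(input_size=10, output_size=32, force_dale_law=True).build()
    optimizer = torch.optim.Adam([
        {"params": layer.get_weights_parameters(), "lr": 1e-3},
        {"params": layer.get_sign_parameters(), "lr": 1e-4},
    ])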

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

property recurrent_sign: Parameter | None

Get the recurrent sign.

Returns:

The recurrent sign.

property recurrent_weights: Parameter

Get the recurrent weights.

Returns:

The recurrent weights.

set_forward_weights_data(data: Tensor)

Set the forward weights data.

Parameters:

data – The forward weights data.

set_recurrent_weights_data(data: Tensor)

Set the recurrent weights data.

Parameters:

data – The recurrent weights data.

training: bool

neurotorch.modules.layers.classical module

class neurotorch.modules.layers.classical.Linear(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, device: device | None = None, **kwargs)

Bases: BaseNeuronsLayer

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, device: device | None = None, **kwargs)

Initialize the layer. See the BaseLayer class for more details.

Parameters:
  • input_size (Optional[SizeTypes]) – The input size of the layer.

  • output_size (Optional[SizeTypes]) – The output size of the layer.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use a recurrent connection. Default is True.

  • use_rec_eye_mask (bool) – Whether to use a recurrent eye mask. Default is False. This mask is used to set the diagonal of the recurrent connection matrix to zero.

  • learning_type (LearningType) – The learning type of the layer. Default is BPTT.

  • dt (float) – The time step of the layer. Default is 1e-3.

  • kwargs – Other keyword arguments.

Keyword Arguments:
  • regularize (bool) – Whether to regularize the layer. If True, the method update_regularization_loss will be called after each forward pass. Defaults to False.

  • hh_init (str) – The initialization method for the hidden state. Defaults to “zeros”.

  • hh_init_mu (float) – The mean of the hidden state initialization when hh_init is random. Defaults to 0.0.

  • hh_init_std (float) – The standard deviation of the hidden state initialization when hh_init is random. Defaults to 1.0.

  • hh_init_seed (int) – The seed of the hidden state initialization when hh_init is random. Defaults to 0.

  • force_dale_law (bool) – Whether to enforce Dale’s law on the layer’s weights. Defaults to False.

  • forward_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the forward_sign vector. If it is a float, the forward_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the forward_sign vector.

  • recurrent_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the recurrent_sign vector. If it is a float, the recurrent_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the recurrent_sign vector.

  • sign_activation (Callable) – The activation function used to compute the sign of the weights i.e. the forward_sign and recurrent_sign vectors. Defaults to torch.nn.Tanh.
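
A minimal sketch of a Linear readout layer (sizes are illustrative):

    import torch
    from neurotorch.modules.layers.classical import Linear

    readout = Linear(input_size=128, output_size=10)
    x = torch.rand(8, 128)
    y = readout(x)  # built automatically on the first call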

build() Linear

Build the layer. This method must be called after the layer is initialized to make sure that the layer is ready to be used, e.g. the input and output sizes are set, the weights are initialized, etc.

In this method the forward_weights, recurrent_weights and rec_mask are created, and finally the method initialize_weights_() is called.

Returns:

The layer itself.

Return type:

BaseLayer

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]

Create an empty state for the layer. This method must be implemented by the child class.

Parameters:

batch_size (int) – The batch size of the state.

Returns:

The empty state.

Return type:

Tuple[torch.Tensor, …]

extra_repr()

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

training: bool

neurotorch.modules.layers.leaky_integrate module

class neurotorch.modules.layers.leaky_integrate.LILayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: BaseNeuronsLayer

The integration in time of these dynamics is done using equation (1), inspired by Bellec et al. [BSS+20].

(1)\[\begin{equation} V_j^{t+\Delta t} = \kappa V_j^{t} + \sum_{i}^N W_{ij}x_i^{t+\Delta t} + b_j \end{equation}\]
(2)\[\begin{equation} \kappa = e^{-\frac{\Delta t}{\tau_{\text{mem}}}} \end{equation}\]

The parameters of the equation (1) are:

  • \(N\) is the number of neurons in the layer.

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(\kappa\) is the decay constant of the synaptic current over time (equation (2)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

Attributes:
  • bias_weights (torch.nn.Parameter): Bias weights of the layer.

  • kappa (torch.nn.Parameter): Decay constant of the synaptic current over time, see equation (2).
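
As a quick illustration of equations (1) and (2), one integration step could be written directly as follows (a sketch with made-up tensors, not the layer's actual implementation):

    import torch

    dt, tau_mem = 1e-3, 20e-3
    kappa = torch.exp(torch.tensor(-dt / tau_mem))  # equation (2)

    batch_size, n_in, n_out = 8, 10, 32
    W = torch.rand(n_in, n_out)   # forward weights W_ij
    b = torch.zeros(n_out)        # bias b_j
    x = torch.rand(batch_size, n_in)
    V = torch.zeros(batch_size, n_out)

    # Equation (1): leaky integration of the weighted inputs.
    V = kappa * V + x @ W + b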

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, dt: float = 0.001, device: device | None = None, **kwargs)

Initialize the layer. See the BaseLayer class for more details.

Parameters:
  • input_size (Optional[SizeTypes]) – The input size of the layer.

  • output_size (Optional[SizeTypes]) – The output size of the layer.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use a recurrent connection. Default is True.

  • use_rec_eye_mask (bool) – Whether to use a recurrent eye mask. Default is False. This mask is used to set the diagonal of the recurrent connection matrix to zero.

  • learning_type (LearningType) – The learning type of the layer. Default is BPTT.

  • dt (float) – The time step of the layer. Default is 1e-3.

  • kwargs – Other keyword arguments.

Keyword Arguments:
  • regularize (bool) – Whether to regularize the layer. If True, the method update_regularization_loss will be called after each forward pass. Defaults to False.

  • hh_init (str) – The initialization method for the hidden state. Defaults to “zeros”.

  • hh_init_mu (float) – The mean of the hidden state initialization when hh_init is random. Defaults to 0.0.

  • hh_init_std (float) – The standard deviation of the hidden state initialization when hh_init is random. Defaults to 1.0.

  • hh_init_seed (int) – The seed of the hidden state initialization when hh_init is random. Defaults to 0.

  • force_dale_law (bool) – Whether to enforce Dale’s law on the layer’s weights. Defaults to False.

  • forward_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the forward_sign vector. If it is a float, the forward_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the forward_sign vector.

  • recurrent_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the recurrent_sign vector. If it is a float, the recurrent_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the recurrent_sign vector.

  • sign_activation (Callable) – The activation function used to compute the sign of the weights i.e. the forward_sign and recurrent_sign vectors. Defaults to torch.nn.Tanh.

build() LILayer

Build the layer. This method must be called after the layer is initialized to make sure that the layer is ready to be used, e.g. the input and output sizes are set, the weights are initialized, etc.

In this method the forward_weights, recurrent_weights and rec_mask are created, and finally the method initialize_weights_() is called.

Returns:

The layer itself.

Return type:

BaseLayer

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

[membrane potential of shape (batch_size, self.output_size)]

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

training: bool
class neurotorch.modules.layers.leaky_integrate.SpyLILayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: BaseNeuronsLayer

The SpyLI dynamics is a more complex variant of the LI dynamics (class LILayer), allowing it to have a greater power of expression. This variant is also inspired by Neftci et al. [NMZ19] and also contains two differential equations, like the SpyLIF dynamics (SpyLIFLayer). Equation (3) presents the synaptic current update with Euler integration, while equation (4) presents the synaptic potential update.

(3)\[\begin{equation} I_{\text{syn}, j}^{t+\Delta t} = \alpha I_{\text{syn}, j}^{t} + \sum_{i}^{N} W_{ij}^{\text{rec}} I_{\text{syn}, j}^{t} + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t} \end{equation}\]
(4)\[\begin{equation} V_j^{t+\Delta t} = \beta V_j^t + I_{\text{syn}, j}^{t+\Delta t} + b_j \end{equation}\]
(5)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_{\text{syn}}}} \end{equation}\]

with \(\tau_{\text{syn}}\) being the decay time constant of the synaptic current.

(6)\[\begin{equation} \beta = e^{-\frac{\Delta t}{\tau_{\text{mem}}}} \end{equation}\]

with \(\tau_{\text{mem}}\) being the decay time constant of the membrane potential.

SpyTorch library: https://github.com/surrogate-gradient-learning/spytorch.

The variables of the equations (3) and (4) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(I_{\text{syn}, j}^{t}\) is the synaptic current of neuron \(j\) at time \(t\).

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(\alpha\) is the decay constant of the synaptic current over time (equation (5)).

  • \(\beta\) is the decay constant of the membrane potential over time (equation (6)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

Attributes:
  • alpha (torch.nn.Parameter): Decay constant of the synaptic current over time (equation (5)).

  • beta (torch.nn.Parameter): Decay constant of the membrane potential over time (equation (6)).

  • gamma (torch.nn.Parameter): Slope of the Heaviside function (\(\gamma\)).
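
The two decay constants of equations (5) and (6) follow directly from the time constants; a sketch (the tau values below mirror the defaults documented for the Spy layers later in this page and are assumptions here):

    import numpy as np

    dt = 1e-3
    tau_syn, tau_mem = 5.0 * dt, 10.0 * dt
    alpha = np.exp(-dt / tau_syn)  # equation (5): synaptic current decay
    beta = np.exp(-dt / tau_mem)   # equation (6): membrane potential decay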

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, dt: float = 0.001, device: device | None = None, **kwargs)

Initialize the layer. See the BaseLayer class for more details.

Parameters:
  • input_size (Optional[SizeTypes]) – The input size of the layer.

  • output_size (Optional[SizeTypes]) – The output size of the layer.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use a recurrent connection. Default is True.

  • use_rec_eye_mask (bool) – Whether to use a recurrent eye mask. Default is False. This mask is used to set the diagonal of the recurrent connection matrix to zero.

  • learning_type (LearningType) – The learning type of the layer. Default is BPTT.

  • dt (float) – The time step of the layer. Default is 1e-3.

  • kwargs – Other keyword arguments.

Keyword Arguments:
  • regularize (bool) – Whether to regularize the layer. If True, the method update_regularization_loss will be called after each forward pass. Defaults to False.

  • hh_init (str) – The initialization method for the hidden state. Defaults to “zeros”.

  • hh_init_mu (float) – The mean of the hidden state initialization when hh_init is random. Defaults to 0.0.

  • hh_init_std (float) – The standard deviation of the hidden state initialization when hh_init is random. Defaults to 1.0.

  • hh_init_seed (int) – The seed of the hidden state initialization when hh_init is random. Defaults to 0.

  • force_dale_law (bool) – Whether to enforce Dale’s law on the layer’s weights. Defaults to False.

  • forward_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the forward_sign vector. If it is a float, the forward_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the forward_sign vector.

  • recurrent_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the recurrent_sign vector. If it is a float, the recurrent_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the recurrent_sign vector.

  • sign_activation (Callable) – The activation function used to compute the sign of the weights i.e. the forward_sign and recurrent_sign vectors. Defaults to torch.nn.Tanh.

build() SpyLILayer

Build the layer. This method must be called after the layer is initialized to make sure that the layer is ready to be used, e.g. the input and output sizes are set, the weights are initialized, etc.

In this method the forward_weights, recurrent_weights and rec_mask are created, and finally the method initialize_weights_() is called.

Returns:

The layer itself.

Return type:

BaseLayer

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

[membrane potential of shape (batch_size, self.output_size), synaptic current of shape (batch_size, self.output_size)]

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

training: bool

neurotorch.modules.layers.spiking module

class neurotorch.modules.layers.spiking.ALIFLayer(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisideSigmoidApprox'>, dt: float = 0.001, device: ~torch.device | None = None, **kwargs)

Bases: LIFLayer

The ALIF dynamics, inspired by Bellec et al. [BSS+20], is very similar to the LIF dynamics (class LIFLayer). In fact, ALIF has exactly the same potential update equation as LIF. The difference comes from the fact that the threshold potential varies with time and neuron input. Indeed, the threshold is increased at each output pulse and is then decreased at a certain rate in order to come back to its starting threshold \(V_{\text{th}}\). The threshold equation from LIFLayer is thus slightly modified by changing \(V_{\text{th}} \to A_j^t\). Thus, the output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is redefined by equation (7).

(7)\[\begin{equation} z_j^t = H(V_j^t - A_j^t) \end{equation}\]

The update of the activation threshold is then described by equation (8).

(8)\[\begin{equation} A_j^t = V_{\text{th}} + \beta a_j^t \end{equation}\]

with the adaptation variable \(a_j^t\) described by equation (9) and \(\beta\) an amplification factor greater than 1, typically \(\beta\approx 1.6\) Bellec et al. [BSS+20].

(9)\[\begin{equation} a_j^{t+1} = \rho a_j^t + z_j^t \end{equation}\]

With the decay factor \(\rho\) as:

(10)\[\begin{equation} \rho = e^{-\frac{\Delta t}{\tau_a}} \end{equation}\]
Attributes:
  • beta: The amplification factor of the threshold potential \(\beta\).

  • rho: The decay factor of the adaptation variable \(\rho\).
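
A sketch of the adaptive-threshold update of equations (7) to (10) for a single time step (the tensors and tau_a value are illustrative assumptions):

    import torch

    dt, tau_a, v_th, beta = 1e-3, 200e-3, 1.0, 1.6
    rho = torch.exp(torch.tensor(-dt / tau_a))  # equation (10)

    V = torch.rand(8, 32)    # membrane potentials
    a = torch.zeros(8, 32)   # adaptation variable a_j
    A = v_th + beta * a      # equation (8): adaptive threshold
    z = (V >= A).float()     # equation (7): Heaviside spike
    a = rho * a + z          # equation (9): adaptation update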

__init__(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisideSigmoidApprox'>, dt: float = 0.001, device: ~torch.device | None = None, **kwargs)
Keyword Arguments:
  • tau_m (float) – The decay time constant of the membrane potential, which is generally 20 ms. See equation (12).

  • threshold (float) – The activation threshold of the neuron.

  • gamma (float) – The gain of the neuron. The gain will increase the gradient of the neuron’s output.

  • spikes_regularization_factor (float) – The regularization factor of the spikes.

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [current threshold of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size (int) – The size of the current batch.

Returns:

The current state.

Return type:

Tuple[torch.Tensor, …]

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
update_regularization_loss(state: Any | None = None, *args, **kwargs) Tensor

Update the regularization loss for this layer. Each update call increments the regularization loss so at the end the regularization loss will be the sum of all calls to this function.

Parameters:

state (Optional[Any]) – The current state of the layer.

Returns:

The updated regularization loss.

Return type:

torch.Tensor

class neurotorch.modules.layers.spiking.BellecLIFLayer(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = True, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisidePhiApprox'>, dt: float = 0.001, device: ~torch.device | None = None, **kwargs)

Bases: LIFLayer

Layer implementing the LIF neuron model from the paper:

“A solution to the learning dilemma for recurrent networks of spiking neurons” by Bellec et al. (2020) [BSS+20].

__init__(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = True, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisidePhiApprox'>, dt: float = 0.001, device: ~torch.device | None = None, **kwargs)
Keyword Arguments:
  • tau_m (float) – The decay time constant of the membrane potential, which is generally 20 ms. See equation (12).

  • threshold (float) – The activation threshold of the neuron.

  • gamma (float) – The gain of the neuron. The gain will increase the gradient of the neuron’s output.

  • spikes_regularization_factor (float) – The regularization factor of the spikes.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neurotorch.modules.layers.spiking.IzhikevichLayer(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection=True, use_rec_eye_mask=True, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisideSigmoidApprox'>, dt=0.001, device=None, **kwargs)

Bases: BaseNeuronsLayer

Izhikevich p.274

Not usable for now, stay tuned.

__init__(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection=True, use_rec_eye_mask=True, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisideSigmoidApprox'>, dt=0.001, device=None, **kwargs)

Initialize the layer. See the BaseLayer class for more details.

Parameters:
  • input_size (Optional[SizeTypes]) – The input size of the layer.

  • output_size (Optional[SizeTypes]) – The output size of the layer.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use a recurrent connection. Default is True.

  • use_rec_eye_mask (bool) – Whether to use a recurrent eye mask. Default is False. This mask is used to set the diagonal of the recurrent connection matrix to zero.

  • learning_type (LearningType) – The learning type of the layer. Default is BPTT.

  • dt (float) – The time step of the layer. Default is 1e-3.

  • kwargs – Other keyword arguments.

Keyword Arguments:
  • regularize (bool) – Whether to regularize the layer. If True, the method update_regularization_loss will be called after each forward pass. Defaults to False.

  • hh_init (str) – The initialization method for the hidden state. Defaults to “zeros”.

  • hh_init_mu (float) – The mean of the hidden state initialization when hh_init is random. Defaults to 0.0.

  • hh_init_std (float) – The standard deviation of the hidden state initialization when hh_init is random. Defaults to 1.0.

  • hh_init_seed (int) – The seed of the hidden state initialization when hh_init is random. Defaults to 0.

  • force_dale_law (bool) – Whether to enforce Dale’s law on the layer’s weights. Defaults to False.

  • forward_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the forward_sign vector. If it is a float, the forward_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the forward_sign vector.

  • recurrent_sign (Union[torch.Tensor, float]) – If force_dale_law is True, this parameter is used to initialize the recurrent_sign vector. If it is a float, the recurrent_sign vector is initialized with this value as the ratio of inhibitory neurons. If it is a tensor, it is used directly as the recurrent_sign vector.

  • sign_activation (Callable) – The activation function used to compute the sign of the weights i.e. the forward_sign and recurrent_sign vectors. Defaults to torch.nn.Tanh.

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [membrane potential of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

training: bool
class neurotorch.modules.layers.spiking.LIFLayer(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisideSigmoidApprox'>, dt: float = 0.001, device: ~torch.device | None = None, **kwargs)

Bases: BaseNeuronsLayer

The LIF dynamics, inspired by Neftci et al. [NMZ19] and Bellec et al. [BSS+20], models the synaptic potential and impulses of a neuron over time. The shape of this potential is not considered realistic Izhikevich [Izh07], but the time at which the potential exceeds the threshold is. This potential is found by the recurrent equation (11).

(11)\[\begin{equation} V_j^{t+\Delta t} = \left(\alpha V_j^t + \sum_{i}^{N} W_{ij}^{\text{rec}} z_i^t + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t}\right) \left(1 - z_j^t\right) \end{equation}\]

The variables of equation (11) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(z_j^t\) is the spike of the neuron \(j\) at time \(t\).

  • \(\alpha\) is the decay constant of the potential over time (equation (12)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

(12)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_m}} \end{equation}\]

with \(\tau_m\) being the decay time constant of the membrane potential which is generally 20 ms.

The output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is defined by equation (13).

(13)\[z_j^t = H(V_j^t - V_{\text{th}})\]

where \(V_{\text{th}}\) denotes the activation threshold of the neuron and the function \(H(\cdot)\) is the Heaviside function defined as \(H(x) = 1\) if \(x \geq 0\) and \(H(x) = 0\) otherwise.

Attributes:
  • forward_weights (torch.nn.Parameter): The weights used to compute the output of the layer \(W_{ij}^{\text{in}}\) in equation (11).

  • recurrent_weights (torch.nn.Parameter): The weights used to compute the hidden state of the layer \(W_{ij}^{\text{rec}}\) in equation (11).

  • dt (float): The time step of the layer \(\Delta t\) in equation (11).

  • use_rec_eye_mask (bool): Whether to use the recurrent eye mask.

  • rec_mask (torch.Tensor): The recurrent eye mask.

  • alpha (torch.nn.Parameter): The decay constant of the potential over time. See equation (12).

  • threshold (torch.nn.Parameter): The activation threshold of the neuron.

  • gamma (torch.nn.Parameter): The gain of the neuron. The gain will increase the gradient of the neuron’s output.
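
A single LIF step following equations (11) to (13) can be written directly as follows (a sketch with made-up tensors; the layer itself uses a surrogate-gradient spike function instead of a hard Heaviside):

    import torch

    dt, tau_m, v_th = 1e-3, 20e-3, 1.0
    alpha = torch.exp(torch.tensor(-dt / tau_m))  # equation (12)

    batch_size, n_in, n_out = 8, 10, 32
    W_in = torch.rand(n_in, n_out)
    W_rec = torch.rand(n_out, n_out)
    x = torch.rand(batch_size, n_in)
    V = torch.zeros(batch_size, n_out)
    z = torch.zeros(batch_size, n_out)

    # Equation (11): the (1 - z) factor resets neurons that spiked last step.
    V = (alpha * V + z @ W_rec + x @ W_in) * (1.0 - z)
    z = (V >= v_th).float()  # equation (13)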

__init__(input_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, output_size: int | ~neurotorch.dimension.Dimension | ~typing.Iterable[int | ~neurotorch.dimension.Dimension] | ~neurotorch.dimension.Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, spike_func: ~typing.Type[~neurotorch.modules.spike_funcs.SpikeFunction] = <class 'neurotorch.modules.spike_funcs.HeavisideSigmoidApprox'>, dt: float = 0.001, device: ~torch.device | None = None, **kwargs)
Keyword Arguments:
  • tau_m (float) – The decay time constant of the membrane potential, which is generally 20 ms. See equation (12).

  • threshold (float) – The activation threshold of the neuron.

  • gamma (float) – The gain of the neuron. The gain will increase the gradient of the neuron’s output.

  • spikes_regularization_factor (float) – The regularization factor of the spikes.

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

training: bool
update_regularization_loss(state: Any | None = None, *args, **kwargs) Tensor

Update the regularization loss for this layer. Each update call increments the regularization loss so at the end the regularization loss will be the sum of all calls to this function.

Parameters:

state – The current state of the layer.

Returns:

The updated regularization loss.

class neurotorch.modules.layers.spiking.SpyALIFLayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: SpyLIFLayer

The SpyALIF dynamics, inspired by Bellec et al. [BSS+20] and by the SpyLIFLayer from the work of Neftci et al. [NMZ19], is very similar to the SpyLIF dynamics (class SpyLIFLayer). In fact, SpyALIF has exactly the same potential update equation as SpyLIF. The difference comes from the fact that the threshold potential varies with time and neuron input. Indeed, the threshold is increased at each output spike and is then decreased at a certain rate in order to come back to its starting threshold \(V_{\text{th}}\). The threshold equation from SpyLIFLayer is thus slightly modified by changing \(V_{\text{th}} \to A_j^t\). Thus, the output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is redefined by equation (18).

(14)\[\begin{equation} I_{\text{syn}, j}^{t+\Delta t} = \alpha I_{\text{syn}, j}^{t} + \sum_{i}^{N} W_{ij}^{\text{rec}} z_i^t + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t} \end{equation}\]
(15)\[\begin{equation} V_j^{t+\Delta t} = \left(\beta V_j^t + I_{\text{syn}, j}^{t+\Delta t}\right) \left(1 - z_j^t\right) \end{equation}\]
(16)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_{\text{syn}}}} \end{equation}\]

with \(\tau_{\text{syn}}\) being the decay time constant of the synaptic current.

(17)\[\begin{equation} \beta = e^{-\frac{\Delta t}{\tau_{\text{mem}}}} \end{equation}\]

with \(\tau_{\text{mem}}\) being the decay time constant of the membrane potential.

The output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is defined by equation (18).

(18)\[z_j^t = H(V_j^t - A_j^t)\]

where \(A_j^t\) denotes the activation threshold of the neuron and the function \(H(\cdot)\) is the Heaviside function defined as \(H(x) = 1\) if \(x \geq 0\) and \(H(x) = 0\) otherwise. The update of the activation threshold is then described by equation (19).

(19)\[\begin{equation} A_j^t = V_{\text{th}} + \kappa a_j^t \end{equation}\]

with the adaptation variable \(a_j^t\) described by equation (20) and \(\kappa\) an amplification factor greater than 1, typically \(\kappa\approx 1.6\) Bellec et al. [BSS+20].

(20)\[\begin{equation} a_j^{t+1} = \rho a_j^t + z_j^t \end{equation}\]

With the decay factor \(\rho\) as:

(21)\[\begin{equation} \rho = e^{-\frac{\Delta t}{\tau_a}} \end{equation}\]

SpyTorch library: https://github.com/surrogate-gradient-learning/spytorch.

The variables of equations (14) and (15) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(I_{\text{syn}, j}^{t}\) is the synaptic current of neuron \(j\) at time \(t\).

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(z_j^t\) is the spike of the neuron \(j\) at time \(t\).

  • \(\alpha\) is the decay constant of the synaptic current over time (equation (16)).

  • \(\beta\) is the decay constant of the membrane potential over time (equation (17)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

Attributes:
  • alpha (torch.nn.Parameter): Decay constant of the synaptic current over time (equation (16)).

  • beta (torch.nn.Parameter): Decay constant of the membrane potential over time (equation (17)).

  • threshold (torch.nn.Parameter): Activation threshold of the neuron (\(V_{\text{th}}\)).

  • gamma (torch.nn.Parameter): Slope of the Heaviside function (\(\gamma\)).

  • kappa: The amplification factor of the threshold potential (\(\kappa\)).

  • rho: The decay factor of the adaptation variable (\(\rho\)).

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Constructor for the SpyALIF layer.

Parameters:
  • input_size (Optional[SizeTypes]) – The size of the input.

  • output_size (Optional[SizeTypes]) – The size of the output.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use the recurrent connection.

  • use_rec_eye_mask (bool) – Whether to use the recurrent eye mask.

  • spike_func (Callable[[torch.Tensor], torch.Tensor]) – The spike function to use.

  • learning_type (LearningType) – The learning type to use.

  • dt (float) – Time step (Euler’s discretisation).

  • device (Optional[torch.device]) – The device to use.

  • kwargs – The keyword arguments for the layer.

Keyword Arguments:
  • tau_syn (float) – The synaptic time constant \(\tau_{\text{syn}}\). Default: 5.0 * dt.

  • tau_mem (float) – The membrane time constant \(\tau_{\text{mem}}\). Default: 10.0 * dt.

  • threshold (float) – The threshold potential \(V_{\text{th}}\). Default: 1.0.

  • gamma (float) – The multiplier of the derivative of the spike function \(\gamma\). Default: 100.0.

  • spikes_regularization_factor (float) – The regularization factor for the spikes. The higher this factor is, the less the network will tend to spike. Default: 0.0.

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [synaptic current of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

reset_regularization_loss()

Reset the regularization loss to zero.

Returns:

None

training: bool
update_regularization_loss(state: Any | None = None, *args, **kwargs) Tensor

Update the regularization loss for this layer. Each update call increments the regularization loss so at the end the regularization loss will be the sum of all calls to this function.

Parameters:

state – The current state of the layer.

Returns:

The updated regularization loss.

class neurotorch.modules.layers.spiking.SpyLIFLayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: BaseNeuronsLayer

The SpyLIF dynamics is a more complex variant of the LIF dynamics (class LIFLayer), allowing it to have a greater power of expression. This variant is also inspired by Neftci et al. [NMZ19] and also contains two differential equations, like the SpyLI dynamics (SpyLILayer). Equation (22) presents the synaptic current update with Euler integration, while equation (23) presents the synaptic potential update.

(22)\[\begin{equation} I_{\text{syn}, j}^{t+\Delta t} = \alpha I_{\text{syn}, j}^{t} + \sum_{i}^{N} W_{ij}^{\text{rec}} z_i^t + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t} \end{equation}\]
(23)\[\begin{equation} V_j^{t+\Delta t} = \left(\beta V_j^t + I_{\text{syn}, j}^{t+\Delta t}\right) \left(1 - z_j^t\right) \end{equation}\]
(24)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_{\text{syn}}}} \end{equation}\]

with \(\tau_{\text{syn}}\) being the decay time constant of the synaptic current.

(25)\[\begin{equation} \beta = e^{-\frac{\Delta t}{\tau_{\text{mem}}}} \end{equation}\]

with \(\tau_{\text{mem}}\) being the decay time constant of the membrane potential.

The output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is defined by equation (26).

(26)\[z_j^t = H(V_j^t - V_{\text{th}})\]

where \(V_{\text{th}}\) denotes the activation threshold of the neuron and the function \(H(\cdot)\) is the Heaviside function defined as \(H(x) = 1\) if \(x \geq 0\) and \(H(x) = 0\) otherwise.

SpyTorch library: https://github.com/surrogate-gradient-learning/spytorch.

The variables of equations (22) and (23) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(I_{\text{syn}, j}^{t}\) is the synaptic current of neuron \(j\) at time \(t\).

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(z_j^t\) is the spike of the neuron \(j\) at time \(t\).

  • \(\alpha\) is the decay constant of the synaptic current over time (equation (24)).

  • \(\beta\) is the decay constant of the membrane potential over time (equation (25)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

Attributes:
  • alpha (torch.nn.Parameter): Decay constant of the synaptic current over time (equation (24)).

  • beta (torch.nn.Parameter): Decay constant of the membrane potential over time (equation (25)).

  • threshold (torch.nn.Parameter): Activation threshold of the neuron (\(V_{\text{th}}\)).

  • gamma (torch.nn.Parameter): Slope of the Heaviside function (\(\gamma\)).

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Constructor for the SpyLIF layer.

Parameters:
  • input_size (Optional[SizeTypes]) – The size of the input.

  • output_size (Optional[SizeTypes]) – The size of the output.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use the recurrent connection.

  • use_rec_eye_mask (bool) – Whether to use the recurrent eye mask.

  • spike_func (Callable[[torch.Tensor], torch.Tensor]) – The spike function to use.

  • learning_type (LearningType) – The learning type to use.

  • dt (float) – Time step (Euler’s discretisation).

  • device (Optional[torch.device]) – The device to use.

  • kwargs – The keyword arguments for the layer.

Keyword Arguments:
  • tau_syn (float) – The synaptic time constant \(\tau_{\text{syn}}\). Default: 5.0 * dt.

  • tau_mem (float) – The membrane time constant \(\tau_{\text{mem}}\). Default: 10.0 * dt.

  • threshold (float) – The threshold potential \(V_{\text{th}}\). Default: 1.0.

  • gamma (float) – The multiplier of the derivative of the spike function \(\gamma\). Default: 100.0.

  • spikes_regularization_factor (float) – The regularization factor for the spikes. The higher this factor is, the less the network will tend to spike. Default: 0.0.
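
A construction sketch using the keyword arguments listed above (all values are illustrative):

    from neurotorch.modules.layers.spiking import SpyLIFLayer

    layer = SpyLIFLayer(
        input_size=100,
        output_size=200,
        use_recurrent_connection=True,
        dt=1e-3,
        tau_syn=5e-3,
        tau_mem=10e-3,
        threshold=1.0,
        spikes_regularization_factor=1e-4,
    )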

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [synaptic current of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_weights_()

Initialize the weights of the layer. This method must be implemented by the child class.

Returns:

None

reset_regularization_loss()

Reset the regularization loss to zero.

Returns:

None

training: bool
update_regularization_loss(state: Any | None = None, *args, **kwargs) Tensor

Update the regularization loss for this layer. Each update call increments the regularization loss so at the end the regularization loss will be the sum of all calls to this function.

Parameters:

state – The current state of the layer.

Returns:

The updated regularization loss.

neurotorch.modules.layers.spiking_lpf module

class neurotorch.modules.layers.spiking_lpf.ALIFLayerLPF(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: ALIFLayer

The ALIF dynamics, inspired by Bellec et al. [BSS+20], are very similar to the LIF dynamics (class LIFLayer). In fact, ALIF has exactly the same potential update equation as LIF. The difference is that the threshold potential varies with time and with the neuron's input: the threshold is increased at each output spike and then decays at a certain rate back to its resting value \(V_{\text{th}}\). The threshold equation from LIFLayer is thus slightly modified by changing \(V_{\text{th}} \to A_j^t\). Thus, the output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is redefined by equation (27).

In this version (LPF), the spikes are filtered with a low pass filter (LPF) described by the equation (50).

(27)\[\begin{equation} z_j^t = H(V_j^t - A_j^t) \end{equation}\]
(28)\[\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^t) = \alpha_{\text{lpf}}\,\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^{t-1}) + z_j^t\]

The update of the activation threshold is then described by (29).

(29)\[\begin{equation} A_j^t = V_{\text{th}} + \beta a_j^t \end{equation}\]

with the adaptation variable \(a_j^t\) described by (30) and \(\beta\) an amplification factor greater than 1, typically \(\beta\approx 1.6\) Bellec et al. [BSS+20].

(30)\[\begin{equation} a_j^{t+1} = \rho a_j^t + z_j^t \end{equation}\]

With the decay factor \(\rho\) as:

(31)\[\begin{equation} \rho = e^{-\frac{\Delta t}{\tau_a}} \end{equation}\]
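As a minimal sketch (not the layer's actual implementation), the threshold adaptation of equations (29)–(31) can be written as a single update step; the tensor names and the value of tau_a below are illustrative.

import math
import torch

def alif_threshold_step(a, z, v_th=1.0, beta=1.6, dt=1e-3, tau_a=0.2):
    # a: adaptation variable a_j^t, z: spikes z_j^t, both of shape (batch_size, n_neurons).
    rho = math.exp(-dt / tau_a)        # decay factor rho, equation (31)
    a = rho * a + z                    # adaptation variable update, equation (30)
    return a, v_th + beta * a          # effective threshold A_j^t, equation (29)

a = torch.zeros(1, 4)
z = torch.tensor([[0., 1., 0., 1.]])
a, A = alif_threshold_step(a, z)       # A rises for the neurons that just spiked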
Attributes:
  • beta: The amplification factor of the threshold potential \(\beta\).

  • rho: The decay factor of the adaptation variable \(\rho\).

  • lpf_alpha (float): Decay constant of the low pass filter over time (equation (50)).

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Constructor for the ALIFLayerLPF layer.

Keyword Arguments:

lpf_alpha (float) – The decay constant of the low pass filter over time (equation (50)). Default: np.exp(-dt / tau_mem).

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [current threshold of shape (batch_size, self.output_size)], [low pass filtered spikes of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neurotorch.modules.layers.spiking_lpf.LIFLayerLPF(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: LIFLayer

The LIF dynamics, inspired by Neftci et al. [NMZ19] and Bellec et al. [BSS+20], model the synaptic potential and spikes of a neuron over time. The shape of this potential is not considered realistic (see Izhikevich [Izh07]), but the time at which the potential exceeds the threshold is. This potential is given by the recurrent equation (32).

In this version (LPF), the spikes are filtered with a low pass filter (LPF) described by the equation (50).

(32)\[\begin{equation} V_j^{t+\Delta t} = \left(\alpha V_j^t + \sum_{i}^{N} W_{ij}^{\text{rec}} z_i^t + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t}\right) \left(1 - z_j^t\right) \end{equation}\]

The variables of the equation (32) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(z_j^t\) is the spike of the neuron \(j\) at time \(t\).

  • \(\alpha\) is the decay constant of the potential over time (equation (33) ).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

(33)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_m}} \end{equation}\]

with \(\tau_m\) being the decay time constant of the membrane potential which is generally 20 ms.

The output of neuron \(j\) at time \(t\) denoted \(z_j^t\) is defined by the equation (34) .

(34)\[z_j^t = H(V_j^t - V_{\text{th}})\]

where \(V_{\text{th}}\) denotes the activation threshold of the neuron and the function \(H(\cdot)\) is the Heaviside function defined as \(H(x) = 1\) if \(x \geq 0\) and \(H(x) = 0\) otherwise.

(35)\[\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^t) = \alpha_{\text{lpf}}\,\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^{t-1}) + z_j^t\]
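A minimal sketch of the low-pass spike filter of equation (35), applied over a whole spike train; the value of lpf_alpha and the tensor names are illustrative (the layer's default is np.exp(-dt / tau_mem)).

import torch

def lpf_filter_spikes(spikes, lpf_alpha=0.9):
    # spikes: tensor of shape (time_steps, batch_size, n_neurons).
    trace = torch.zeros_like(spikes[0])
    filtered = []
    for z_t in spikes:
        trace = lpf_alpha * trace + z_t   # F(z^t) = lpf_alpha * F(z^{t-1}) + z^t
        filtered.append(trace)
    return torch.stack(filtered)

filtered = lpf_filter_spikes(torch.bernoulli(torch.full((100, 1, 8), 0.1)))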
Attributes:
  • forward_weights (torch.nn.Parameter): The weights used to compute the output of the layer \(W_{ij}^{\text{in}}\) in equation (32).

  • recurrent_weights (torch.nn.Parameter): The weights used to compute the hidden state of the layer \(W_{ij}^{\text{rec}}\) in equation (32).

  • dt (float): The time step of the layer \(\Delta t\) in equation (32).

  • use_rec_eye_mask (bool): Whether to use the recurrent eye mask.

  • rec_mask (torch.Tensor): The recurrent eye mask.

  • alpha (torch.nn.Parameter): The decay constant of the potential over time. See equation (33) .

  • threshold (torch.nn.Parameter): The activation threshold of the neuron.

  • gamma (torch.nn.Parameter): The gain of the neuron. The gain will increase the gradient of the neuron’s output.

  • lpf_alpha (float): Decay constant of the low pass filter over time (equation (50)).

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Constructor for the LIFLayerLPF layer.

Keyword Arguments:
  • tau_m (float) – The decay time constant of the membrane potential which is generally 20 ms. See equation (33) .

  • threshold (float) – The activation threshold of the neuron.

  • gamma (float) – The gain of the neuron. The gain will increase the gradient of the neuron’s output.

  • spikes_regularization_factor (float) – The regularization factor of the spikes.

  • lpf_alpha (float) – The decay constant of the low pass filter over time (equation (50)). Default: np.exp(-dt / tau_mem).

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [low pass filtered spikes of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neurotorch.modules.layers.spiking_lpf.SpyALIFLayerLPF(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: SpyALIFLayer

The SpyALIF dynamics, inspired by Bellec et al. [BSS+20] and by the SpyLIFLayer from the work of Neftci et al. [NMZ19], are very similar to the SpyLIF dynamics (class SpyLIFLayer). In fact, SpyALIF has exactly the same potential update equation as SpyLIF. The difference is that the threshold potential varies with time and with the neuron's input: the threshold is increased at each output spike and then decays at a certain rate back to its resting value \(V_{\text{th}}\). The threshold equation from SpyLIFLayer is thus slightly modified by changing \(V_{\text{th}} \to A_j^t\). Thus, the output of neuron \(j\) at time \(t\), denoted \(z_j^t\), is redefined by equation (40).

In this version (LPF), the spikes are filtered with a low pass filter (LPF) described by the equation (50).

(36)\[\begin{equation} I_{\text{syn}, j}^{t+\Delta t} = \alpha I_{\text{syn}, j}^{t} + \sum_{i}^{N} W_{ij}^{\text{rec}} z_i^t + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t} \end{equation}\]
(37)\[\begin{equation} V_j^{t+\Delta t} = \left(\beta V_j^t + I_{\text{syn}, j}^{t+\Delta t}\right) \left(1 - z_j^t\right) \end{equation}\]
(38)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_{\text{syn}}}} \end{equation}\]

with \(\tau_{\text{syn}}\) being the decay time constant of the synaptic current.

(39)\[\begin{equation} \beta = e^{-\frac{\Delta t}{\tau_{\text{mem}}}} \end{equation}\]

with \(\tau_{\text{mem}}\) being the decay time constant of the membrane potential.

The output of neuron \(j\) at time \(t\) denoted \(z_j^t\) is defined by the equation (40) .

(40)\[z_j^t = H(V_j^t - A_j^t)\]

where \(A_j^t\) denotes the activation threshold of the neuron and the function \(H(\cdot)\) is the Heaviside function defined as \(H(x) = 1\) if \(x \geq 0\) and \(H(x) = 0\) otherwise. The update of the activation threshold is then described by (42).

(41)\[\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^t) = \alpha_{\text{lpf}}\,\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^{t-1}) + z_j^t\]
(42)\[\begin{equation} A_j^t = V_{\text{th}} + \kappa a_j^t \end{equation}\]

with the adaptation variable \(a_j^t\) described by (43) and \(\kappa\) an amplification factor greater than 1 and typically equivalent to \(\kappa\approx 1.6\) Bellec et al. [BSS+20].

(43)\[\begin{equation} a_j^{t+1} = \rho a_j^t + z_j^t \end{equation}\]

With the decay factor \(\rho\) as:

(44)\[\begin{equation} \rho = e^{-\frac{\Delta t}{\tau_a}} \end{equation}\]

SpyTorch library: https://github.com/surrogate-gradient-learning/spytorch.

The variables of the equations (36) and (37) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(I_{\text{syn}, j}^{t}\) is the synaptic current of neuron \(j\) at time \(t\).

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(z_j^t\) is the spike of the neuron \(j\) at time \(t\).

  • \(\alpha\) is the decay constant of the synaptic current over time (equation (47)).

  • \(\beta\) is the decay constant of the membrane potential over time (equation (48)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

Attributes:
  • alpha (torch.nn.Parameter): Decay constant of the synaptic current over time (equation (38)).

  • beta (torch.nn.Parameter): Decay constant of the membrane potential over time (equation (39)).

  • threshold (torch.nn.Parameter): Activation threshold of the neuron (\(V_{\text{th}}\)).

  • gamma (torch.nn.Parameter): Slope of the Heaviside function (\(\gamma\)).

  • kappa: The amplification factor of the threshold potential (\(\kappa\)).

  • rho: The decay factor of the adaptation variable (\(\rho\)).

  • lpf_alpha (float): Decay constant of the low pass filter over time (equation (50)).

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Constructor for the SpyALIFLayerLPF layer.

Parameters:
  • input_size (Optional[SizeTypes]) – The size of the input.

  • output_size (Optional[SizeTypes]) – The size of the output.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use the recurrent connection.

  • use_rec_eye_mask (bool) – Whether to use the recurrent eye mask.

  • spike_func (Callable[[torch.Tensor], torch.Tensor]) – The spike function to use.

  • learning_type (LearningType) – The learning type to use.

  • dt (float) – Time step (Euler’s discretisation).

  • device (Optional[torch.device]) – The device to use.

  • kwargs – The keyword arguments for the layer.

Keyword Arguments:
  • tau_syn (float) – The synaptic time constant \(\tau_{\text{syn}}\). Default: 5.0 * dt.

  • tau_mem (float) – The membrane time constant \(\tau_{\text{mem}}\). Default: 10.0 * dt.

  • threshold (float) – The threshold potential \(V_{\text{th}}\). Default: 1.0.

  • gamma (float) – The multiplier of the derivative of the spike function \(\gamma\). Default: 100.0.

  • spikes_regularization_factor (float) – The regularization factor for the spikes. The higher this factor, the less the network will tend to spike. Default: 0.0.

  • lpf_alpha (float) – The decay constant of the low pass filter over time (equation (50)). Default: np.exp(-dt / tau_mem).
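A hedged usage sketch based on the signature and keyword arguments above; the argument values are illustrative and the input is assumed to be a single time step of shape (batch_size, input_size).

import torch
from neurotorch.modules.layers.spiking_lpf import SpyALIFLayerLPF

# Illustrative values only; see the keyword arguments above for the defaults.
layer = SpyALIFLayerLPF(
    input_size=64,
    output_size=128,
    dt=1e-3,
    tau_syn=5e-3,
    tau_mem=1e-2,
    lpf_alpha=0.9,
)

x = torch.rand(8, 64)   # one time step of input, shape (batch_size, input_size)
out = layer(x)          # builds the layer on the first call (see BaseLayer.__call__)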

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [synaptic current of shape (batch_size, self.output_size)], [current threshold of shape (batch_size, self.output_size)], [low pass filtered spikes of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neurotorch.modules.layers.spiking_lpf.SpyLIFLayerLPF(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Bases: SpyLIFLayer

The SpyLIF dynamics are a more complex variant of the LIF dynamics (class LIFLayer), giving them greater expressive power. This variant is also inspired by Neftci et al. [NMZ19] and, like the SpyLI dynamics (class SpyLI), contains two differential equations. Equation (45) presents the synaptic current update with Euler integration, while equation (46) presents the synaptic potential update.

In this version (LPF), the spikes are filtered with a low pass filter (LPF) described by the equation (50).

(45)\[\begin{equation} I_{\text{syn}, j}^{t+\Delta t} = \alpha I_{\text{syn}, j}^{t} + \sum_{i}^{N} W_{ij}^{\text{rec}} z_i^t + \sum_i^{N} W_{ij}^{\text{in}} x_i^{t+\Delta t} \end{equation}\]
(46)\[\begin{equation} V_j^{t+\Delta t} = \left(\beta V_j^t + I_{\text{syn}, j}^{t+\Delta t}\right) \left(1 - z_j^t\right) \end{equation}\]
(47)\[\begin{equation} \alpha = e^{-\frac{\Delta t}{\tau_{\text{syn}}}} \end{equation}\]

with \(\tau_{\text{syn}}\) being the decay time constant of the synaptic current.

(48)\[\begin{equation} \beta = e^{-\frac{\Delta t}{\tau_{\text{mem}}}} \end{equation}\]

with \(\tau_{\text{mem}}\) being the decay time constant of the membrane potential.

The output of neuron \(j\) at time \(t\) denoted \(z_j^t\) is defined by the equation (49) .

(49)\[z_j^t = H(V_j^t - V_{\text{th}})\]

where \(V_{\text{th}}\) denotes the activation threshold of the neuron and the function \(H(\cdot)\) is the Heaviside function defined as \(H(x) = 1\) if \(x \geq 0\) and \(H(x) = 0\) otherwise.

(50)\[\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^t) = \alpha_{\text{lpf}}\,\mathcal{F}_{\alpha_{\text{lpf}}}(z_j^{t-1}) + z_j^t\]
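A minimal sketch of one integration step of equations (45), (46), (49) and (50); it is not the layer's forward(), the tensor names are illustrative, and it uses a hard Heaviside, whereas training relies on a surrogate gradient in the spirit of SpyTorch (the gamma attribute scales that surrogate derivative).

import torch

def spylif_lpf_step(x, z, v, i_syn, trace, w_in, w_rec,
                    alpha, beta, lpf_alpha, v_th=1.0):
    # x: inputs (batch, n_in); z, v, i_syn, trace: (batch, n_out);
    # w_in: (n_in, n_out); w_rec: (n_out, n_out).
    i_syn = alpha * i_syn + z @ w_rec + x @ w_in   # synaptic current, equation (45)
    v = (beta * v + i_syn) * (1.0 - z)             # membrane potential with reset, equation (46)
    z = (v >= v_th).to(v.dtype)                    # Heaviside spike, equation (49)
    trace = lpf_alpha * trace + z                  # low-pass filtered spikes, equation (50)
    return z, v, i_syn, trace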

SpyTorch library: https://github.com/surrogate-gradient-learning/spytorch.

The variables of the equations (45) and (46) are described by the following definitions:

  • \(N\) is the number of neurons in the layer.

  • \(I_{\text{syn}, j}^{t}\) is the synaptic current of neuron \(j\) at time \(t\).

  • \(V_j^t\) is the synaptic potential of the neuron \(j\) at time \(t\).

  • \(\Delta t\) is the integration time step.

  • \(z_j^t\) is the spike of the neuron \(j\) at time \(t\).

  • \(\alpha\) is the decay constant of the synaptic current over time (equation (47)).

  • \(\beta\) is the decay constant of the membrane potential over time (equation (48)).

  • \(W_{ij}^{\text{rec}}\) is the recurrent weight of the neuron \(i\) to the neuron \(j\).

  • \(W_{ij}^{\text{in}}\) is the input weight of the neuron \(i\) to the neuron \(j\).

  • \(x_i^{t}\) is the input of the neuron \(i\) at time \(t\).

Attributes:
  • alpha (torch.nn.Parameter): Decay constant of the synaptic current over time (equation (47)).

  • beta (torch.nn.Parameter): Decay constant of the membrane potential over time (equation (48)).

  • threshold (torch.nn.Parameter): Activation threshold of the neuron (\(V_{\text{th}}\)).

  • gamma (torch.nn.Parameter): Slope of the Heaviside function (\(\gamma\)).

  • lpf_alpha (float): Decay constant of the low pass filter over time (equation (50)).

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, name: str | None = None, use_recurrent_connection: bool = True, use_rec_eye_mask: bool = False, dt: float = 0.001, device: device | None = None, **kwargs)

Constructor for the SpyLIFLayerLPF layer.

Parameters:
  • input_size (Optional[SizeTypes]) – The size of the input.

  • output_size (Optional[SizeTypes]) – The size of the output.

  • name (Optional[str]) – The name of the layer.

  • use_recurrent_connection (bool) – Whether to use the recurrent connection.

  • use_rec_eye_mask (bool) – Whether to use the recurrent eye mask.

  • spike_func (Callable[[torch.Tensor], torch.Tensor]) – The spike function to use.

  • learning_type (LearningType) – The learning type to use.

  • dt (float) – Time step (Euler’s discretisation).

  • device (Optional[torch.device]) – The device to use.

  • kwargs – The keyword arguments for the layer.

Keyword Arguments:
  • tau_syn (float) – The synaptic time constant \(\tau_{\text{syn}}\). Default: 5.0 * dt.

  • tau_mem (float) – The membrane time constant \(\tau_{\text{mem}}\). Default: 10.0 * dt.

  • threshold (float) – The threshold potential \(V_{\text{th}}\). Default: 1.0.

  • gamma (float) – The multiplier of the derivative of the spike function \(\gamma\). Default: 100.0.

  • spikes_regularization_factor (float) – The regularization factor for the spikes. The higher this factor, the less the network will tend to spike. Default: 0.0.

  • lpf_alpha (float) – The decay constant of the low pass filter over time (equation (50)). Default: np.exp(-dt / tau_mem).

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor, ...]
Create an empty state in the following form:

([membrane potential of shape (batch_size, self.output_size)], [synaptic current of shape (batch_size, self.output_size)], [low pass filtered spikes of shape (batch_size, self.output_size)], [spikes of shape (batch_size, self.output_size)])

Parameters:

batch_size – The size of the current batch.

Returns:

The current state.

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

neurotorch.modules.layers.wilson_cowan module

class neurotorch.modules.layers.wilson_cowan.WilsonCowanCURBDLayer(*args, **kwargs)

Bases: WilsonCowanLayer

__init__(*args, **kwargs)
Parameters:
  • input_size (Optional[SizeTypes]) – size of the input

  • output_size (Optional[SizeTypes]) – size of the output. If we are predicting a time series, input_size = output_size.

  • learning_type (LearningType) – Type of learning for the gradient descent

  • dt (float) – Time step (Euler’s discretisation)

  • device (torch.device) – device for computation

  • kwargs – Additional parameters for the Wilson-Cowan dynamic.

Keyword Arguments:
  • forward_weights (Union[torch.Tensor, np.ndarray]) – Forward weights of the layer.

  • std_weight (float) – Instability of the initial random matrix.

  • mu (Union[float, torch.Tensor]) – Activation threshold. If torch.Tensor -> shape (1, number of neurons).

  • mean_mu (float) – Mean of the activation threshold (if learn_mu is True).

  • std_mu (float) – Standard deviation of the activation threshold (if learn_mu is True).

  • learn_mu (bool) – Whether to train the activation threshold.

  • tau (float) – Decay constant of RNN unit.

  • learn_tau (bool) – Whether to train the decay constant.

  • r (float) – Transition rate of the RNN unit. If torch.Tensor -> shape (1, number of neurons).

  • mean_r (float) – Mean of the transition rate (if learn_r is True).

  • std_r (float) – Standard deviation of the transition rate (if learn_r is True).

  • learn_r (bool) – Whether to train the transition rate.

Remark: The parameters mu and r can only be learned as vectors (one value per neuron).

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs) Tuple[Tensor, Tuple[Tensor]]

Forward pass. With Euler discretisation, Wilson-Cowan equation becomes:

output = input * (1 - dt/tau) + dt/tau * (1 - input @ r) * sigmoid(input @ forward_weight - mu)

Parameters:
  • inputs (torch.Tensor) – time series at time t of shape (batch_size, number of neurons). Remark: if you use this to compute a time series, use batch_size = 1.

  • state (Optional[Tuple[torch.Tensor, ...]]) – State of the layer (only used for SNNs; not used for this RNN).

Returns:

(time series at a time t+1, State of the layer -> None)

Return type:

Tuple[torch.Tensor, Tuple[torch.Tensor, …]]

training: bool
class neurotorch.modules.layers.wilson_cowan.WilsonCowanLayer(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, dt: float = 0.001, use_recurrent_connection: bool = False, device=None, **kwargs)

Bases: BaseNeuronsLayer

This layer is used for the Wilson-Cowan neuronal dynamics, also referred to as a firing-rate model. The Wilson-Cowan dynamics are well suited for modelling neuronal calcium activity. This layer works as a recurrent neural network (RNN). The number of trained parameters is N^2 (+2N if mu and r are trained), where N is the number of neurons.

For references, please read:

  • Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons Wilson and Cowan [WC72]

  • Beyond Wilson-Cowan dynamics: oscillations and chaos without inhibitions Painchaud et al. [PDD22]

  • Neural network dynamics Vogels et al. [VRA05].

The Wilson-Cowan dynamics are one of many dynamical models that can be used to model neuronal activity. To explore other continuous and nonlinear dynamics, please read Nonlinear Neural Networks: Principles, Mechanisms, and Architectures Grossberg [Gro88].

__init__(input_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, output_size: int | Dimension | Iterable[int | Dimension] | Size | None = None, dt: float = 0.001, use_recurrent_connection: bool = False, device=None, **kwargs)
Parameters:
  • input_size (Optional[SizeTypes]) – size of the input

  • output_size (Optional[SizeTypes]) – size of the output. If we are predicting a time series, input_size = output_size.

  • learning_type (LearningType) – Type of learning for the gradient descent

  • dt (float) – Time step (Euler’s discretisation)

  • device (torch.device) – device for computation

  • kwargs – Additional parameters for the Wilson-Cowan dynamic.

Keyword Arguments:
  • forward_weights (Union[torch.Tensor, np.ndarray]) – Forward weights of the layer.

  • std_weight (float) – Instability of the initial random matrix.

  • mu (Union[float, torch.Tensor]) – Activation threshold. If torch.Tensor -> shape (1, number of neurons).

  • mean_mu (float) – Mean of the activation threshold (if learn_mu is True).

  • std_mu (float) – Standard deviation of the activation threshold (if learn_mu is True).

  • learn_mu (bool) – Whether to train the activation threshold.

  • tau (float) – Decay constant of RNN unit.

  • learn_tau (bool) – Whether to train the decay constant.

  • r (float) – Transition rate of the RNN unit. If torch.Tensor -> shape (1, number of neurons).

  • mean_r (float) – Mean of the transition rate (if learn_r is True).

  • std_r (float) – Standard deviation of the transition rate (if learn_r is True).

  • learn_r (bool) – Whether to train the transition rate.

Remark: The parameters mu and r can only be learned as vectors (one value per neuron).

create_empty_state(batch_size: int = 1, **kwargs) Tuple[Tensor]

Create an empty state for the layer. This method must be implemented by the child class.

Parameters:

batch_size (int) – The batch size of the state.

Returns:

The empty state.

Return type:

Tuple[torch.Tensor, …]

forward(inputs: Tensor, state: Tuple[Tensor, ...] | None = None, **kwargs) Tuple[Tensor, Tuple[Tensor]]

Forward pass. With Euler discretisation, Wilson-Cowan equation becomes:

output = input * (1 - dt/tau) + dt/tau * (1 - input @ r) * sigmoid(input @ forward_weight - mu)

Parameters:
  • inputs (torch.Tensor) – time series at time t of shape (batch_size, number of neurons). Remark: if you use this to compute a time series, use batch_size = 1.

  • state (Optional[Tuple[torch.Tensor, ...]]) – State of the layer (only used for SNNs; not used for this RNN).

Returns:

(time series at a time t+1, State of the layer -> None)

Return type:

Tuple[torch.Tensor, Tuple[torch.Tensor, …]]
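A sketch of the Euler-discretised update quoted above; treating mu and r as per-neuron vectors broadcast elementwise is an assumption of this sketch, and the tensor names are illustrative.

import torch

def wilson_cowan_step(x, forward_weights, mu, r, tau, dt=1e-3):
    # x: activity at time t, shape (batch_size, n_neurons);
    # forward_weights: (n_neurons, n_neurons); mu, r: (1, n_neurons) or scalars.
    drive = torch.sigmoid(x @ forward_weights - mu)
    return x * (1.0 - dt / tau) + (dt / tau) * (1.0 - r * x) * drive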

initialize_weights_()

Initialize the parameters (weights) that will be trained.

property r

This property is used to ensure that the transition rate will never be negative if trained.

property tau

This property is used to ensure that the decay constant will never be negative if trained.

training: bool

Module contents

class neurotorch.modules.layers.LayerType(value)

Bases: Enum

An enumeration.

ALIF = 1
Izhikevich = 2
LI = 3
LIF = 0
SpyALIF = 6
SpyLI = 5
SpyLIF = 4
classmethod from_str(name: str) LayerType | None

Get the LayerType from a string.

Parameters:

name (str) – The name of the LayerType.

Returns:

The LayerType.

Return type:

Optional[LayerType]
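A short hedged example of from_str; given the return type above it should yield the matching member, or None when the name is unknown (the exact case handling is not documented here).

from neurotorch.modules.layers import LayerType

print(LayerType.LIF.value)          # 0
print(LayerType.from_str("LIF"))    # expected: LayerType.LIF
print(LayerType.from_str("foo"))    # expected: None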