NeuroTorch

neurotorch.regularization package

Submodules

neurotorch.regularization.connectome module

class neurotorch.regularization.connectome.DaleLaw(params: Iterable[Parameter] | Dict[str, Parameter], reference_weights: Iterable[Tensor] | None = None, Lambda: float = 1.0, **dale_kwargs)

Bases: DaleLawL2

__init__(params: Iterable[Parameter] | Dict[str, Parameter], reference_weights: Iterable[Tensor] | None = None, Lambda: float = 1.0, **dale_kwargs)
Parameters:
  • params – Weight matrices to regularize (can be multiple)

  • reference_weights – Reference weights to compare against. Must be the same size as the weights. If not provided, they will be generated automatically from the dale_kwargs.

  • Lambda – The weight of the regularization. In other words, the coefficient that multiplies the loss.

  • dale_kwargs – Keyword arguments for Dale’s law (see below).

Keyword Arguments:
  • inh_ratio – Ratio of inhibitory connections. Must be between 0 and 1.

  • rho – The connectivity ratio. Must be between 0 and 1. If rho = 1, the tensor will be fully connected.

  • inh_first – If True, the inhibitory neurons will be in the first half of the tensor. If False, the neurons will be shuffled.

  • seed – Seed for the random number generator. If None, the seed is not set.
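
A minimal usage sketch (the weight tensor is hypothetical, and the no-argument call pattern follows the ExcRatioTargetRegularization example later on this page):

>>> import torch
>>> from neurotorch.regularization.connectome import DaleLaw
>>> w = torch.nn.Parameter(torch.randn(10, 10))
>>> # Reference weights are generated from the Dale kwargs: 20% inhibitory
>>> # connections, fully connected, fixed seed for reproducibility.
>>> reg = DaleLaw(params=[w], Lambda=0.1, inh_ratio=0.2, rho=1.0, seed=42)
>>> loss = reg()  # Lambda-scaled loss, to be added to the task loss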

training: bool
class neurotorch.regularization.connectome.DaleLawL2(params: Iterable[Parameter] | Dict[str, Parameter], alpha: float = 0.8, reference_weights: Iterable[Tensor] | None = None, Lambda: float = 1.0, optimizer: Optimizer | None = None, **dale_kwargs)

Bases: BaseRegularization

Regularization of the connectome that applies Dale’s law and an L2 penalty. In a nutshell, Dale’s law stipulates that a neuron’s connections are either all excitatory or all inhibitory, not both. The L2 regularization reduces the energy of the network. This regularization lets you enforce Dale’s law and/or L2, depending on the factor alpha. The loss is given by (1).

(1)\[\begin{equation} \mathcal{L}_{\text{DaleLawL2}} = \text{Tr}\left( W^T \left(\alpha W - \left(1 - \alpha\right) W_{\text{ref}}\right) \right) \end{equation}\]

When \(\alpha = 0\), the regularization reduces to Dale’s law alone, as shown in (2).

(2)\[\begin{equation} \mathcal{L}_{\text{DaleLaw}} = -\text{Tr}\left( W^T W_{\text{ref}}\right) \end{equation}\]

When \(\alpha = 1\), the regularization reduces to the L2 penalty alone, as shown in (3).

(3)\[\begin{equation} \mathcal{L}_{\text{L2}} = \text{Tr}\left( W^T W\right) \end{equation}\]
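
As a sanity check on the algebra, a short plain-PyTorch sketch (independent of the library) showing that (1) reduces to (2) when \(\alpha = 0\) and to (3) when \(\alpha = 1\):

>>> import torch
>>> W, W_ref = torch.randn(5, 5), torch.randn(5, 5).sign()
>>> def dale_law_l2(alpha):
...     return torch.trace(W.T @ (alpha * W - (1 - alpha) * W_ref))
>>> torch.isclose(dale_law_l2(0.0), -torch.trace(W.T @ W_ref))
tensor(True)
>>> torch.isclose(dale_law_l2(1.0), torch.trace(W.T @ W))
tensor(True)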
Attributes:
  • alpha (float): Number between 0 and 1 that favors one of the constraints.

  • dale_kwargs (dict): Keyword arguments for Dale’s law. See dale_().

  • reference_weights (Iterable[torch.Tensor]): Reference weights to compare against. Must be the same size as the weights.

__init__(params: Iterable[Parameter] | Dict[str, Parameter], alpha: float = 0.8, reference_weights: Iterable[Tensor] | None = None, Lambda: float = 1.0, optimizer: Optimizer | None = None, **dale_kwargs)
Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – Weight matrices to regularize (can be multiple)

  • alpha (float) – Number between 0 and 1 that balances the two constraints. If alpha = 0 -> Only Dale’s law is applied. If alpha = 1 -> Only the reduction of the energy is applied. If 0 < alpha < 1 -> Both Dale’s law and the reduction of the energy are applied in that ratio.

  • reference_weights (Optional[Iterable[torch.Tensor]]) – Reference weights to compare against. Must be the same size as the weights. If not provided, they will be generated automatically from the dale_kwargs.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

  • dale_kwargs – Keyword arguments for Dale’s law (see below).

Keyword Arguments:
  • inh_ratio (float) – Ratio of inhibitory connections. Must be between 0 and 1.

  • rho (float) – The connectivity ratio. Must be between 0 and 1. If rho = 1, the tensor will be fully connected.

  • inh_first (bool) – If True, the inhibitory neurons will be in the first half of the tensor. If False, the neurons will be shuffled.

  • seed (Optional[int]) – Seed for the random number generator. If None, the seed is not set.

forward(*args, **kwargs) → Tensor

Compute the forward pass of Dale’s law. If alpha = 1 and the reference weights were not provided, they are set to 0 so that the reference term cancels out.

Parameters:
  • args – Weight matrices.

  • kwargs – kwargs of the forward pass

training: bool
class neurotorch.regularization.connectome.ExcRatioTargetRegularization(params: Iterable[Parameter] | Dict[str, Parameter], exc_target_ratio: float = 0.8, Lambda: float = 1.0, **kwargs)

Bases: BaseRegularization

Applies the function:

\[\text{loss}(x) = \lambda \cdot \sum_{i=1}^N \left|(\text{mean}(\text{sign}(x_i)) + 1) - 2\cdot\text{target} \right|\]

Where \(x\) is the list of input parameters, \(N\) is the number of parameters, \(\text{sign}(x_i)\) is the sign of the element \(x_i\), \(\text{mean}(\text{sign}(x_i))\) is the mean sign of the elements in the tensor, \(\text{target}\) is the target value, and \(\lambda\) is the weight of the regularization.

Examples:
>>> import neurotorch as nt
>>> layer = nt.WilsonCowanLayer(10, 10, force_dale_law=True)
>>> m = ExcRatioTargetRegularization(params=layer.get_sign_parameters(), Lambda=0.1, exc_target_ratio=0.9)
>>> loss = m()
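
To unpack the formula, an equivalent hand computation of the same loss in plain PyTorch (the tensor values are made up for illustration):

>>> import torch
>>> params = [torch.tensor([1.0, -2.0, 3.0, 4.0])]  # 3 of 4 entries excitatory
>>> Lambda, target = 0.1, 0.9
>>> sum(Lambda * torch.abs((torch.sign(x).mean() + 1) - 2 * target) for x in params)
tensor(0.0300)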
__init__(params: Iterable[Parameter] | Dict[str, Parameter], exc_target_ratio: float = 0.8, Lambda: float = 1.0, **kwargs)

Create a new ExcRatioTargetRegularization.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – Weights matrix to regularize.

  • exc_target_ratio (float) – Target ratio of excitatory neurons. Must be between 0 and 1.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

Keyword Arguments:

kwargs – Keyword arguments passed to BaseRegularization.

forward(*args, **kwargs) → Tensor

Compute the forward pass of the regularization.

Parameters:
  • args – args of the forward pass.

  • kwargs – kwargs of the forward pass.

Returns:

The loss of the regularization.

Return type:

torch.Tensor

get_params_exc_ratio() → List[float]

Returns the excitatory ratio of each parameter.

get_params_inh_ratio() → List[float]

Returns the inhibitory ratio of each parameter.
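
Continuing the earlier example, these getters are handy for monitoring the sign balance during training (returned values depend on the layer’s initialization):

>>> exc = m.get_params_exc_ratio()  # one entry per regularized parameter
>>> inh = m.get_params_inh_ratio()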

on_pbar_update(trainer, **kwargs) → dict

Called when the progress bar is updated.

Parameters:
  • trainer (Trainer) – The trainer.

  • kwargs – Additional arguments.

Returns:

The dict of metrics to display in the progress bar.

training: bool
class neurotorch.regularization.connectome.InhRatioTargetRegularization(params: Iterable[Parameter] | Dict[str, Parameter], inh_target_ratio: float = 0.2, Lambda: float = 1.0)

Bases: ExcRatioTargetRegularization

Applies ExcRatioTargetRegularization with a target ratio of inhibitory neurons instead of excitatory ones.

__init__(params: Iterable[Parameter] | Dict[str, Parameter], inh_target_ratio: float = 0.2, Lambda: float = 1.0)

Create a new InhRatioTargetRegularization.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – Weights matrix to regularize.

  • inh_target_ratio (float) – Target ratio of inhibitory neurons. Must be between 0 and 1.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.
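
A usage sketch mirroring the ExcRatioTargetRegularization example above (the layer is hypothetical):

>>> import neurotorch as nt
>>> layer = nt.WilsonCowanLayer(10, 10, force_dale_law=True)
>>> m = InhRatioTargetRegularization(params=layer.get_sign_parameters(), Lambda=0.1, inh_target_ratio=0.2)
>>> loss = m()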

on_pbar_update(trainer, **kwargs) → dict

Called when the progress bar is updated.

Parameters:
  • trainer (Trainer) – The trainer.

  • kwargs – Additional arguments.

Returns:

The dict of metrics to display in the progress bar.

training: bool
class neurotorch.regularization.connectome.WeightsDistance(params: Iterable[Parameter] | Dict[str, Parameter], reference_weights: Iterable[Tensor], Lambda: float = 1.0, p: int = 1)

Bases: BaseRegularization

Regularization that penalizes the distance, measured with a p-norm, between the parameters and the given reference weights.

__init__(params: Iterable[Parameter] | Dict[str, Parameter], reference_weights: Iterable[Tensor], Lambda: float = 1.0, p: int = 1)

Constructor of the WeightsDistance class.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – The parameters which are regularized.

  • reference_weights (Iterable[torch.Tensor]) – Reference weights to compare against. Must be the same size as the weights.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

  • p (int) – The order of the norm used to measure the distance between the weights and the reference weights.
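
A minimal sketch (hypothetical tensors), assuming the loss grows with the p-norm distance between each parameter and its reference:

>>> import torch
>>> from neurotorch.regularization.connectome import WeightsDistance
>>> w = torch.nn.Parameter(torch.randn(10, 10))
>>> w_ref = w.detach().clone()
>>> reg = WeightsDistance(params=[w], reference_weights=[w_ref], Lambda=0.5, p=1)
>>> loss = reg()  # stays small while w remains close to w_ref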

forward(*args, **kwargs) → Tensor

Compute the forward pass of the regularization.

Parameters:
  • args – args of the forward pass.

  • kwargs – kwargs of the forward pass.

Returns:

The loss of the regularization.

Return type:

torch.Tensor

training: bool

Module contents

class neurotorch.regularization.BaseRegularization(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, optimizer: Optimizer | None = None, **kwargs)

Bases: Module, BaseCallback

Base class for regularization.

Attributes:
  • params (torch.nn.ParameterList): The parameters which are regularized.

  • Lambda (float): The weight of the regularization. In other words, the coefficient that multiplies the loss.

__call__(*args, **kwargs) → Tensor

Call the forward pass of the regularization and scale it by the Lambda attribute.

Parameters:
  • args – args of the forward pass.

  • kwargs – kwargs of the forward pass.

Returns:

The loss of the regularization.

Return type:

torch.Tensor
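
Since forward() returns the unscaled loss and __call__ applies Lambda, a custom regularization only needs to implement forward. A minimal sketch (the subclass is hypothetical):

>>> import torch
>>> from neurotorch.regularization import BaseRegularization
>>> class SumOfSquares(BaseRegularization):
...     def forward(self, *args, **kwargs) -> torch.Tensor:
...         # Unscaled loss; __call__ multiplies it by self.Lambda.
...         return sum(p.pow(2).sum() for p in self.params)
>>> reg = SumOfSquares(params=[torch.nn.Parameter(torch.ones(3))], Lambda=0.5)
>>> loss = reg()  # 0.5 * (1 + 1 + 1)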

__init__(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, optimizer: Optimizer | None = None, **kwargs)

Constructor of the BaseRegularization class.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – The parameters which are regularized.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

forward(*args, **kwargs) → Tensor

Compute the forward pass of the regularization.

Parameters:
  • args – args of the forward pass.

  • kwargs – kwargs of the forward pass.

Returns:

The loss of the regularization.

Return type:

torch.Tensor

on_optimization_end(trainer, **kwargs)

Called when the optimization phase of an iteration ends. The optimization phase is defined as the moment where the model weights are updated.

Parameters:

trainer (Trainer) – The trainer.

Returns:

None

training: bool
class neurotorch.regularization.L1(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, **kwargs)

Bases: Lp

Regularization that applies L1 norm.

__init__(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, **kwargs)

Constructor of the L1 class.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – The parameters which are regularized.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

training: bool
class neurotorch.regularization.L2(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, **kwargs)

Bases: Lp

Regularization that applies L2 norm.

__init__(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, **kwargs)

Constructor of the L2 class.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – The parameters which are regularized.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

training: bool
class neurotorch.regularization.Lp(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, p: int = 1, **kwargs)

Bases: BaseRegularization

Regularization that applies the Lp norm.

Attributes:
  • p (int): The p parameter of the Lp norm. Example: p=1 -> L1 norm, p=2 -> L2 norm.

Note:

0D parameters are not regularized.
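
A usage sketch in a typical training step (the model, data, and loss function are hypothetical):

>>> import torch
>>> from neurotorch.regularization import Lp
>>> model = torch.nn.Linear(4, 2)
>>> reg = Lp(params=model.parameters(), Lambda=1e-4, p=2)
>>> x, y = torch.randn(8, 4), torch.randn(8, 2)
>>> loss = torch.nn.functional.mse_loss(model(x), y) + reg()
>>> loss.backward()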

__init__(params: Iterable[Parameter] | Dict[str, Parameter], Lambda: float = 1.0, p: int = 1, **kwargs)

Constructor of the Lp class.

Parameters:
  • params (Union[Iterable[torch.nn.Parameter], Dict[str, torch.nn.Parameter]]) – The parameters which are regularized.

  • Lambda (float) – The weight of the regularization. In other words, the coefficient that multiplies the loss.

  • p (int) – The p parameter of the Lp norm. Example: p=1 -> L1 norm, p=2 -> L2 norm.

forward(*args, **kwargs) → Tensor

Compute the forward pass of the regularization.

Parameters:
  • args – args of the forward pass

  • kwargs – kwargs of the forward pass

Returns:

The loss of the regularization.

Return type:

torch.Tensor

training: bool
class neurotorch.regularization.RegularizationList(regularizations: Iterable[BaseRegularization] | None = None, optimizer: Optimizer | None = None, **kwargs)

Bases: BaseRegularization

Regularization that applies a list of regularizations.

Attributes:
  • regularizations (Iterable[BaseRegularization]): The regularizations to apply.

__init__(regularizations: Iterable[BaseRegularization] | None = None, optimizer: Optimizer | None = None, **kwargs)

Constructor of the RegularizationList class.

Parameters:

regularizations (Optional[Iterable[BaseRegularization]]) – The regularizations to apply.
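
A sketch combining several regularizations into one callable (assuming the list aggregates its members’ scaled losses; the model is hypothetical):

>>> import torch
>>> from neurotorch.regularization import L1, L2, RegularizationList
>>> model = torch.nn.Linear(4, 2)
>>> reg = RegularizationList([
...     L1(params=model.parameters(), Lambda=1e-5),
...     L2(params=model.parameters(), Lambda=1e-4),
... ])
>>> loss = reg()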

forward(*args, **kwargs) → Tensor

Compute the forward pass of the regularization.

Parameters:
  • args – args of the forward pass.

  • kwargs – kwargs of the forward pass.

training: bool