NeuroTorch

neurotorch.utils package

Submodules

neurotorch.utils.autograd module

neurotorch.utils.autograd.compute_jacobian(*, model: Module | None = None, params: Iterable[Parameter] | None = None, x: Tensor | None = None, y: Tensor | None = None, strategy: str = 'slow')

Compute the jacobian of the model with respect to the parameters.

Parameters:
  • model – The model whose jacobian is computed.

  • params – The parameters to compute the jacobian with respect to. If None, the jacobian is computed with respect to all the parameters of the model.

  • x – The input used to compute the jacobian. If None, y is used instead.

  • y – The output used to compute the jacobian. If None, x is used instead.

  • strategy – The strategy used to compute the jacobian. Can be “slow” or “fast”. At this time, only “slow” is implemented.

Returns:

The jacobian.
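
A minimal usage sketch, assuming the documented keyword-only signature; the toy Linear model and shapes are illustrative, not part of the API:

import torch
from neurotorch.utils.autograd import compute_jacobian

model = torch.nn.Linear(3, 2)
x = torch.randn(5, 3)
# Jacobian of the model output with respect to all model parameters,
# using the only implemented strategy.
jac = compute_jacobian(model=model, x=x, strategy="slow")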

neurotorch.utils.autograd.dy_dw_local(y: Tensor, params: Sequence[Parameter], grad_outputs: Tensor | None = None, retain_graph: bool = True, allow_unused: bool = True) List[Tensor]

Compute the derivative of y with respect to the parameters using torch.autograd.grad. If a parameter does not require grad, its derivative is set to zero.

Parameters:
  • y (torch.Tensor) – The tensor to differentiate.

  • params (Sequence[torch.nn.Parameter]) – The parameters to compute the derivative with respect to.

  • grad_outputs (torch.Tensor or None) – The gradient of the output. If None, use a tensor of ones.

  • retain_graph (bool) – If True, the graph used to compute the grad will be retained.

  • allow_unused (bool) – If True, allow the computation of the derivative with respect to a parameter that is not used in the computation of y.

Returns:

The derivative of y with respect to the parameters.

Return type:

List[torch.Tensor]
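
A small sketch against the documented signature; the quadratic toy function and the frozen parameter are illustrative assumptions:

import torch
from neurotorch.utils.autograd import dy_dw_local

w = torch.nn.Parameter(torch.randn(3))
frozen = torch.nn.Parameter(torch.randn(3), requires_grad=False)
y = (w ** 2).sum()
# One gradient tensor per parameter; per the docstring, the frozen
# parameter's entry is expected to come back as zeros.
grads = dy_dw_local(y, [w, frozen])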

neurotorch.utils.autograd.filter_parameters(parameters: Sequence[Parameter] | ParameterList, requires_grad: bool = True) List[Parameter]

Filter the parameters by their requires_grad attribute.

Parameters:
  • parameters – The parameters to filter.

  • requires_grad – The value of the requires_grad attribute to filter by.

Returns:

The filtered parameters.
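
A quick illustrative sketch, assuming a standard torch module as the parameter source:

import torch
from neurotorch.utils.autograd import filter_parameters

model = torch.nn.Linear(4, 2)
model.bias.requires_grad_(False)
trainable = filter_parameters(list(model.parameters()), requires_grad=True)   # [weight]
frozen = filter_parameters(list(model.parameters()), requires_grad=False)     # [bias]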

neurotorch.utils.autograd.get_contributing_params(y, top_level=True)

Get the parameters that contribute to the computation of y.

Taken from https://stackoverflow.com/questions/72301628/find-pytorch-model-parameters-that-dont-contribute-to-loss.

Parameters:
  • y – The tensor whose contributing parameters are found.

  • top_level (bool) – Whether y is a top level tensor or not.

Returns:

A generator of the parameters that contribute to the computation of y.
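
An illustrative sketch; the unused second layer is an assumption made to show what should not appear in the result:

import torch
from neurotorch.utils.autograd import get_contributing_params

used = torch.nn.Linear(4, 2)
unused = torch.nn.Linear(4, 2)  # never applied, so it should not contribute
y = used(torch.randn(1, 4))
contributing = set(get_contributing_params(y))  # expected: used.weight and used.bias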

neurotorch.utils.autograd.recursive_detach(tensors: Tensor | Tuple[Tensor] | List[Tensor])
neurotorch.utils.autograd.recursive_detach_(tensors: Tensor | Tuple[Tensor] | List[Tensor])
neurotorch.utils.autograd.vmap(f)
neurotorch.utils.autograd.zero_grad_params(params: Iterable[Parameter])

Set the gradient of the parameters to zero.

Parameters:

params – The parameters whose gradients are set to zero.
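
A minimal sketch of the expected effect, using a toy parameter:

import torch
from neurotorch.utils.autograd import zero_grad_params

w = torch.nn.Parameter(torch.randn(3))
loss = (w ** 2).sum()
loss.backward()        # w.grad is now populated
zero_grad_params([w])  # w.grad is expected to be all zeros afterwards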

neurotorch.utils.collections module

neurotorch.utils.collections.get_all_params_combinations(params_space: Dict[str, Any]) List[Dict[str, Any]]

Get all possible combinations of parameters.

Parameters:

params_space – Dictionary of parameters.

Returns:

List of dictionaries of parameters.
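
A sketch assuming the parameter space maps each name to the list of values to sweep over (grid-search style):

from neurotorch.utils.collections import get_all_params_combinations

space = {"lr": [1e-3, 1e-2], "n_units": [64, 128]}
combos = get_all_params_combinations(space)
# Expected: 4 dicts, one per (lr, n_units) pair, e.g.
# {"lr": 0.001, "n_units": 64}, {"lr": 0.001, "n_units": 128}, ...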

neurotorch.utils.collections.get_meta_name(params: Dict[str, Any])
neurotorch.utils.collections.get_meta_str(__obj: Any) str

Get the meta string of an object. The meta string is a string representation of the object that can be used as a file name. All mappings are sorted by keys before being converted to strings.

Parameters:

__obj – The object to get the meta string of.

Returns:

The meta string of the object.

Examples

>>> get_meta_str(1)
'1'
>>> get_meta_str([1, 2, 3])
'1_2_3'
>>> get_meta_str({"a": 1, "b": 2})
'a-1_b-2'
>>> get_meta_str([{"b": 2, "a": 1}, {1: 2, 3: 4}])
'a-1_b-2_1-2_3-4'
>>> class CustomObject:
...     def __repr__(self):
...         return "my_repr"
>>> get_meta_str([{"b": 2, "a": 1}, {1: 2, 3: 4}, 5, 6, 7, CustomObject()])
'a-1_b-2_1-2_3-4_5_6_7_my_repr'

neurotorch.utils.collections.hash_meta_str(__obj: Any, hash_mth: str = 'md5', out_type: str = 'hex') str | int

Hash an object to get a unique and persistent id. The hash is computed by hashing the string representation of the object, which is obtained using the function get_meta_str.

Parameters:
  • __obj – The object to hash.

  • hash_mth – The hash method to use. Must be in hashlib.algorithms_available. Default is “md5”.

  • out_type – The type of the output. Must be in [“hex”, “int”]. Default is “hex”.

Returns:

The hash of the object.
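
An illustrative sketch using the documented options; the parameter dict is an arbitrary example:

from neurotorch.utils.collections import hash_meta_str

run_id = hash_meta_str({"lr": 1e-3, "n_units": 128})      # hex digest string
run_id_int = hash_meta_str({"lr": 1e-3}, out_type="int")  # integer output
sha_id = hash_meta_str({"lr": 1e-3}, hash_mth="sha256")   # any hashlib method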

neurotorch.utils.collections.hash_params(params: Dict[str, Any])

Hash the parameters to get a unique and persistent id.

Note: This is the legacy version of hash_params, kept for compatibility with old code. Please use hash_dict instead, which is more general and offers more options.

Parameters:

params – The parameters to hash.

Returns:

The hash of the parameters.

neurotorch.utils.collections.list_insert_replace_at(__list: List, idx: int, value: Any)

Insert a value at a specific index. If there is already a value at this index, replace it.

Parameters:
  • __list – The list to modify.

  • idx – The index to insert the value.

  • value – The value to insert.
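
A short sketch of both documented behaviours; the exact handling of an index past the end is an assumption based on the docstring:

from neurotorch.utils.collections import list_insert_replace_at

lst = [0, 1, 2]
list_insert_replace_at(lst, 1, 9)  # index 1 exists, so it is replaced: [0, 9, 2]
list_insert_replace_at(lst, 3, 7)  # index 3 is free, so the value is expected to be inserted: [0, 9, 2, 7]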

neurotorch.utils.collections.list_of_callable_to_sequential(callable_list: List[Callable]) Sequential

Convert a list of callables to a torch.nn.Sequential module.

Parameters:

callable_list – The list of callables to convert.

Returns:

The Sequential module wrapping the callables.
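
A minimal sketch; since modules are themselves callables, the example sticks to modules (whether plain functions get wrapped is not documented):

import torch
from neurotorch.utils.collections import list_of_callable_to_sequential

seq = list_of_callable_to_sequential([torch.nn.Linear(4, 4), torch.nn.ReLU()])
out = seq(torch.randn(2, 4))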

neurotorch.utils.collections.mapping_update_recursively(d, u)

Taken from https://stackoverflow.com/questions/3232943/update-value-of-a-nested-dictionary-of-varying-depth.

Parameters:
  • d – The mapping to be updated.

  • u – The mapping containing the updates.

Returns:

The recursively updated mapping.
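
A sketch of the nested-merge behaviour from the linked recipe:

from neurotorch.utils.collections import mapping_update_recursively

d = {"optimizer": {"lr": 1e-3, "momentum": 0.9}}
u = {"optimizer": {"lr": 1e-2}}
mapping_update_recursively(d, u)
# Nested keys are merged rather than overwritten wholesale:
# d == {"optimizer": {"lr": 1e-2, "momentum": 0.9}}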

neurotorch.utils.collections.maybe_unpack_singleton_dict(x: dict | Any) Any

Accept a dict or any other type. If x is a dict with a single key and value, the singleton value is unpacked and returned. Otherwise, x is returned unchanged.

Parameters:

x – The input to maybe unpack.

Returns:

The unpacked value if x is a singleton dict, otherwise x.
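
A short illustrative sketch:

from neurotorch.utils.collections import maybe_unpack_singleton_dict

maybe_unpack_singleton_dict({"out": 42})       # -> 42 (singleton unpacked)
maybe_unpack_singleton_dict({"a": 1, "b": 2})  # -> {"a": 1, "b": 2} (unchanged)
maybe_unpack_singleton_dict(42)                # -> 42 (non-dict passes through)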

neurotorch.utils.collections.save_params(params: Dict[str, Any], save_path: str)

Save the parameters in a file.

Parameters:
  • params – The parameters to save.

  • save_path – The path to save the parameters.

Returns:

The path to the saved parameters.

neurotorch.utils.collections.sequence_get(__sequence: Sequence, idx: int, default: Any | None = None) Any
neurotorch.utils.collections.unpack_out_hh(out)

Unpack the output of a recurrent network.

Parameters:

out – The output of a recurrent network.

Returns:

The output of the recurrent network together with its hidden state. If there is no hidden state, the hidden state is returned as None.
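
A hedged sketch of both cases; the tensor shapes are illustrative:

import torch
from neurotorch.utils.collections import unpack_out_hh

pred, hh = unpack_out_hh((torch.randn(2, 3), None))  # (pred, hidden_state) tuple
pred, hh = unpack_out_hh(torch.randn(2, 3))          # no hidden state -> hh is None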

neurotorch.utils.collections.unpack_singleton_dict(x: dict) Any

Unpack a dictionary with a single key and value. If the dict has more than one key, a ValueError is raised.

Parameters:

x – The singleton dict to unpack.

Returns:

The single value of the dict.

neurotorch.utils.formatting module

neurotorch.utils.formatting.format_pred_batch(raw_pred_batch: Tensor | Dict[str, Tensor], y_batch: Tensor | Dict[str, Tensor])

This function formats the raw prediction batch to the same format as y_batch. For example, if y_batch is a dict, then raw_pred_batch is converted to a dict. If raw_pred_batch is a tuple or a list, it is considered to be in the format (pred, hidden_state), and only pred is kept.

Parameters:
  • raw_pred_batch – The raw prediction batch to format.

  • y_batch – The target batch whose format is matched.

Returns:

The formatted prediction batch.

neurotorch.utils.random module

neurotorch.utils.random.format_pseudo_rn_seed(seed: int | None = None) int

Format the pseudo random number generator seed. If the seed is None, return a freshly generated pseudo-random seed; otherwise, return the given seed.

Parameters:

seed (int or None) – The seed to format.

Returns:

The formatted seed.

Return type:

int
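
A minimal sketch:

from neurotorch.utils.random import format_pseudo_rn_seed

seed = format_pseudo_rn_seed(42)    # returns 42 unchanged
seed = format_pseudo_rn_seed(None)  # returns a freshly generated pseudo-random seed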

neurotorch.utils.random.set_seed(seed: int)

Set the seed of the random number generator.

Parameters:

seed – The seed to set.

neurotorch.utils.random.unitary_rn_normal_matrix(n: int, m: int, generator: Generator | None = None) Tensor

neurotorch.utils.visualise module

neurotorch.utils.visualise.legend_without_duplicate_labels_(ax: Axes)
neurotorch.utils.visualise.plot_confusion_matrix(cm, classes)

Module contents

neurotorch.utils.batchwise_temporal_decay(x: Tensor, decay: float = 0.9)

Apply a decay filter to the input tensor along the temporal dimension.

Parameters:
  • x (torch.Tensor) – Input of shape (batch_size, time_steps, …).

  • decay (float) – Decay factor of the filter.

Returns:

Filtered input of shape (batch_size, …).

neurotorch.utils.batchwise_temporal_filter(x: Tensor, decay: float = 0.9)

Apply a low-pass filter to the input tensor along the temporal dimension.

(1) \[\mathcal{F}_\alpha\left(x^t\right) = \alpha\,\mathcal{F}_\alpha\left(x^{t-1}\right) + x^t\]

Parameters:
  • x (torch.Tensor) – Input of shape (batch_size, time_steps, …).

  • decay (float) – Decay factor of the filter.

Returns:

Filtered input of shape (batch_size, time_steps, …).
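
For intuition, a plain-PyTorch reference of the recurrence in equation (1); the zero initial state is an assumption, as the initial condition is not documented:

import torch

def low_pass_reference(x: torch.Tensor, decay: float = 0.9) -> torch.Tensor:
    # F(x^t) = decay * F(x^{t-1}) + x^t, applied along the time axis.
    out = torch.zeros_like(x)
    f = torch.zeros_like(x[:, 0])
    for t in range(x.shape[1]):
        f = decay * f + x[:, t]
        out[:, t] = f
    return out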

neurotorch.utils.batchwise_temporal_recursive_filter(x, decay: float = 0.9)

Apply a low-pass filter to the input tensor along the temporal dimension recursively.

(2) \[\mathcal{F}_\alpha\left(x^t\right) = \alpha\,\mathcal{F}_\alpha\left(x^{t-1}\right) + x^t\]

Parameters:
  • x (torch.Tensor) – Input of shape (batch_size, time_steps, …).

  • decay (float) – Decay factor of the filter.

Returns:

Filtered input of shape (batch_size, time_steps, …).

neurotorch.utils.clip_tensors_norm_(tensors: Tensor | Iterable[Tensor], max_norm: float, norm_type: float = 2.0, error_if_nonfinite: bool = False) Tensor

Clips norm of an iterable of tensors.

This function is a clone of torch.nn.utils.clip_grad_norm_, with the difference that it works on tensors instead of parameters.

The norm is computed over all tensors together, as if they were concatenated into a single vector.

Parameters:
  • tensors (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have data normalized

  • max_norm (float or int) – max norm of the data

  • norm_type (float or int) – type of the used p-norm. Can be 'inf' for infinity norm.

  • error_if_nonfinite (bool) – if True, an error is thrown if the total norm of the data from parameters is nan, inf, or -inf. Default: False

Returns:

Total norm of the tensors (viewed as a single vector).
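
A minimal sketch mirroring the torch.nn.utils.clip_grad_norm_ usage it is cloned from:

import torch
from neurotorch.utils import clip_tensors_norm_

tensors = [torch.randn(10), torch.randn(5)]
# Rescales the tensors so their joint 2-norm is at most 1.0;
# returns the total norm (viewed as a single vector).
total_norm = clip_tensors_norm_(tensors, max_norm=1.0)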

neurotorch.utils.linear_decay(init_value, min_value, decay_value, current_itr)
neurotorch.utils.maybe_apply_softmax(x, dim: int = -1)

Apply softmax to x if x is not l1-normalized.

Note:

The input will be cast to a tensor by the to_tensor transform.

Parameters:
  • x – The tensor to apply softmax to.

  • dim – The dimension to apply softmax to.

Returns:

The tensor with softmax applied.
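
A short sketch of the conditional behaviour:

import torch
from neurotorch.utils import maybe_apply_softmax

logits = torch.tensor([1.0, 2.0, 3.0])
probs = maybe_apply_softmax(logits)  # not l1-normalized -> softmax is applied
same = maybe_apply_softmax(probs)    # already sums to 1 -> expected to pass through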

neurotorch.utils.ravel_compose_transforms(transform: List | Tuple | Compose | Callable | ModuleList) List[Callable]