Loss

class metatrain.utils.loss.TensorMapLoss(reduction: str = 'mean', weight: float = 1.0, gradient_weights: Dict[str, float] | None = None, sliding_factor: float | None = None, type: str | dict = 'mse')[source]

Bases: object

A loss function that operates on two metatensor.torch.TensorMap objects.

The loss is computed as the sum of the loss on the block values and the loss on the gradients, with weights specified at initialization.

At the moment, this loss function assumes that all the gradients declared at initialization are present in both TensorMaps.

Parameters:
  • reduction (str) – The reduction to apply to the loss. See torch.nn.MSELoss.

  • weight (float) – The weight to apply to the loss on the block values.

  • gradient_weights (Dict[str, float] | None) – The weights to apply to the loss on the gradients.

  • sliding_factor (float | None) – The factor to apply to the exponential moving average of the “sliding” weights. These are weights that act on different components of the loss (for example, energies and forces), based on their individual recent history. If None, no sliding weights are used in the computation of the loss.

  • type (str | dict) – The type of loss to use. This can be either “mse” or “mae”. A Huber loss can also be requested by passing a dictionary with the single key “huber”, whose value is a dictionary with the key “deltas”; the “deltas” entry is itself a dictionary mapping “values” and the gradient keys to the delta to use for the corresponding term of the Huber loss.

Returns:

The loss as a zero-dimensional torch.Tensor (with one entry).
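The quantity described above (a weighted sum of the loss on the block values and on each declared gradient) can be sketched without any framework. The helper names and the “positions” gradient key below are illustrative, not the metatrain implementation:

```python
# Hypothetical sketch of the quantity TensorMapLoss computes: a weighted sum
# of a pointwise loss on the block values and on each declared gradient.
# The dict layout and key names ("values", "positions") are illustrative.

def mse(pred, target):
    """Mean squared error over two equal-length lists of floats."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def tensormap_style_loss(pred, target, weight=1.0, gradient_weights=None):
    """weight * loss(values) + sum over gradients g of w_g * loss(g)."""
    gradient_weights = gradient_weights or {}
    total = weight * mse(pred["values"], target["values"])
    for name, w in gradient_weights.items():
        total += w * mse(pred[name], target[name])
    return total

pred = {"values": [1.0, 2.0], "positions": [0.5, 0.5]}
target = {"values": [1.0, 1.0], "positions": [0.0, 1.0]}
loss = tensormap_style_loss(pred, target, weight=1.0,
                            gradient_weights={"positions": 10.0})
# → 3.0  (0.5 on values + 10 * 0.25 on the gradient term)
```

Following the parameter description above, the Huber variant would be requested at initialization with a `type` argument shaped like `{"huber": {"deltas": {"values": 0.1, "positions": 0.05}}}` (the delta values here are placeholders).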

class metatrain.utils.loss.TensorMapDictLoss(weights: Dict[str, float], sliding_factor: float | None = None, reduction: str = 'mean', type: str | dict = 'mse')[source]

Bases: object

A loss function that operates on two dictionaries of TensorMaps (Dict[str, metatensor.torch.TensorMap]).

At initialization, the user specifies the keys to use for the loss, along with a weight for each key.

The loss is then computed as a weighted sum. Any keys that are not present in the dictionaries are ignored.

Parameters:
  • weights (Dict[str, float]) – A dictionary mapping keys to weights. This might contain gradient keys, in the form <output_name>_<gradient_name>_gradients.

  • sliding_factor (float | None) – The factor to apply to the exponential moving average of the “sliding” weights. These are weights that act on different components of the loss (for example, energies and forces), based on their individual recent history. If None, no sliding weights are used in the computation of the loss.

  • reduction (str) – The reduction to apply to the loss. See torch.nn.MSELoss.

  • type (str | dict) – The type of loss to use; see TensorMapLoss.

Returns:

The loss as a zero-dimensional torch.Tensor (with one entry).
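The dictionary-level reduction described above (a weighted sum over keys, silently skipping keys that are absent) can be sketched as follows; the key names and per-key loss values are illustrative, not metatrain code:

```python
def dict_style_loss(per_key_losses, weights):
    """Weighted sum over keys; keys absent from per_key_losses are ignored,
    mirroring how TensorMapDictLoss skips targets missing from the dicts."""
    return sum(w * per_key_losses[k]
               for k, w in weights.items()
               if k in per_key_losses)

# Gradient terms use keys of the form <output_name>_<gradient_name>_gradients.
weights = {"energy": 1.0, "energy_positions_gradients": 10.0, "charge": 0.1}
per_key_losses = {"energy": 0.5, "energy_positions_gradients": 0.25}
total = dict_style_loss(per_key_losses, weights)  # "charge" is ignored
# → 3.0
```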

metatrain.utils.loss.get_sliding_weights(losses: Dict[str, _Loss], sliding_factor: float, targets: TensorMap, predictions: TensorMap | None = None, previous_sliding_weights: Dict[str, float] | None = None) → Dict[str, float][source]

Compute the sliding weights for the loss function.

The sliding weights are computed as an exponential moving average (controlled by sliding_factor) of the absolute difference between the predictions and the targets.

Parameters:
  • losses (Dict[str, _Loss]) – The loss functions used for each target.

  • sliding_factor (float) – The factor to apply to the exponential moving average of the sliding weights.

  • targets (TensorMap) – The target values.

  • predictions (TensorMap | None) – The predicted values, if available.

  • previous_sliding_weights (Dict[str, float] | None) – The sliding weights from the previous update, if any.

Returns:

The sliding weights.

Return type:

Dict[str, float]
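The exponential-moving-average update that sliding_factor controls can be sketched as below. This is an assumption about the update rule (old weight scaled by sliding_factor, new error by 1 - sliding_factor), and the target names are placeholders, not the actual get_sliding_weights implementation:

```python
def update_sliding_weights(current_errors, previous, sliding_factor):
    """EMA of per-target absolute errors: a sketch of the role played by
    sliding_factor in get_sliding_weights (not the actual metatrain code)."""
    if previous is None:
        # No history yet: start from the current errors.
        return dict(current_errors)
    return {k: sliding_factor * previous[k] + (1.0 - sliding_factor) * e
            for k, e in current_errors.items()}

w1 = update_sliding_weights({"energy": 0.8, "forces": 0.2}, None, 0.9)
w2 = update_sliding_weights({"energy": 0.4, "forces": 0.4}, w1, 0.9)
# With sliding_factor close to 1, the weights track the recent history
# slowly: w2 stays near w1 rather than jumping to the new errors.
```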