draugr.torch_utilities.optimisation.debugging.gradients.checking.numerical_gradient.loss_grad_check
- draugr.torch_utilities.optimisation.debugging.gradients.checking.numerical_gradient.loss_grad_check(model: Module, loss_fn: callable, iinput: Tensor, target: Tensor, epsilon: float = 1e-06, error_tolerance: float = 1e-05) → None
Two-sided (central-difference) numerical approximation of the loss gradient, (loss(w + epsilon) - loss(w - epsilon)) / (2 * epsilon), for gradient checking. Note: this check does NOT work reliably; please refer to torch/autograd/gradcheck.py (torch.autograd.gradcheck) instead. See the usage sketch after the parameter list below.
- Parameters
model – the torch.nn.Module whose gradients are to be checked
loss_fn – callable computing a scalar loss from the model output and the target
iinput – input tensor fed to the model
target – target tensor passed to loss_fn
epsilon – perturbation step used in the finite-difference approximation
error_tolerance – maximum allowed deviation between analytical and numerical gradients
- Returns
None
- Return type
None
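A minimal usage sketch, assuming a toy linear model, random double-precision tensors and mse_loss (all illustrative assumptions, not part of the draugr documentation)::

    # Illustrative sketch; the model, data and loss function are assumptions.
    import torch
    from torch import nn

    from draugr.torch_utilities.optimisation.debugging.gradients.checking.numerical_gradient import (
        loss_grad_check,
    )

    model = nn.Linear(4, 1).double()  # double precision reduces finite-difference noise
    iinput = torch.randn(8, 4, dtype=torch.float64)
    target = torch.randn(8, 1, dtype=torch.float64)

    # Compare analytical gradients with their central-difference approximation.
    loss_grad_check(
        model,
        nn.functional.mse_loss,
        iinput,
        target,
        epsilon=1e-6,
        error_tolerance=1e-5,
    )

    # Recommended alternative per the note above: torch.autograd.gradcheck,
    # which performs the same two-sided check within PyTorch itself.
    x = torch.randn(8, 4, dtype=torch.float64, requires_grad=True)
    assert torch.autograd.gradcheck(
        lambda inp: nn.functional.mse_loss(model(inp), target),
        (x,),
        eps=1e-6,
        atol=1e-5,
    )

Note that torch.autograd.gradcheck differentiates with respect to inputs that have requires_grad set, so the lambda above closes over the model and exposes the input tensor as the checked argument.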