draugr.torch_utilities.optimisation.scheduling.lr_scheduler.WarmupMultiStepLR

class draugr.torch_utilities.optimisation.scheduling.lr_scheduler.WarmupMultiStepLR(optimiser, milestones, gamma=0.1, warmup_factor=0.3333333333333333, warmup_iters=500, last_epoch=-1)[source]

Bases: _LRScheduler

Multi-step learning-rate scheduler with warmup: the learning rate starts scaled down by warmup_factor and is ramped back to the base rate over the first warmup_iters iterations, after which it is multiplied by gamma each time the step count passes an entry in milestones.

__init__(optimiser, milestones, gamma=0.1, warmup_factor=0.3333333333333333, warmup_iters=500, last_epoch=-1)[source]
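
A minimal usage sketch, assuming an SGD optimiser and a per-iteration training loop (the model and the loop body are illustrative placeholders):

    import torch

    from draugr.torch_utilities.optimisation.scheduling.lr_scheduler import WarmupMultiStepLR

    model = torch.nn.Linear(10, 2)  # placeholder model
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

    # Ramp up over the first 500 iterations, then decay by 0.1 at each milestone.
    scheduler = WarmupMultiStepLR(optimiser, milestones=[3000, 6000], gamma=0.1, warmup_iters=500)

    for _ in range(10000):
        ...  # forward pass, loss.backward(), optimiser.step()
        scheduler.step()  # advance the schedule by one iteration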

Methods

__init__(optimiser, milestones[, gamma, ...])

get_last_lr()

Return last computed learning rate by current scheduler.

get_lr()

Compute the learning rate of each parameter group at the current step.

load_state_dict(state_dict)

Loads the scheduler's state.

print_lr(is_verbose, group, lr[, epoch])

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

step([epoch])

Advance the schedule and update each parameter group's learning rate.

get_last_lr()

Return last computed learning rate by current scheduler.

get_lr()[source]

Compute the learning rate of each parameter group at the current step.

Returns

The learning rate for each of the optimiser's parameter groups.

Return type

List[float]
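
The exact rule is not spelled out in the docstring; the standalone sketch below reproduces the conventional warmup-plus-multistep schedule that this scheduler family implements (linear warmup is an assumption here, and the function is an illustrative reimplementation, not the draugr source):

    from bisect import bisect_right

    def warmup_multistep_lr(base_lr, step, milestones, gamma=0.1,
                            warmup_factor=1.0 / 3, warmup_iters=500):
        """Illustrative reimplementation; not the draugr source."""
        if step < warmup_iters:
            # Linear ramp from warmup_factor * base_lr up to base_lr (assumed).
            alpha = step / warmup_iters
            factor = warmup_factor * (1 - alpha) + alpha
        else:
            factor = 1.0
        # Decay by gamma once for every milestone already passed.
        return base_lr * factor * gamma ** bisect_right(sorted(milestones), step)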

load_state_dict(state_dict)

Loads the scheduler's state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.
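
state_dict() and load_state_dict(state_dict) together let the schedule survive checkpointing. A short sketch, assuming the constructor arguments shown earlier (the file path is illustrative):

    import torch

    from draugr.torch_utilities.optimisation.scheduling.lr_scheduler import WarmupMultiStepLR

    optimiser = torch.optim.SGD(torch.nn.Linear(4, 1).parameters(), lr=0.1)
    scheduler = WarmupMultiStepLR(optimiser, milestones=[100, 200])

    torch.save(scheduler.state_dict(), "scheduler.pt")  # illustrative path

    # Rebuild with the same arguments, then restore the saved progress.
    restored = WarmupMultiStepLR(optimiser, milestones=[100, 200])
    restored.load_state_dict(torch.load("scheduler.pt"))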