optimizer

openchem_optimizer

class optimizer.openchem_optimizer.OpenChemOptimizer(params, model_params)[source]

Bases: object
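
A minimal construction sketch is shown below. It assumes that params is a pair of (optimizer class, keyword-argument dict) and that model_params are the parameters of the model being trained, which is how OpenChem models typically instantiate this wrapper; check your model configuration for the exact format. The import path follows the module name used on this page.

    import torch
    from optimizer.openchem_optimizer import OpenChemOptimizer

    model = torch.nn.Linear(128, 1)  # stand-in for an OpenChem model

    # Assumed params layout: (optimizer class, keyword dict for that optimizer).
    optimizer = OpenChemOptimizer(
        [torch.optim.Adam, {'lr': 1e-3, 'weight_decay': 1e-5}],
        model.parameters(),
    )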

get_lr()[source]

Return the current learning rate.

load_state_dict(state_dict)[source]

Load an optimizer state dict. The configuration of the existing optimizer instance (e.g., its learning rate) is preferred over the values stored in the state_dict, which makes it possible to resume training from a checkpoint with a new set of optimizer arguments.
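
A hedged sketch of the checkpoint round trip, continuing from the construction example above. It assumes the checkpoint is written with torch.save and that the dictionary keys ('model', 'optimizer') are chosen by the user:

    import torch

    # Save: capture optimizer state alongside the model weights.
    torch.save({'model': model.state_dict(),
                'optimizer': optimizer.state_dict()},
               'checkpoint.pth')

    # Resume: rebuild the model and optimizer (possibly with new optimizer
    # arguments), then restore the saved state. As noted above, the settings
    # of the freshly built optimizer (e.g. learning rate) take precedence
    # over the checkpointed ones.
    checkpoint = torch.load('checkpoint.pth')
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])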

property optimizer
property param_groups
set_lr(lr)[source]

Set the learning rate.
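
get_lr() and set_lr(lr) can be combined for manual learning-rate adjustments between epochs. A short sketch, assuming get_lr() returns a single value shared by all parameter groups and using an arbitrary decay factor:

    # Halve the learning rate, e.g. when the validation loss plateaus.
    current_lr = optimizer.get_lr()
    optimizer.set_lr(current_lr * 0.5)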

state_dict()[source]

Return the optimizer’s state dict.

step(closure=None)[source]

Performs a single optimization step.

zero_grad()[source]

Clears the gradients of all optimized parameters.
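
step() and zero_grad() follow the standard PyTorch training-loop pattern. A minimal sketch, continuing from the construction example above; the loss function and data loader are placeholders:

    criterion = torch.nn.MSELoss()

    for inputs, targets in dataloader:  # placeholder data loader
        optimizer.zero_grad()           # clear gradients from the previous step
        loss = criterion(model(inputs), targets)
        loss.backward()                 # compute gradients
        optimizer.step()                # update the model parameters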

openchem_lr_scheduler

class optimizer.openchem_lr_scheduler.OpenChemLRScheduler(params, optimizer)[source]

Bases: object

property by_iteration

If True, the scheduler steps once per iteration; otherwise it steps once per epoch (the default).

property scheduler
step()[source]

Performs a single scheduler step.
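
A hedged end-to-end sketch of the scheduler wrapper. It assumes that its params argument mirrors the optimizer wrapper, i.e. a pair of (scheduler class, keyword dict), and that the second argument is the wrapped torch optimizer exposed by OpenChemOptimizer.optimizer. How by_iteration gets set depends on the scheduler configuration, so the loop below simply honours whichever value it reports; model, criterion, dataloader and num_epochs are placeholders carried over from the sketches above:

    import torch
    from optimizer.openchem_lr_scheduler import OpenChemLRScheduler

    scheduler = OpenChemLRScheduler(
        [torch.optim.lr_scheduler.StepLR, {'step_size': 10, 'gamma': 0.5}],
        optimizer.optimizer,
    )

    for epoch in range(num_epochs):
        for inputs, targets in dataloader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
            if scheduler.by_iteration:      # step once per batch
                scheduler.step()
        if not scheduler.by_iteration:      # default: step once per epoch
            scheduler.step()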