LinearCyclicalScheduler#

class ignite.handlers.param_scheduler.LinearCyclicalScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1.0, start_value_mult=1.0, end_value_mult=1.0, save_history=False, param_group_index=None)[source]#

Linearly adjusts param value to ‘end_value’ for a half-cycle, then linearly adjusts it back to ‘start_value’ for a half-cycle.

Parameters
  • optimizer (torch.optim.optimizer.Optimizer) – torch optimizer or any object with attribute param_groups as a sequence.

  • param_name (str) – name of optimizer’s parameter to update.

  • start_value (float) – value at start of cycle.

  • end_value (float) – value at the middle of the cycle.

  • cycle_size (int) – length of cycle.

  • cycle_mult (float) – ratio by which to change the cycle_size at the end of each cycle (default=1.0).

  • start_value_mult (float) – ratio by which to change the start value at the end of each cycle (default=1.0).

  • end_value_mult (float) – ratio by which to change the end value at the end of each cycle (default=1.0).

  • save_history (bool) – whether to log the parameter values to engine.state.param_history, (default=False).

  • param_group_index (Optional[int]) – optimizer’s parameters group to use.

Note

If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.

Examples:

from ignite.engine import Events
from ignite.handlers.param_scheduler import LinearCyclicalScheduler

# Linearly increases the learning rate from 1e-3 to 1e-1 and back to 1e-3
# over the course of one epoch.
scheduler = LinearCyclicalScheduler(optimizer, 'lr', 1e-3, 1e-1, len(train_loader))
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
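
The param_group_index argument can be used to drive a single parameter group of an optimizer that has several. The sketch below is illustrative rather than taken from the source: the two-group optimizer, the model attributes and the value ranges are assumptions.

from ignite.engine import Events
from ignite.handlers.param_scheduler import LinearCyclicalScheduler
import torch

# Hypothetical optimizer with two parameter groups (base and head of a model).
optimizer = torch.optim.SGD([
    {"params": model.base.parameters(), "lr": 1e-3},
    {"params": model.head.parameters(), "lr": 1e-2},
])

# One scheduler per parameter group, selected via param_group_index.
scheduler_base = LinearCyclicalScheduler(optimizer, "lr", 1e-4, 1e-2,
                                         len(train_loader), param_group_index=0)
scheduler_head = LinearCyclicalScheduler(optimizer, "lr", 1e-3, 1e-1,
                                         len(train_loader), param_group_index=1)

trainer.add_event_handler(Events.ITERATION_STARTED, scheduler_base)
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler_head)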

New in version 0.5.1.

Methods

  • get_param – Method to get current optimizer's parameter values.

  • load_state_dict – Copies parameters from state_dict into this ParamScheduler.

  • plot_values – Method to plot simulated scheduled values during num_events events.

  • simulate_values – Method to simulate scheduled values during num_events events.

  • state_dict – Returns a dictionary containing a whole state of ParamScheduler.

get_param()[source]#

Method to get current optimizer’s parameter values

Returns

list of params, or scalar param

Return type

float
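
As a quick, hedged illustration (not taken from the source docs), get_param can be called directly to inspect the value the scheduler would apply at its current position in the cycle; the optimizer is assumed to exist:

scheduler = LinearCyclicalScheduler(optimizer, "lr", 1e-3, 1e-1, cycle_size=10)

# A freshly created scheduler sits at the start of its cycle,
# so the returned value is expected to equal start_value.
print(scheduler.get_param())  # expected: 0.001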

load_state_dict(state_dict)#

Copies parameters from state_dict into this ParamScheduler.

Parameters

state_dict (Mapping) – a dict containing parameters.

Return type

None
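
Since the scheduler follows the usual state_dict / load_state_dict convention, its progress through the cycle can be saved and restored. A minimal sketch, where the file name and the surrounding setup are assumptions:

import torch

# Save the scheduler's progress as part of a checkpoint.
torch.save({"scheduler": scheduler.state_dict()}, "checkpoint.pt")

# Later: rebuild the scheduler with the same arguments, then restore its state.
scheduler = LinearCyclicalScheduler(optimizer, "lr", 1e-3, 1e-1, cycle_size=10)
scheduler.load_state_dict(torch.load("checkpoint.pt")["scheduler"])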

classmethod plot_values(num_events, **scheduler_kwargs)#

Method to plot simulated scheduled values during num_events events.

This method requires the matplotlib package to be installed:

pip install matplotlib

Parameters
  • num_events (int) – number of events during the simulation.

  • scheduler_kwargs (Mapping) – parameter scheduler configuration kwargs.

Returns

matplotlib.lines.Line2D

Return type

Any

Examples

import matplotlib.pyplot as plt

plt.figure(figsize=(10, 7))
LinearCyclicalScheduler.plot_values(num_events=50, param_name='lr',
                                    start_value=1e-1, end_value=1e-3, cycle_size=10)
plt.show()

classmethod simulate_values(num_events, **scheduler_kwargs)#

Method to simulate scheduled values during num_events events.

Parameters
  • num_events (int) – number of events during the simulation.

  • scheduler_kwargs (Any) – parameter scheduler configuration kwargs.

Returns

event_index, value

Return type

List[List[int]]

Examples:

import numpy as np
import matplotlib.pyplot as plt

lr_values = np.array(LinearCyclicalScheduler.simulate_values(num_events=50, param_name='lr',
                                                             start_value=1e-1, end_value=1e-3,
                                                             cycle_size=10))

plt.plot(lr_values[:, 0], lr_values[:, 1], label="learning rate")
plt.xlabel("events")
plt.ylabel("values")
plt.legend()
plt.show()

state_dict()#

Returns a dictionary containing a whole state of ParamScheduler.

Returns

a dictionary containing a whole state of ParamScheduler

Return type

dict
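
Because the returned dictionary follows the usual PyTorch convention, the scheduler can also be passed to ignite's Checkpoint handler together with the model and optimizer. A sketch, assuming a trainer, model and optimizer already exist and that /tmp/checkpoints is a writable directory:

from ignite.engine import Events
from ignite.handlers import Checkpoint, DiskSaver

# state_dict()/load_state_dict() make the scheduler checkpointable like any other object.
to_save = {"model": model, "optimizer": optimizer, "scheduler": scheduler}
handler = Checkpoint(to_save, DiskSaver("/tmp/checkpoints", require_empty=False))
trainer.add_event_handler(Events.EPOCH_COMPLETED, handler)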