lr_scheduler: list[list[LRScheduler]] (Optional)
Description
A list of potential learning rate scheduler strategies for AutoML to explore. A learning rate scheduler strategy can be empty (in which case no learning rate scheduling is applied), or can be configured through three parameters: `name`, `interval`, and `kwargs`:
- `name`: The name of the learning rate scheduler strategy. Valid learning rate scheduler strategies are listed below; the sketch after this parameter list illustrates the warmup-then-decay shape that several of them share.
  - `constant_with_warmup`: Uses a constant learning rate, preceded by a warmup period that increases the learning rate from 0 to `base_lr`. The number of warmup steps can be specified through `kwargs` via `warmup_ratio_or_steps`.
  - `linear_with_warmup`: Decays the learning rate linearly from `base_lr` to 0, preceded by a warmup period that increases the learning rate from 0 to `base_lr`. The number of warmup steps can be specified through `kwargs` via `warmup_ratio_or_steps`.
  - `exponential`: Decays the learning rate by `gamma`, which can be specified through `kwargs`.
  - `cosine_with_warmup`: Adjusts the learning rate between `base_lr` and 0 following a cosine function, preceded by a warmup period that increases the learning rate from 0 to `base_lr`. The number of warmup steps can be specified through `kwargs` via `warmup_ratio_or_steps`.
  - `cosine_with_warmup_restarts`: Adjusts the learning rate between `base_lr` and 0 following a cosine function with several hard restarts, preceded by a warmup period that increases the learning rate from 0 to `base_lr`. The number of hard restarts can be configured through `kwargs` via `num_cycles` (3 by default); the number of warmup steps via `warmup_ratio_or_steps`.
- `interval`: Specifies whether learning rate scheduling is applied per optimization step (`step`) or per epoch (`epoch`).
- `kwargs`: Additional arguments depending on the chosen learning rate scheduler strategy, as described above; see the configuration example below.
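To make the warmup behaviour concrete, here is a minimal, self-contained sketch, not the library's implementation, of the schedule that `linear_with_warmup` describes. It assumes `warmup_ratio_or_steps` has already been resolved to an absolute step count `warmup_steps`; the function name and signature are illustrative only.

```python
def linear_with_warmup_lr(step: int, total_steps: int,
                          warmup_steps: int, base_lr: float) -> float:
    """Illustrative only: the learning rate at a given optimization step
    under a linear-with-warmup schedule."""
    if step < warmup_steps:
        # Warmup phase: rise linearly from 0 to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: fall linearly from base_lr back to 0.
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# At the end of warmup the schedule reaches its peak, then decays to 0:
# linear_with_warmup_lr(100, 1000, 100, 3e-4)  -> 3e-4
# linear_with_warmup_lr(1000, 1000, 100, 3e-4) -> 0.0
```

The other warmup-based strategies follow the same two-phase pattern, differing only in the decay phase (constant, cosine, or cosine with hard restarts).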
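Below is a hedged sketch of what a search space for this parameter could look like. The `list[list[LRScheduler]]` type and the `name`/`interval`/`kwargs` fields come from this section; the use of plain dicts to stand in for `LRScheduler` entries is an assumption made purely for illustration, and the concrete values are arbitrary.

```python
# Hypothetical search space: each inner list is one scheduler strategy
# for AutoML to explore. Plain dicts stand in for LRScheduler entries;
# the real object type is library-specific.
lr_scheduler = [
    # Empty strategy: no learning rate scheduling is applied.
    [],
    # Linear decay with a warmup covering 10% of all optimization steps.
    [{"name": "linear_with_warmup",
      "interval": "step",
      "kwargs": {"warmup_ratio_or_steps": 0.1}}],
    # Cosine schedule with 3 hard restarts and 500 warmup steps.
    [{"name": "cosine_with_warmup_restarts",
      "interval": "step",
      "kwargs": {"warmup_ratio_or_steps": 500, "num_cycles": 3}}],
    # Per-epoch exponential decay by a factor of gamma.
    [{"name": "exponential",
      "interval": "epoch",
      "kwargs": {"gamma": 0.9}}],
]
```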
Supported Task Types
- All