Optimizers

class lumopt.optimizers.generic_optimizers.ScipyOptimizers(max_iter, method='L-BFGS-B', scaling_factor=1.0, pgtol=1e-5, ftol=1e-12, target_fom=0, scale_initial_gradient_to=None)

Wrapper for the optimizers in SciPy's optimize package:

    https://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize
Some of the optimization algorithms available in the optimize package ('L-BFGS-B' in particular) can approximate the Hessian from the successive optimization steps (also called quasi-Newton optimization). While this is very powerful, the figure of merit gradient calculated from a simulation using a continuous adjoint method can be noisy. This can point quasi-Newton methods in the wrong direction, so use them with caution.
Parameters:  max_iter – maximum number of iterations; each iteration can make multiple figure of merit and gradient evaluations.
 method – string with the chosen minimization algorithm.
 scaling_factor – scalar or a vector of the same length as the optimization parameters; typically used to scale the optimization parameters so that they have magnitudes in the range zero to one.
 pgtol – projected gradient tolerance parameter 'gtol' (see the 'BFGS' or 'L-BFGS-B' documentation).
 ftol – tolerance parameter 'ftol'; stops the optimization when changes in the figure of merit are smaller than this value.
 target_fom – target value for the figure of merit; allows printing/plotting the distance of the current design from that target.
 scale_initial_gradient_to – if set, rescales the gradient so that the parameter change on the first iteration has a norm equal to this value.
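As a point of reference, here is a minimal construction sketch based on the signature above; the argument values are purely illustrative, not recommendations:

    from lumopt.optimizers.generic_optimizers import ScipyOptimizers

    # Illustrative values: 40 iterations of L-BFGS-B, with a scaling factor
    # chosen so that micron-scale parameters (in meters) become order one.
    optimizer = ScipyOptimizers(max_iter=40,
                                method='L-BFGS-B',
                                scaling_factor=1.0e6,
                                pgtol=1.0e-5,
                                ftol=1.0e-12,
                                target_fom=0.0)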

class lumopt.optimizers.fixed_step_gradient_descent.FixedStepGradientDescent(max_dx, max_iter, all_params_equal, noise_magnitude, scaling_factor)

Gradient descent with the option to add noise and a parameter scaling. The update equation is:
    \Delta p_i = \frac{\frac{dFOM}{dp_i}}{\max_j\left(\frac{dFOM}{dp_j}\right)} \, \Delta x + \mathrm{noise}_i
If all_params_equal = True, then the update equation is:
    \Delta p_i = \mathrm{sign}\left(\frac{dFOM}{dp_i}\right) \Delta x + \mathrm{noise}_i
If the optimization has many local optima, the noise term can help escape them: noise = rand([-1,1]) * noise_magnitude.
Parameters:  max_dx – maximum allowed change of a parameter per iteration.
 max_iter – maximum number of iterations to run.
 all_params_equal – if true, all parameters will be changed by +/- dx depending on the sign of their associated shape derivative.
 noise_magnitude – amplitude of the noise.
 scaling_factor – scalar or vector of the same length as the optimization parameters; typically used to scale the optimization parameters so that they have magnitudes in the range zero to one.
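To make the update rule concrete, the following is a minimal NumPy sketch of a single step, written against the equations above rather than lumopt's internal code; taking the absolute value in the normalization (so that the largest parameter moves by exactly max_dx) is our reading of the formula:

    import numpy as np

    def fixed_step_update(params, gradient, max_dx, noise_magnitude=0.0,
                          all_params_equal=False, rng=None):
        # One gradient-ascent step on the figure of merit; 'gradient'
        # holds dFOM/dp_i for each optimization parameter.
        rng = rng or np.random.default_rng()
        if all_params_equal:
            # Every parameter moves by +/- max_dx according to its gradient sign.
            step = np.sign(gradient) * max_dx
        else:
            # Normalize so the largest parameter change is exactly max_dx
            # (the absolute value in the denominator is our assumption).
            step = gradient / np.max(np.abs(gradient)) * max_dx
        noise = rng.uniform(-1.0, 1.0, size=gradient.shape) * noise_magnitude
        return params + step + noise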

class lumopt.optimizers.adaptive_gradient_descent.AdaptiveGradientDescent(max_dx, min_dx, max_iter, dx_regrowth_factor, all_params_equal, scaling_factor)

Almost identical to FixedStepGradientDescent, except that dx changes according to the following rule:
    dx = min(max_dx, dx * dx_regrowth_factor)

    while newfom < oldfom:
        dx = dx / 2
        if dx < min_dx:
            dx = min_dx
            return newfom

Parameters:  max_dx – maximum allowed change of a parameter per iteration.
 min_dx – minimum allowed step size (for the largest-changing parameter).
 dx_regrowth_factor – multiplicative factor by which dx is increased at the start of each iteration.
 max_iter – maximum number of iterations to run.
 all_params_equal – if true, all parameters will be changed by +/- dx depending on the sign of their associated shape derivative.
 scaling_factor – scalar or vector of the same length as the optimization parameters; typically used to scale the optimization parameters so that they have magnitudes in the range zero to one.
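The rule amounts to a backtracking scheme on the step size: dx regrows by dx_regrowth_factor at the start of each iteration and is halved whenever a step fails to improve the figure of merit. Below is a hedged Python sketch of that logic; evaluate_fom is a hypothetical callable standing in for one figure-of-merit evaluation and is not part of lumopt's API:

    import numpy as np

    def adaptive_step(params, gradient, dx, old_fom, evaluate_fom,
                      max_dx, min_dx, dx_regrowth_factor):
        # Sketch of the step-size rule above, shown for the
        # all_params_equal=True update direction.
        dx = min(max_dx, dx * dx_regrowth_factor)   # let the step regrow
        direction = np.sign(gradient)
        new_fom = evaluate_fom(params + direction * dx)
        while new_fom < old_fom:                    # step made things worse
            dx = dx / 2.0                           # backtrack: halve the step
            if dx < min_dx:
                dx = min_dx                         # clamp, mirroring the rule above
                break
            new_fom = evaluate_fom(params + direction * dx)
        return params + direction * dx, new_fom, dx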