optimizer
- class GradientDescent(learning_rate: float, convergence_tolerance: float, max_cycle: int)[source]
Simple gradient descent optimizer.
- __init__(learning_rate: float, convergence_tolerance: float, max_cycle: int)[source]
Initialize the gradient descent optimizer.
- Parameters:
max_cycle – Maximum number of optimization cycles.
convergence_tolerance – Optimization stops if the gradient norm is below this value.
learning_rate – The learning rate.
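A minimal usage sketch: GradientDescent is assumed to be imported from this module, and sample (an OFData instance) together with an energy_functional returning an (Energies, gradient) pair are assumed to be prepared elsewhere; all values are illustrative:

    optimizer = GradientDescent(
        learning_rate=1e-2,          # step size applied to the gradient
        convergence_tolerance=1e-6,  # stop once the gradient norm drops below this
        max_cycle=500,               # hard cap on the number of cycles
    )
    # sample and energy_functional are assumed to be set up elsewhere
    energies, converged = optimizer.optimize(sample, energy_functional)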
- class Optimizer[source]
Base class for density optimization algorithms.
- abstractmethod optimize(sample: OFData, energy_functional: Callable[[OFData], tuple[Energies, Tensor]], callback: Callable | None = None, disable_pbar: bool = False) → tuple[Energies, bool][source]
Perform density optimization.
- Parameters:
sample – The OFData containing the initial coefficients.
energy_functional – Callable which returns the energy and gradient vector.
callback – Optional callback function.
disable_pbar – Whether to disable the progress bar.
- Returns:
Final energy and whether the optimization converged.
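To illustrate this contract, a schematic (non-authoritative) subclass might look as follows; how coefficients are stored on OFData and the callback signature are assumptions:

    class MyGradientOptimizer(Optimizer):
        def __init__(self, max_cycle: int, convergence_tolerance: float, learning_rate: float):
            self.max_cycle = max_cycle
            self.convergence_tolerance = convergence_tolerance
            self.learning_rate = learning_rate

        def optimize(self, sample, energy_functional, callback=None, disable_pbar=False):
            converged = False
            for _ in range(self.max_cycle):
                energies, gradient = energy_functional(sample)
                if gradient.norm() < self.convergence_tolerance:
                    converged = True
                    break
                # schematic update; the coefficient attribute name is an assumption
                sample.coeffs = sample.coeffs - self.learning_rate * gradient
                if callback is not None:
                    callback(sample)  # callback signature is an assumption
            return energies, converged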
- class SLSQP(max_cycle: int, convergence_tolerance: float, grad_scale: float, use_projected_gradient: bool = True)[source]
Wrapper for the SLSQP (Sequential Least Squares Programming) optimizer from scipy.
- __init__(max_cycle: int, convergence_tolerance: float, grad_scale: float, use_projected_gradient: bool = True)[source]
Initialize the SLSQP optimizer.
- Parameters:
max_cycle – Maximum number of optimization cycles. Note that this is not the same as the number of functional evaluations.
convergence_tolerance – Optimization stops if the gradient norm is below this value.
grad_scale – Scaling factor for the gradient vector.
use_projected_gradient – Whether to use the projected gradient for the optimization step.
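A hedged instantiation sketch with illustrative values; sample and energy_functional are assumed to be prepared as above:

    optimizer = SLSQP(
        max_cycle=100,               # scipy iterations, not functional evaluations
        convergence_tolerance=1e-6,
        grad_scale=1.0,              # rescales the gradient handed to scipy
        use_projected_gradient=True,
    )
    energies, converged = optimizer.optimize(sample, energy_functional)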
- class TorchOptimizer(torch_optimizer: Type[Optimizer], convergence_tolerance: float, max_cycle: int, **optimizer_kwargs)[source]
Wrapper for torch optimizers to be used in the optimization loop.
- __init__(torch_optimizer: Type[Optimizer], convergence_tolerance: float, max_cycle: int, **optimizer_kwargs)[source]
Initialize the torch optimizer.
- Parameters:
torch_optimizer – The torch optimizer class to use. So that it can be instantiated via hydra, the class is passed partially applied, i.e. without any arguments.
convergence_tolerance – Optimization stops if the gradient norm is below this value.
max_cycle – Maximum number of optimization cycles.
optimizer_kwargs – Additional keyword arguments for the optimizer.
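A hedged sketch using torch.optim.Adam; passing the bare class is assumed to be equivalent to the hydra-style partial mentioned above, and its keyword arguments (here lr) are forwarded via optimizer_kwargs:

    import torch

    optimizer = TorchOptimizer(
        torch_optimizer=torch.optim.Adam,  # the class, not an instance
        convergence_tolerance=1e-6,
        max_cycle=1000,
        lr=1e-2,                           # forwarded to torch.optim.Adam
    )
    energies, converged = optimizer.optimize(sample, energy_functional)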
- class TrustRegionConstrained(max_cycle: int, convergence_tolerance: float, initial_tr_radius: float, initial_constr_penalty: float, use_projected_gradient: bool = True)[source]
Wrapper for the trust region constrained optimizer from scipy.
- __init__(max_cycle: int, convergence_tolerance: float, initial_tr_radius: float, initial_constr_penalty: float, use_projected_gradient: bool = True)[source]
Initialize the trust region constrained optimizer.
- Parameters:
max_cycle – Maximum number of optimization cycles.
convergence_tolerance – Optimization stops if the gradient norm is below this value.
initial_tr_radius – Initial trust radius. Affects the size of the first steps.
initial_constr_penalty – Initial constraint penalty.
use_projected_gradient – Whether to use the projected gradient for the optimization step.
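A hedged instantiation sketch with illustrative values; sample and energy_functional are assumed to be prepared as above:

    optimizer = TrustRegionConstrained(
        max_cycle=200,
        convergence_tolerance=1e-6,
        initial_tr_radius=1.0,         # larger values allow bigger first steps
        initial_constr_penalty=1.0,
        use_projected_gradient=True,
    )
    energies, converged = optimizer.optimize(sample, energy_functional, disable_pbar=True)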
- class VectorAdam(max_cycle: int, learning_rate: float, convergence_tolerance: float, betas: tuple[float, float] = (0.9, 0.999), epsilon: float = 1e-08)[source]
Equivariant version of the Adam optimizer.
- __init__(max_cycle: int, learning_rate: float, convergence_tolerance: float, betas: tuple[float, float] = (0.9, 0.999), epsilon: float = 1e-08)[source]
Initialize the equivariant version of the Adam optimizer.
- Parameters:
max_cycle – Maximum number of optimization cycles.
learning_rate – The learning rate.
convergence_tolerance – Optimization stops if the gradient norm is below this value.
betas – Exponential decay rates for the moment estimates.
epsilon – Small value to avoid division by zero.
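A hedged instantiation sketch with illustrative values:

    optimizer = VectorAdam(
        max_cycle=1000,
        learning_rate=1e-2,
        convergence_tolerance=1e-6,
        betas=(0.9, 0.999),  # decay rates for the first and second moment estimates
        epsilon=1e-8,
    )
    energies, converged = optimizer.optimize(sample, energy_functional)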
- get_pbar_str(sample: OFData, energy: Energies, gradient_norm: float) → str[source]
Return a string for the tqdm progress bar.
- Parameters:
sample – The OFData containing the current coefficients. If the ground state energy is available, the energy difference to the ground state is calculated.
energy – The current energy.
gradient_norm – The norm of the gradient vector.
- Returns:
A string for the tqdm progress bar.
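A hedged sketch of driving a tqdm bar with this helper inside a hand-rolled loop; sample and energy_functional are assumed to be prepared as above, and the loop length is illustrative:

    from tqdm import tqdm

    with tqdm(total=100) as pbar:
        for _ in range(100):
            energy, gradient = energy_functional(sample)  # assumed to be set up elsewhere
            pbar.set_description(get_pbar_str(sample, energy, float(gradient.norm())))
            pbar.update(1)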
- scipy_functional(coeffs: ndarray, sample: OFData, energy_functional: Callable, convergence_tolerance: float, use_projected_gradient: bool, pbar: tqdm, callback: Callable | None = None) → tuple[float, ndarray][source]
Functional for scipy optimizers.
Make the energy functional compatible with scipy optimizers.
- Parameters:
coeffs – Input coefficients.
sample – OFData containing the basis functions and integrals.
energy_functional – Callable which returns the energy and gradient vector.
convergence_tolerance – Optimization stops if the gradient norm is below this value.
use_projected_gradient – Whether to use the projected gradient for the optimization step.
pbar – tqdm progress bar.
callback – Optional callback function.
- Returns:
Energy and gradient vector.
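A hedged sketch of handing this helper to scipy directly via functools.partial; jac=True tells scipy.optimize.minimize that the callable returns both the energy and its gradient. Extracting the initial coefficients from sample is an assumption here, as the SLSQP and trust-region wrappers above handle this internally:

    from functools import partial

    import numpy as np
    from scipy.optimize import minimize
    from tqdm import tqdm

    pbar = tqdm()
    fun = partial(
        scipy_functional,
        sample=sample,                        # assumed to be prepared elsewhere
        energy_functional=energy_functional,  # assumed to be prepared elsewhere
        convergence_tolerance=1e-6,
        use_projected_gradient=True,
        pbar=pbar,
    )
    x0 = np.asarray(sample.coeffs)  # attribute name is an assumption
    result = minimize(fun, x0, method="SLSQP", jac=True, options={"maxiter": 100})
    pbar.close()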