torch_functionals
Implementation of functionals in PyTorch.
The code is based on https://github.com/sail-sg/jax_xc and the libxc sources https://gitlab.com/libxc/libxc/-/blob/master/src/gga_k_apbe.c and https://gitlab.com/libxc/libxc/-/blob/master/src/maple2c/gga_exc/gga_k_apbe.c
- eval_torch_functionals(coeffs: Tensor, ao: Tensor, grid_weights: Tensor, functionals: list[str]) dict[str, tuple[Tensor, Tensor]][source]
Computes the density and evaluates the given functionals on the grid.
- Parameters:
coeffs – Coefficients of the basis functions.
ao – Atomic orbitals of the molecule in the basis.
grid_weights – Weights of the grid points on which to evaluate the functionals; must be the same grid as used for the AO calculation.
functionals – List of functionals to evaluate.
- Returns:
Dictionary mapping each functional name to a tuple containing its energy and gradient.
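The call pattern above can be illustrated with a minimal sketch. The functional below is the Thomas–Fermi kinetic functional, used here only as a stand-in for the APBE family implemented in this module, and the assumption that the density on the grid is a linear combination of the AOs weighted by `coeffs` is hypothetical, not taken from the source:

```python
import torch

# Thomas-Fermi constant C_TF = (3/10) * (3*pi^2)^(2/3)
C_TF = 0.3 * (3.0 * torch.pi ** 2) ** (2.0 / 3.0)

def tf_energy(coeffs, ao, grid_weights):
    # Hypothetical density model: rho(r_g) = sum_i c_i * phi_i(r_g),
    # clamped away from zero to keep the fractional power well defined.
    rho = torch.einsum("gi,i->g", ao, coeffs).clamp_min(1e-12)
    t = C_TF * rho ** (5.0 / 3.0)        # kinetic energy density t_TF(rho)
    return torch.sum(grid_weights * t)   # quadrature over the grid

# Toy data standing in for real AOs, coefficients, and grid weights.
ngrid, nao = 64, 4
ao = torch.rand(ngrid, nao, dtype=torch.float64)
coeffs = torch.rand(nao, dtype=torch.float64)
weights = torch.full((ngrid,), 1.0 / ngrid, dtype=torch.float64)
energy = tf_energy(coeffs, ao, weights)
```

The real functions additionally return the gradient with respect to the density, which the sketch omits.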
- eval_torch_functionals_blocked(mol: Mole, grid: Grids, coeffs: Tensor, functionals: list[str], pre_computed_aos: Tensor | None = None, max_memory: float = 4000.0)[source]
Evaluate torch functionals on the grid in a blocked fashion to reduce memory usage.
Used in label generation, where the AOs are either saved or recomputed per block depending on the memory usage.
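The blocking strategy can be sketched as follows. This is a hypothetical illustration of the pattern, not the library's implementation: the grid is split into blocks, a stand-in integrand is evaluated per block, and the results are accumulated, so only one block of AO values needs to be held (or computed) at a time:

```python
import torch

def blocked_energy(ao, grid_weights, coeffs, block_size=2000):
    """Accumulate a grid integral block by block to bound peak memory."""
    ngrid = ao.shape[0]
    total = ao.new_zeros(())
    for start in range(0, ngrid, block_size):
        stop = min(start + block_size, ngrid)
        # Stand-in integrand: density from a hypothetical linear AO expansion.
        rho = torch.einsum("gi,i->g", ao[start:stop], coeffs)
        total = total + torch.sum(grid_weights[start:stop] * rho)
    return total
```

Because the integral is a plain sum over grid points, the blocked result matches the unblocked one up to floating-point summation order.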
- eval_torch_functionals_blocked_fast(ao: Tensor, grid_weights: Tensor, coeffs: Tensor, functionals: list[str], max_memory: float = 4000.0)[source]
Evaluate torch functionals on precomputed AOs in a blocked fashion to reduce memory usage.
Used in density optimization, where the AOs are precomputed to achieve high speed. The estimated total size in MB is 8 * mol.nao * grid.size * 8 / mega_byte, where the first factor of 8 was determined empirically and the second factor of 8 is the size of a double in bytes. The maximum block size is then calculated so that this estimate fits into the given max_memory.
- Parameters:
ao – Atomic orbitals of the molecule in the basis as a tensor.
grid_weights – Weights of the grid points on which to evaluate the functionals.
coeffs – Coefficients of the basis functions.
functionals – List of functionals to compute.
max_memory – Estimate of the maximum memory in MB that the AOs should occupy; total usage may be higher. Defaults to the PySCF default of 4000 MB.
- torch_functional(density_and_gradient: Tensor, functional: str, get_gradient: bool = True, **kwargs) tuple[Tensor, Tensor | None, Tensor | None][source]
Wrapper for the computation of the kinetic energy density of the functional.
Before the functional itself is evaluated, the input is converted to torch tensors and the squared density gradient is calculated.
- Parameters:
density_and_gradient – density and density gradient tensor of shape (d, ngrid)
functional – functional to use
get_gradient – whether to return the gradient of the kinetic energy density
kwargs – additional arguments for the functional
- Returns:
Tuple of: the energy density of the functional divided by the density; the gradient of the kinetic energy density of the functional wrt. the density rho on the grid; and the gradient of the kinetic energy density of the functional wrt. \(\sigma = |\nabla \rho|^2\) on the grid.
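The shape of this return value can be illustrated with a toy wrapper. The sketch below is hypothetical: it uses a simple gradient-expansion functional t = C_TF ρ^(5/3) + σ/(72ρ) as a stand-in, builds σ from the (4, ngrid) input, and obtains the two gradients via PyTorch autograd rather than the module's actual analytic expressions:

```python
import torch

C_TF = 0.3 * (3.0 * torch.pi ** 2) ** (2.0 / 3.0)  # Thomas-Fermi constant

def toy_torch_functional(density_and_gradient, get_gradient=True):
    # Rows: (rho, drho/dx, drho/dy, drho/dz), each of length ngrid.
    rho = density_and_gradient[0].clone().requires_grad_(True)
    sigma = (density_and_gradient[1:4] ** 2).sum(dim=0).clone().requires_grad_(True)
    # Stand-in kinetic energy density: second-order gradient expansion.
    t = C_TF * rho ** (5.0 / 3.0) + sigma / (72.0 * rho)
    eps = t / rho  # energy density divided by the density
    if not get_gradient:
        return eps.detach(), None, None
    # d t / d rho and d t / d sigma on the grid, via autograd.
    vrho, vsigma = torch.autograd.grad(t.sum(), (rho, sigma))
    return eps.detach(), vrho, vsigma

dng = torch.rand(4, 16, dtype=torch.float64) + 0.1  # keep rho bounded away from 0
eps, vrho, vsigma = toy_torch_functional(dng)
```

All three outputs share the grid shape `(ngrid,)`, matching the contract described above; with `get_gradient=False` the two gradient slots are `None`.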