Timestep Optimization¶
The timestep \(\Delta t\) and relaxation coefficients \(\lambda_i\) are the field updater parameters that determine how many timesteps an OpenFTS simulation needs to converge or equilibrate. In some cases, a simulation converges only for specific values of \(\Delta t\) and \(\lambda_i\). Choosing them is often nontrivial, requiring many rounds of trial and error to find optimal values.
To help with this, OpenFTS offers timestep optimization through the TimestepOptimizer class. TimestepOptimizer finds optimal \(\Delta t\) and \(\lambda_i\) either comprehensively, via brute-force search, or efficiently, via Bayesian optimization. Using TimestepOptimizer follows this general procedure:
import openfts
fts = openfts.OpenFTS() # create an OpenFTS object
... # edit the OpenFTS object
optimizer = openfts.TimestepOptimizer(fts) # create a TimestepOptimizer object
optimizer.set_search_method(...) # set the search method
optimizer.run(...) # run TimestepOptimizer
optimizer.write_results() # write results to 'results.json'
optimizer.plot_results() # visualize results
The comprehensive API for interfacing with TimestepOptimizer is given below (generated via sphinx.autodoc).
- class openfts.TimestepOptimizer(fts)¶
TimestepOptimizer is a python object which finds the optimal timestep and relaxation coefficients (lambdas) for OpenFTS systems.
- Parameters:
fts – OpenFTS object specifying the system to be optimized.
- get_results()¶
Get the results of TimestepOptimizer. The results are formatted as the following list-containing tuple: (lambdas evaluated, their performance, their outcome, their operators).
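A minimal sketch of consuming the documented tuple layout follows. The values below are hypothetical placeholders, and the assumption that lower performance is better (e.g. fewer steps to converge) follows from the descriptions of quick mode and max_performance elsewhere on this page:

```python
# Hypothetical stand-in for the tuple returned by get_results():
# (lambdas evaluated, their performance, their outcome, their operators).
results = (
    [[0.5, 1.0], [1.0, 2.0]],    # lambdas evaluated (one pair per field)
    [350, 220],                  # performance of each evaluation
    ["converged", "converged"],  # outcome of each evaluation
    [None, None],                # operators (placeholder)
)

lambdas, performance, outcome, operators = results

# Pick the best-performing converged set of lambdas (lower is better here).
best = min(
    (i for i, o in enumerate(outcome) if o == "converged"),
    key=lambda i: performance[i],
)
print(lambdas[best])  # -> [1.0, 2.0]
```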
- plot_results(plot_name='fig', show=False, step=None)¶
Plot a heatmap of TimestepOptimizer’s results. This is currently only implemented for systems having 2 fields.
- Parameters:
plot_name (str) – The file name for the plot.
show (bool) – Specifies whether to show the plot to the user.
step (int) – The evaluation step at which to plot results. If not given, all results are plotted. For example, plotting at evaluation step 5 plots the first 5 results.
- run(cleanup=True, dynamic_timestep=False, freq_restart=10, from_restart=False, nprocs=1, nreplicas=1, quick=False, verbosity=1)¶
Run TimestepOptimizer which evaluates the performance and outcome of each set of lambdas specified by the search method.
- Parameters:
cleanup (bool) – Specifies whether to clean up the current directory after running TimestepOptimizer. Both TimestepOptimizer and the OpenFTS objects it runs often create large directories.
dynamic_timestep (bool) – Specifies whether to run with dynamic timesteps. When enabled, TimestepOptimizer checks for whether the optimal lambdas are near the bounds of lambdas searched. If so, TimestepOptimizer changes the timestep of the OpenFTS object and reruns.
freq_restart (float) – Frequency, in minutes, at which restart files are written.
from_restart (bool) – Specifies whether to resume from a restart file.
nprocs (int) – Number of processes to use when running. When running on multiple CPUs, beneficial parallelization occurs when nprocs is greater than 2. When running on n GPUs, beneficial parallelization occurs when there are n + 1 CPUs available.
nreplicas (int) – Number of replicas to run for each set of lambdas evaluated. The performance and outcome for a set of lambdas are the replica averages, but if a single replica diverges then the outcome is considered ‘diverged’.
quick (bool) – Specifies whether to run in ‘quick’ mode. Quick mode concurrently sets the maximum nsteps of the OpenFTS object to the nsteps of the best performing lambdas. This leads to quicker evaluations but less information about the non-optimal lambdas.
verbosity (int) – Level of detail to write to the standard output.
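The replica-averaging rule described for nreplicas above can be sketched as follows (a simplified illustration, not OpenFTS internals; the function name and values are hypothetical):

```python
def average_replicas(performances, outcomes):
    """Average replica performances; if any single replica diverged,
    the whole set of lambdas is treated as diverged."""
    if any(o == "diverged" for o in outcomes):
        return None, "diverged"
    return sum(performances) / len(performances), "converged"

# Three replicas, all converged: the average performance is reported.
print(average_replicas([300, 320, 310], ["converged"] * 3))
# -> (310.0, 'converged')

# A single diverged replica marks the whole set of lambdas as diverged.
print(average_replicas([300, 320, 310], ["converged", "diverged", "converged"]))
# -> (None, 'diverged')
```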
- set_search_method(method_name, **method_kwargs)¶
Set the search method for TimestepOptimizer.
- Parameters:
method_name (str) – The name of the search method to use. The currently available search methods are BayesOptSearch, GridSearch, and ManualSearch.
**method_kwargs –
Keyword arguments for the method being set. For example, setting GridSearch as the search method can be achieved by calling set_search_method('GridSearch', min_lambda=0.05, max_lambda=50, points_per_lambda=60).
- write_results(write_operators=True)¶
Write the current results of TimestepOptimizer to a json file.
- Parameters:
write_operators (bool) – Specifies whether or not to write operators in the json file. For OpenFTS objects with driver type ‘ComplexLangevin’ and many outputted steps, writing operators can result in a large json file.
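Since write_results() produces a json file, it can be inspected with the standard json module. The exact schema is not documented here, so the structure below is only a hypothetical placeholder:

```python
import json

# Hypothetical stand-in for a results.json written by write_results();
# the real schema may differ.
payload = {
    "lambdas": [[0.5, 1.0], [1.0, 2.0]],
    "performance": [350, 220],
    "outcome": ["converged", "converged"],
}

with open("results.json", "w") as fh:
    json.dump(payload, fh)

# Load the results back for post-processing.
with open("results.json") as fh:
    results = json.load(fh)

print(results["outcome"])  # -> ['converged', 'converged']
```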
- class manual_search.ManualSearch(kwargs, nfields)¶
ManualSearch is a search method used by TimestepOptimizer. ManualSearch evaluates the performances and outcomes of lambdas specified by an input file.
The keyword arguments for ManualSearch are given below.
- Parameters:
lambdas (list/ndarray) – A python list or 2D numpy array specifying the lambdas to evaluate.
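For a two-field system, the lambdas keyword could be built as a plain list of pairs, one relaxation coefficient per field (the values here are arbitrary illustrations):

```python
# Each inner pair is one set of relaxation coefficients to evaluate,
# one value per field (two-field system shown).
lambdas = [
    [0.5, 0.5],
    [0.5, 1.0],
    [1.0, 2.0],
]

# This list would then be passed to the search method, e.g.:
# optimizer.set_search_method('ManualSearch', lambdas=lambdas)
print(len(lambdas))  # -> 3 sets of lambdas to evaluate
```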
- class grid_search.GridSearch(kwargs, nfields)¶
GridSearch is a search method used by TimestepOptimizer. GridSearch evaluates the performances and outcomes of lambdas on a meshgrid.
The keyword arguments for GridSearch are given below.
- Parameters:
min_lambda (float) – The minimum lambda to evaluate.
max_lambda (float) – The maximum lambda to evaluate.
points_per_lambda (int) – The number of points to evaluate between and including the minimum and maximum lambda. Higher values increase the resolution of the search but also the computational cost.
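The meshgrid that GridSearch evaluates can be sketched in plain Python for a two-field system. This is a simplified illustration assuming uniform spacing between the bounds; OpenFTS may space the candidate values differently (e.g. logarithmically):

```python
min_lambda, max_lambda, points_per_lambda = 0.05, 50.0, 60

# Uniformly spaced candidate values between and including the bounds.
step = (max_lambda - min_lambda) / (points_per_lambda - 1)
values = [min_lambda + i * step for i in range(points_per_lambda)]

# Meshgrid for a two-field system: every pair of candidate values.
grid = [(a, b) for a in values for b in values]

print(len(grid))  # -> 3600 (60 * 60 pairs to evaluate)
```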
- class bayes_opt_search.BayesOptSearch(kwargs, nfields)¶
BayesOptSearch is a search method used by TimestepOptimizer. BayesOptSearch uses Bayesian optimization to efficiently select lambdas to evaluate.
The keyword arguments for BayesOptSearch are given below.
- Parameters:
lambdas_per_gpr (int) – The number of lambdas evaluated per run of the Gaussian process regressor (GPR). Lower values lead to a more efficient selection of lambdas to evaluate, at the cost of running the GPR more often. This parameter has a default value of 1.
min_lambda (float) – The minimum lambda to evaluate.
max_lambda (float) – The maximum lambda to evaluate.
nevaluations (int) – The number of lambdas to evaluate in total with BayesOptSearch.
init_type (str) – The method for choosing initial points. ‘SpaceFilling’ chooses the initial points in a stochastic space filling fashion. ‘eig_sqrt’ (square root of eigenvalue line initialization) and ‘eig’ (eigenvalue line initialization) do the same but restricted to their respective lines.
nevaluations_initial (int) – The number of lambdas to evaluate before running Bayesian optimization. These points are randomly chosen and help seed the algorithm. This parameter has a default value of 20.
points_per_lambda (int) – The number of points available to evaluate between and including the minimum and maximum lambda. Higher values increase the resolution of the search but also the computational cost.
xi (float) – Parameter which balances BayesOptSearch’s exploration (i.e. sampling points far away from known points) versus exploitation (i.e. sampling points around known points). Higher values of xi increase exploration, while lower values increase exploitation.
use_GridSearch_data (bool) – Specifies whether to use existing GridSearch data. Set to true only if such data exists.
lambdas (list/ndarray) – A python list or 2D numpy array specifying the lambdas to evaluate. If provided, BayesOptSearch chooses from these lambdas rather than the meshgrid parametrized by min_lambda, max_lambda, and points_per_lambda.
max_converged_ninit (int) – The maximum number of converged initial points. The default value is nevaluations_initial which disables this feature.
max_performance (int) – The highest performance allowed by BayesOptSearch. If some measured performance is higher than max_performance, the run is considered diverged. This feature is useful for ComplexLangevin systems where the maximum nsteps does not set a maximum performance.
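The role of xi can be illustrated with the expected-improvement acquisition function commonly used in Bayesian optimization. This is a generic sketch; whether BayesOptSearch uses exactly this acquisition function is an assumption, and all values below are hypothetical:

```python
import math

def expected_improvement(mu, sigma, best, xi):
    """Expected improvement of a candidate point with predicted mean mu
    and predicted uncertainty sigma over the best value seen so far
    (maximization convention)."""
    if sigma == 0.0:
        return 0.0
    z = (mu - best - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (mu - best - xi) * cdf + sigma * pdf

# A candidate near known points (small sigma) vs. one far away (large sigma):
near = expected_improvement(mu=1.0, sigma=0.1, best=1.0, xi=0.5)
far = expected_improvement(mu=0.8, sigma=1.0, best=1.0, xi=0.5)
# With a large xi, the uncertain (far) candidate scores higher: exploration.
```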
- get_nevaluations()¶
Get the number of lambdas left to evaluate. For BayesOptSearch, this is set by the nevaluations keyword argument.
- plot_fun(fun, fun_name, plot_name)¶
Plot a heatmap of a given function over the evaluated lambdas. This is currently only implemented for systems having 2 fields.
- Parameters:
fun – The function to plot.
fun_name (str) – The name of the function, used to label the plot.
plot_name (str) – The file name for the plot.