Timestep Optimization
The timestep \(\Delta t\) and relaxation coefficients \(\lambda_i\) are the field-updater parameters that determine how many timesteps an OpenFTS simulation needs to converge or equilibrate. In some cases, a simulation requires specific values of \(\Delta t\) and \(\lambda_i\) in order to converge at all. Choosing \(\Delta t\) and \(\lambda_i\) is often nontrivial, and finding optimal values can take many rounds of trial and error.
To help with this, OpenFTS offers timestep optimization through the TimestepOptimizer class. TimestepOptimizer can find optimal \(\Delta t\) and \(\lambda_i\) either by searching comprehensively with brute-force methods or by searching efficiently with Bayesian optimization. The general procedure for using TimestepOptimizer is:
import openfts
fts = openfts.OpenFTS() # create an OpenFTS object
... # edit the OpenFTS object
optimizer = openfts.TimestepOptimizer(fts) # create a TimestepOptimizer object
optimizer.set_search_method(...) # set the search method
optimizer.run(...) # run TimestepOptimizer
optimizer.write_results() # write results to 'results.json'
optimizer.plot_results() # visualize results
The complete API for interfacing with TimestepOptimizer (generated via sphinx.autodoc) is given below.
- class openfts.TimestepOptimizer(fts)
TimestepOptimizer is a Python object which finds the optimal timestep and relaxation coefficients (lambdas) for OpenFTS systems.
- Parameters:
fts – OpenFTS object specifying the system to be optimized.
- get_results()
Get the results of TimestepOptimizer. The results are returned as a tuple of lists: (lambdas evaluated, their performance, their outcome, their operators).
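For illustration, the tuple can be unpacked and filtered as below. The values here are hypothetical stand-ins, not output from an actual run, and the interpretation of performance as "steps to converge, lower is better" is an assumption:

```python
# Hypothetical results tuple in the documented format:
# (lambdas evaluated, their performance, their outcome, their operators).
results = (
    [(0.5, 0.5), (1.0, 2.0)],   # lambdas evaluated (one tuple per field)
    [1200, 850],                # performance (assumed: steps to converge)
    ["converged", "diverged"],  # outcome of each evaluation
    [{}, {}],                   # operators recorded for each evaluation
)

lambdas, performance, outcome, operators = results

# Keep only the evaluations that converged, then take the best performer.
converged = [
    (lam, perf)
    for lam, perf, out in zip(lambdas, performance, outcome)
    if out == "converged"
]
best_lambdas, best_performance = min(converged, key=lambda pair: pair[1])
```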
- plot_results(show=False)
Plot a heatmap of TimestepOptimizer’s results. This is currently only implemented for systems having 2 fields.
- run(dynamic_timestep=False, freq_restart=10, from_restart=False, nprocs=1, nreplicas=1, quick=False, verbosity=1)
Run TimestepOptimizer which evaluates the performance and outcome of each set of lambdas specified by the search method.
- Parameters:
dynamic_timestep (bool) – Specifies whether to run with dynamic timesteps. When enabled, TimestepOptimizer checks whether the optimal lambdas lie near the bounds of the searched lambdas. If so, TimestepOptimizer changes the timestep of the OpenFTS object and reruns.
freq_restart (float) – Frequency to write restart files in minutes.
from_restart (bool) – Specifies whether to resume from a restart file.
nprocs (int) – Number of processes to use when running. Beneficial parallelization occurs when nprocs is greater than 2 for OpenFTS objects running on CPUs.
nreplicas (int) – Number of replicas to run for each set of lambdas evaluated. The performance and outcome for a set of lambdas is the replica average, but if any single replica diverges, the result is considered ‘diverged’.
quick (bool) – Specifies whether to run in ‘quick’ mode. As the optimization runs, quick mode sets the maximum nsteps of the OpenFTS object to the nsteps of the best-performing lambdas found so far. This makes evaluations quicker but yields less information about the non-optimal lambdas.
verbosity (int) – Level of detail to write to the standard output.
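The replica-averaging rule described for nreplicas can be sketched as follows. This is a standalone illustration of the stated rule, not the actual OpenFTS implementation:

```python
def average_replicas(performances, outcomes):
    """Combine replica results for one set of lambdas.

    If any single replica diverged, the whole set is considered
    'diverged'; otherwise the performance is the replica mean.
    """
    if "diverged" in outcomes:
        return None, "diverged"
    return sum(performances) / len(performances), "converged"

# Three replicas, all converged: the mean performance is reported.
perf, out = average_replicas([100, 110, 120], ["converged"] * 3)

# One replica diverged: the set as a whole is 'diverged'.
perf2, out2 = average_replicas([100, 110, 0], ["converged", "converged", "diverged"])
```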
- set_search_method(method_name, **method_kwargs)
Set the search method for TimestepOptimizer.
- Parameters:
method_name (str) – The name of the search method to use. The currently available search methods are BayesOptSearch, GridSearch, and ManualSearch.
**method_kwargs –
Keyword arguments for the method being set. For example, GridSearch can be set as the search method by calling set_search_method('GridSearch', min_lambda=0.05, max_lambda=50, points_per_lambda=60).
- write_results(write_operators=True)
Write the current results of TimestepOptimizer to a json file.
- Parameters:
write_operators (bool) – Specifies whether or not to write operators in the json file. For OpenFTS objects with driver type ‘ComplexLangevin’ and many outputted steps, writing operators can result in a large json file.
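The effect of write_operators can be sketched like this. The key names in the dictionary are assumptions for illustration, not the documented schema of results.json:

```python
import json

# Hypothetical results; real values come from a TimestepOptimizer run.
results = {
    "lambdas": [[0.5, 0.5], [1.0, 2.0]],
    "performance": [1200, 850],
    "outcome": ["converged", "diverged"],
    "operators": [{}, {}],
}

write_operators = False  # mirrors write_results(write_operators=False)
if not write_operators:
    # Omitting operators keeps the file small for long ComplexLangevin runs.
    results.pop("operators", None)

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```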
- class manual_search.ManualSearch(kwargs, nfields)
ManualSearch is a search method used by TimestepOptimizer. ManualSearch evaluates the performances and outcomes of lambdas specified by an input file.
The keyword arguments for ManualSearch are given below.
- Parameters:
lambdas_file (str) – File path specifying the lambdas to evaluate. Each line of the file corresponds to a set of lambdas delimited by spaces.
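A minimal lambdas_file for a two-field system could look like the one written below; the parsing code is an illustrative sketch of the described format (one space-delimited set of lambdas per line), not OpenFTS source:

```python
# Each line is one set of lambdas, space-delimited (one value per field).
contents = """\
0.5 0.5
1.0 2.0
5.0 5.0
"""

with open("lambdas.dat", "w") as f:
    f.write(contents)

# Parse the file back into a list of lambda sets.
with open("lambdas.dat") as f:
    lambda_sets = [tuple(float(x) for x in line.split()) for line in f if line.strip()]
```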
- class grid_search.GridSearch(kwargs, nfields)
GridSearch is a search method used by TimestepOptimizer. GridSearch evaluates the performances and outcomes of lambdas on a meshgrid.
The keyword arguments for GridSearch are given below.
- Parameters:
min_lambda (float) – The minimum lambda to evaluate.
max_lambda (float) – The maximum lambda to evaluate.
points_per_lambda (int) – The number of points to evaluate between and including the minimum and maximum lambda. Higher values increase the resolution of the search but also the computational cost.
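As a sketch of what GridSearch enumerates, the example below builds an evenly spaced axis of lambda values and takes its Cartesian product for a two-field system. The spacing (linear here) is an assumption; the actual OpenFTS implementation may space points differently:

```python
from itertools import product

def lambda_axis(min_lambda, max_lambda, points_per_lambda):
    """Evenly spaced lambda values, endpoints included."""
    step = (max_lambda - min_lambda) / (points_per_lambda - 1)
    return [min_lambda + i * step for i in range(points_per_lambda)]

axis = lambda_axis(0.05, 50.0, 60)    # 60 candidate values per lambda
grid = list(product(axis, repeat=2))  # all pairs for a 2-field system

# 60 points per lambda over 2 fields gives 60**2 = 3600 evaluations,
# which is why points_per_lambda drives the computational cost.
```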
- class bayes_opt_search.BayesOptSearch(kwargs, nfields)
BayesOptSearch is a search method used by TimestepOptimizer. BayesOptSearch uses Bayesian optimization to efficiently select lambdas to evaluate.
The keyword arguments for BayesOptSearch are given below.
- Parameters:
lambdas_per_gpr (int) – The number of lambdas evaluated per run of the Gaussian process regressor (GPR). Lower values lead to a more efficient selection of lambdas to evaluate, at the cost of running the GPR more often. This parameter has a default value of 1.
min_lambda (float) – The minimum lambda to evaluate.
max_lambda (float) – The maximum lambda to evaluate.
nevaluations (int) – The number of lambdas to evaluate in total with BayesOptSearch.
nevaluations_initial (int) – The number of lambdas to evaluate before running Bayesian optimization. These points are randomly chosen and help seed the algorithm. This parameter has a default value of 20.
points_per_lambda (int) – The number of points available to evaluate between and including the minimum and maximum lambda. Higher values increase the resolution of the search but also the computational cost.
- get_nevaluations()
Get the number of lambdas left to evaluate. For BayesOptSearch, this is set by the nevaluations keyword argument.
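The evaluation schedule these parameters imply can be sketched as below: one random seeding batch of nevaluations_initial points, then one batch of lambdas_per_gpr points per GPR run until nevaluations is reached. This is a bookkeeping sketch of the described loop, not the OpenFTS implementation:

```python
def bayes_opt_schedule(nevaluations, nevaluations_initial, lambdas_per_gpr):
    """Sketch of BayesOptSearch's evaluation schedule.

    Returns a list of (stage, batch_size) pairs: one random seeding
    batch, then one batch per run of the Gaussian process regressor.
    """
    schedule = [("random", nevaluations_initial)]
    remaining = nevaluations - nevaluations_initial
    while remaining > 0:
        batch = min(lambdas_per_gpr, remaining)
        schedule.append(("gpr", batch))
        remaining -= batch
    return schedule

# 100 total evaluations with the default seeding (20) and batch size (1):
# 20 random seeds, then 80 single-point GPR rounds.
schedule = bayes_opt_schedule(nevaluations=100, nevaluations_initial=20, lambdas_per_gpr=1)
```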