Using the config.toml File
The config.toml file uses the TOML v0.5.0 format. Administrators can customize various aspects of a Driverless AI (DAI) environment by editing the config.toml file before starting DAI.
Note
For information on configuration security, see Configuration Security.
Configuration Override Chain
The configuration engine reads and overrides variables in the following order:
1. Driverless AI defaults: These are stored in a Python config module (h2oai/config/config.toml, internal and not visible to users).
2. config.toml: Place this file in a folder or mount it in a Docker container, and specify its path in the "DRIVERLESS_AI_CONFIG_FILE" environment variable.
3. Keystore file: Set the keystore_file parameter in the config.toml file or the environment variable "DRIVERLESS_AI_KEYSTORE_FILE" to point to a valid DAI keystore file generated using the h2oai.keystore tool. If the environment variable is set, it overrides the value of keystore_file in the config.toml file.
4. Environment variable: Configuration variables can also be provided as environment variables. They must have the prefix DRIVERLESS_AI_ followed by the variable name in all caps. For example, "authentication_method" can be provided as "DRIVERLESS_AI_AUTHENTICATION_METHOD". Setting environment variables overrides values from the keystore file.
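As a quick illustration of step 4, the highest-precedence override can be sketched with plain shell parameter expansion. The variables below are illustrative only and are not read by DAI itself:

```shell
# Value that config.toml would supply (illustrative):
AUTH_FROM_CONFIG="ldap"

# An environment variable with the DRIVERLESS_AI_ prefix takes precedence:
export DRIVERLESS_AI_AUTHENTICATION_METHOD="oidc"

# The effective value is the env var when set, else the config.toml value:
EFFECTIVE="${DRIVERLESS_AI_AUTHENTICATION_METHOD:-$AUTH_FROM_CONFIG}"
echo "$EFFECTIVE"   # oidc
```

If DRIVERLESS_AI_AUTHENTICATION_METHOD were unset, the `:-` fallback would yield the config.toml value instead.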
Docker Image Installs

1. Copy the config.toml file from inside the Docker image to your local filesystem.
# Make a config directory
mkdir config

# Copy the config.toml file to the new config directory.
docker run --runtime=nvidia \
    --pid=host \
    --rm \
    --init \
    -u `id -u`:`id -g` \
    -v `pwd`/config:/config \
    --entrypoint bash \
    h2oai/dai-ubi8-x86_64:2.3.2-cuda11.8.0.xx -c "cp /etc/dai/config.toml /config"
2. Edit the desired variables in the config.toml file. Save your changes when you are done.
3. Start DAI with the DRIVERLESS_AI_CONFIG_FILE environment variable. Ensure that this environment variable points to the location of the edited config.toml file so that the software can locate the configuration file.
docker run --runtime=nvidia \
    --pid=host \
    --init \
    --rm \
    --shm-size=2g \
    --cap-add=SYS_NICE \
    --ulimit nofile=131071:131071 \
    --ulimit nproc=16384:16384 \
    -u `id -u`:`id -g` \
    -p 12345:12345 \
    -e DRIVERLESS_AI_CONFIG_FILE="/config/config.toml" \
    -v `pwd`/config:/config \
    -v `pwd`/data:/data \
    -v `pwd`/log:/log \
    -v `pwd`/license:/license \
    -v `pwd`/tmp:/tmp \
    h2oai/dai-ubi8-x86_64:2.3.2-cuda11.8.0.xx
Native Installs

Native installs include DEBs, RPMs, and TAR SH installs.
1. Export the DRIVERLESS_AI_CONFIG_FILE environment variable, or add it to ~/.bashrc. For example:

export DRIVERLESS_AI_CONFIG_FILE="/config/config.toml"
2. Edit the desired variables in the config.toml file. Save your changes when you are done.
3. Start DAI. Note that the command used to start DAI varies depending on your install type.
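For example, a minimal set of edits to the config.toml file might look like the following (the values shown are illustrative, not recommendations):

```toml
# Port that the Driverless AI server listens on.
port = 12345

# Authentication method used by the DAI server.
authentication_method = "ldap"
```

Any variable left commented out in the file keeps its built-in default.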
Sample config.toml File
The following is a copy of the standard config.toml file included with this version of DAI. The sections that follow provide examples of setting environment variables, data connectors, authentication methods, and notifications.

##############################################################################
# DRIVERLESS AI CONFIGURATION FILE
#
# Comments:
# This file is authored in TOML (see https://github.com/toml-lang/toml)
#
# Config Override Chain
# Configuration variables for Driverless AI can be provided in several ways,
# the config engine reads and overrides variables in the following order
#
# 1. h2oai/config/config.toml
# [internal not visible to users]
#
# 2. config.toml
# [place file in a folder/mount file in docker container and provide path
# in "DRIVERLESS_AI_CONFIG_FILE" environment variable]
#
# 3. Keystore file
# [set keystore_file parameter in config.toml, or environment variable
# "DRIVERLESS_AI_KEYSTORE_FILE" to point to a valid DAI keystore file
# generated using h2oai.keystore tool
#
# 4. Environment variable
# [configuration variables can also be provided as environment variables
# they must have the prefix "DRIVERLESS_AI_" followed by
# variable name in caps e.g "authentication_method" can be provided as
# "DRIVERLESS_AI_AUTHENTICATION_METHOD"]
##############################################################################

# If the experiment is not done after this many minutes, stop feature engineering and model tuning as soon as possible and proceed with building the final modeling pipeline and deployment artifacts, independent of model score convergence or pre-determined number of iterations. Only active if not in reproducible mode. Depending on the data and experiment settings, overall experiment runtime can differ significantly from this setting.
#max_runtime_minutes = 1440

# If non-zero, then set max_runtime_minutes automatically to min(max_runtime_minutes, max(min_auto_runtime_minutes, runtime estimate)) when enable_preview_time_estimate is true, so that the preview performs a best estimate of the runtime. Set to zero to disable the runtime estimate being used to constrain runtime of experiment.
#min_auto_runtime_minutes = 60

# Whether to tune max_runtime_minutes based upon the final number of base models, so as to trigger the start of the final model in order to better ensure the entire experiment stops before max_runtime_minutes.
# Note: If the time given is short enough that tuning models are reduced below final model expectations, the final model may be shorter than expected, leading to an overall shorter experiment time.
#max_runtime_minutes_smart = true

# If the experiment is not done after this many minutes, push the abort button. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made.
#max_runtime_minutes_until_abort = 10080

# If reproducible is set, then the experiment and all artifacts are reproducible, however experiments may take arbitrarily long for a given choice of dials, features, and models.
# Setting this to False allows the experiment to complete after a fixed time, with all aspects of the model and feature building reproducible and seeded, but the overall experiment behavior will not necessarily be reproducible if later iterations would have been used in final model building.
# This should be set to True if every seeded experiment of the exact same setup needs to generate the exact same final model, regardless of duration.
#strict_reproducible_for_max_runtime = true
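# Example (illustrative values, not part of the shipped defaults): cap an
# experiment at 2 hours, hard-abort it after 4 hours, and require strict
# reproducibility. To apply, uncomment the settings:
#max_runtime_minutes = 120
#max_runtime_minutes_until_abort = 240
#strict_reproducible_for_max_runtime = true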

# Uses a model built on a large number of experiments to estimate runtime. It can be inaccurate in cases that were not trained on.
#enable_preview_time_estimate = true

# Uses a model built on a large number of experiments to estimate mojo size. It can be inaccurate in cases that were not trained on.
#enable_preview_mojo_size_estimate = true

# Uses a model built on a large number of experiments to estimate max cpu memory. It can be inaccurate in cases that were not trained on.
#enable_preview_cpu_memory_estimate = true

#enable_preview_time_estimate_rough = false

# If the experiment is not done by this time, push the abort button. Accepts time in the format given by time_abort_format (defaults to %Y-%m-%d %H:%M:%S), assuming a time zone set by time_abort_timezone (defaults to UTC). One can also give integer seconds since 1970-01-01 00:00:00 UTC. Applies to time on a DAI worker that runs experiments. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made.
# NOTE: If you start a new experiment with the same parameters, restart, or refit, this absolute time will apply to such experiments or set of leaderboard experiments.
#time_abort = ""

# Any format is allowed as accepted by datetime.strptime.
#time_abort_format = "%Y-%m-%d %H:%M:%S"

# Any time zone in format accepted by datetime.strptime.
#time_abort_timezone = "UTC"
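# Example (illustrative values): abort any experiment still running at the end
# of 2025, using the default time format and the default UTC time zone:
#time_abort = "2025-12-31 23:59:59"
#time_abort_timezone = "UTC"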

# Whether to delete all directories and files matching experiment pattern when do_delete_model is called (True),
# or whether to just delete directories (False). False can be used to preserve experiment logs that do
# not take up much space.
#
#delete_model_dirs_and_files = true

# Whether to delete all directories and files matching dataset pattern when do_delete_dataset is called (True),
# or whether to just delete directories (False). False can be used to preserve dataset logs that do
# not take up much space.
#
#delete_data_dirs_and_files = true

# # Recipe type
# ## Recipes override any GUI settings
# - **'auto'**: all models and features automatically determined by experiment settings, toml settings, and feature_engineering_effort
# - **'compliant'** : like 'auto' except:
# - *interpretability=10* (to avoid complexity, overrides GUI or python client choice for interpretability)
# - *enable_glm='on'* (rest 'off', to avoid complexity and be compatible with algorithms supported by MLI)
# - *fixed_ensemble_level=0*: Don't use any ensemble
# - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
# - *max_feature_interaction_depth=1*: interaction depth is set to 1 (no multi-feature interactions to avoid complexity)
# - *target_transformer='identity'*: for regression (to avoid complexity)
# - *check_distribution_shift_drop='off'*: Don't use distribution shift between train, valid, and test to drop features (bit risky without fine-tuning)
# - **'monotonic_gbm'** : like 'auto' except:
# - *monotonicity_constraints_interpretability_switch=1*: enable monotonicity constraints
# - *monotonicity_constraints_correlation_threshold = 0.01*: see below
# - *monotonicity_constraints_drop_low_correlation_features=true*: drop features that aren't correlated with target by at least 0.01 (specified by parameter above)
# - *fixed_ensemble_level=0*: Don't use any ensemble (to avoid complexity)
# - *included_models=['LightGBMModel']*
# - *included_transformers=['OriginalTransformer']*: only original (numeric) features will be used
# - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
# - *monotonicity_constraints_log_level='high'*
# - *autodoc_pd_max_runtime=-1*: no timeout for PDP creation in AutoDoc
# - **'kaggle'** : like 'auto' except:
# - external validation set is concatenated with train set, with target marked as missing
# - test set is concatenated with train set, with target marked as missing
# - transformers that do not use the target are allowed to fit_transform across entire train + validation + test
# - several config toml expert options open-up limits (e.g. more numerics are treated as categoricals)
# - Note: If plentiful memory, can:
# - choose kaggle mode and then change fixed_feature_interaction_depth to large negative number,
# otherwise the number of features given to the transformer is limited to 50 by default
# - choose mutation_mode = "full", so even more types of transformations are done at once per transformer
# - **'nlp_model'**: Only enables NLP models that process pure text
# - **'nlp_transformer'**: Only enables NLP transformers that process pure text, while any model type is allowed
# - **'image_model'**: Only enables Image models that process pure images
# - **'image_transformer'**: Only enables Image transformers that process pure images, while any model type is allowed
# - **'unsupervised'**: Only enables unsupervised transformers, models and scorers
# - **'gpus_max'**: Maximize use of GPUs (e.g. use XGBoost, rapids, Optuna hyperparameter search, etc.)
# - **'more_overfit_protection'**: Potentially improve overfit, esp. for small data, by disabling target encoding and making GA behave like final model for tree counts and learning rate
# - **'feature_store_mojo'**: Creates a MOJO to be used as transformer in the H2O Feature Store, to augment data on a row-by-row level based on Driverless AI's feature engineering. Only includes transformers that don't depend on the target, since features like target encoding need to be created at model fitting time to avoid data leakage. And features like lags need to be created from the raw data, they can't be computed with a row-by-row MOJO transformer.
# Each pipeline building recipe mode can be chosen, and then fine-tuned using each expert settings. Changing the
# pipeline building recipe will reset all pipeline building recipe options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of pipeline building
# recipe rules.
# If choose to do new/continued/refitted/retrained experiment from parent experiment, the recipe rules are not re-applied
# and any fine-tuning is preserved. To reset recipe behavior, one can switch between 'auto' and the desired mode. This
# way the new child experiment will use the default settings for the chosen recipe.
#recipe = "auto"
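# Example: for a maximally interpretable pipeline built from a single
# monotonic LightGBM model on original features, uncomment:
#recipe = "monotonic_gbm"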

# Whether to treat model like UnsupervisedModel, so that one specifies each scorer, pretransformer, and transformer in expert panel like one would do for supervised experiments.
# Otherwise (False), custom unsupervised models will assume the model itself specified these.
# If the unsupervised model chosen has _included_transformers, _included_pretransformers, and _included_scorers selected, this should be set to False (default) else should be set to True.
# Then if one wants the unsupervised model to only produce 1 gene-transformer, then the custom unsupervised model can have:
# _ngenes_max = 1
# _ngenes_max_by_layer = [1000, 1]
# The 1000 for the pretransformer layer just means that layer can have any number of genes. Choose 1 if you expect single instance of the pretransformer to be all one needs, e.g. consumes input features fully and produces complete useful output features.
#
#custom_unsupervised_expert_mode = false

# Whether to enable genetic algorithm for selection and hyper-parameter tuning of features and models.
# - If disabled ('off'), will go directly to final pipeline training (using default feature engineering and feature selection).
# - 'auto' is same as 'on' unless pure NLP or Image experiment.
# - "Optuna": Uses DAI genetic algorithm for feature engineering, but model hyperparameters are tuned with Optuna.
# - In the Optuna case, the scores shown in the iteration panel are the best score and trial scores.
# - Optuna mode currently only uses Optuna for XGBoost, LightGBM, and CatBoost (custom recipe).
# - If Pruner is enabled, as is default, Optuna mode disables mutations of eval_metric so pruning uses same metric across trials to compare properly.
# Currently not supported when pre_transformers or multi-layer pipeline used, which must go through at least one round of tuning or evolution.
#
#enable_genetic_algorithm = "auto"

# How much effort to spend on feature engineering (-1...10)
# Heuristic combination of various developer-level toml parameters
# -1 : auto (5, except 1 for wide data in order to limit engineering)
# 0 : keep only numeric features, only model tuning during evolution
# 1 : keep only numeric features and frequency-encoded categoricals, only model tuning during evolution
# 2 : Like #1 but instead just no Text features. Some feature tuning before evolution.
# 3 : Like #5 but only tuning during evolution. Mixed tuning of features and model parameters.
# 4 : Like #5, but slightly more focused on model tuning
# 5 : Default. Balanced feature-model tuning
# 6-7 : Like #5, but slightly more focused on feature engineering
# 8 : Like #6-7, but even more focused on feature engineering with high feature generation rate, no feature dropping even if high interpretability
# 9-10: Like #8, but no model tuning during feature evolution
#
#feature_engineering_effort = -1

# Whether to enable train/valid and train/test distribution shift detection ('auto'/'on'/'off').
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in model
# expert panel, and then only the models selected in recipe list will be used.
#
#check_distribution_shift = "auto"

# Whether to enable train/test distribution shift detection ('auto'/'on'/'off') for final model transformed features.
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in model
# expert panel, and then only the models selected in recipe list will be used.
#
#check_distribution_shift_transformed = "auto"

# Whether to drop high-shift features ('auto'/'on'/'off'). Auto disables for time series.
#check_distribution_shift_drop = "auto"

# If distribution shift detection is enabled, drop features (except ID, text, date/datetime, time, weight) for
# which shift AUC, GINI, or Spearman correlation is above this value
# (e.g. AUC of a binary classifier that predicts whether given feature value
# belongs to train or test data)
#
#drop_features_distribution_shift_threshold_auc = 0.999

# Specify whether to check leakage for each feature (``on`` or ``off``).
# If a fold column is used, this option checks leakage without using the fold column.
# By default, LightGBM Model is used for leakage detection when possible, unless it is
# turned off in the Model Expert Settings tab, in which case only the models selected with
# the ``included_models`` option are used. Note that this option is always disabled for time
# series experiments.
#
#check_leakage = "auto"

# If leakage detection is enabled,
# drop features for which AUC (R2 for regression), GINI,
# or Spearman correlation is above this value.
# If fold column present, features are not dropped,
# because leakage test applies without fold column used.
#
#drop_features_leakage_threshold_auc = 0.999

# Max number of rows x number of columns to trigger (stratified) sampling for leakage checks
#
#leakage_max_data_size = 10000000

# Specify the maximum number of features to use and show in importance tables.
# When Interpretability is set higher than 1,
# transformed or original features with lower importance than the top max_features_importance features are always removed.
# Feature importances of transformed or original features correspondingly will be pruned.
# Higher values can lead to lower performance and larger disk space used for datasets with more than 100k columns.
#
#max_features_importance = 100000

# Whether to create the Python scoring pipeline at the end of each experiment.
#make_python_scoring_pipeline = "auto"

# Whether to create the MOJO scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes.
#
#make_mojo_scoring_pipeline = "auto"
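# Example: force MOJO creation even if some models or transformers must be
# dropped, and also attempt to shrink the MOJO (can reduce accuracy; see
# reduce_mojo_size below):
#make_mojo_scoring_pipeline = "on"
#reduce_mojo_size = true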

# Whether to create a C++ MOJO based Triton scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes. Requires make_mojo_scoring_pipeline != "off".
#
#make_triton_scoring_pipeline = "off"

# Whether to automatically deploy the model to the Triton inference server at the end of each experiment.
# "remote" will deploy to the remote Triton inference server to location provided by triton_host_remote (and optionally, triton_model_repository_dir_remote).
# "off" requires manual action (Deploy wizard or Python client or manual transfer of exported Triton directory from Deploy wizard) to deploy the model to Triton.
#
#auto_deploy_triton_scoring_pipeline = "off"

# Test remote Triton deployments during creation of MOJO pipeline. Requires triton_host_remote to be configured and make_triton_scoring_pipeline to be enabled.
#triton_mini_acceptance_test_remote = true

#triton_client_timeout_testing = 300

#test_triton_when_making_mojo_pipeline_only = false

# Perform timing and accuracy benchmarks for Injected MOJO scoring vs Python scoring. This is for full scoring data, and can be slow. This also requires hard asserts. Doesn't force MOJO scoring by itself, so depends on mojo_for_predictions='on' if want full coverage.
#mojo_for_predictions_benchmark = true

# Fail hard if MOJO scoring is this many times slower than Python scoring.
#mojo_for_predictions_benchmark_slower_than_python_threshold = 10

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if have at least this many rows. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_rows = 100

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if takes at least this many seconds. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_seconds = 2.0

# Inject MOJO into fitted Python state if mini acceptance test passes, so can use C++ MOJO runtime when calling predict(enable_mojo=True, IS_SCORER=True, ...). Prerequisite for mojo_for_predictions='on' or 'auto'.
#inject_mojo_for_predictions = true

# Use MOJO for making fast low-latency predictions after experiment has finished (when applicable, for AutoDoc/Diagnostics/Predictions/MLI and standalone Python scoring via scorer.zip). For 'auto', only use MOJO if number of rows is equal or below mojo_for_predictions_max_rows. For larger frames, it can be faster to use the Python backend since used libraries are more likely already vectorized.
#mojo_for_predictions = "auto"

# For smaller datasets, the single-threaded but low latency C++ MOJO runtime can lead to significantly faster scoring times than the regular in-Driverless AI Python scoring environment. If enable_mojo=True is passed to the predict API, and the MOJO exists and is applicable, then use the MOJO runtime for datasets that have fewer or equal number of rows than this threshold. MLI/AutoDoc set enable_mojo=True by default, so this setting applies. This setting is only used if mojo_for_predictions is 'auto'.
#mojo_for_predictions_max_rows = 10000

# Batch size (in rows) for C++ MOJO predictions. Only when enable_mojo=True is passed to the predict API, and when the MOJO is applicable (e.g., fewer rows than mojo_for_predictions_max_rows). Larger values can lead to faster scoring, but use more memory.
#mojo_for_predictions_batch_size = 100

# Relative tolerance for mini MOJO acceptance test. If Python/C++ MOJO differs more than this from Python, won't use MOJO inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_rtol = 0.0

# Absolute tolerance for mini MOJO acceptance test (for regression/Shapley, will be scaled by max(abs(preds)). If Python/C++ MOJO differs more than this from Python, won't use MOJO inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_atol = 0.0

# Whether to attempt to reduce the size of the MOJO scoring pipeline. A smaller MOJO will also lead to
# less memory footprint during scoring. It is achieved by reducing some other settings like interaction depth, and
# hence can affect the predictive accuracy of the model.
#
#reduce_mojo_size = false

# Whether to create the pipeline visualization at the end of each experiment.
# Uses MOJO to show pipeline, input features, transformers, model, and outputs of model. MOJO-capable tree models show first tree.
#make_pipeline_visualization = "auto"

# Whether to create the python pipeline visualization at the end of each experiment.
# Each feature and transformer includes a variable importance at end in brackets.
# Only done when forced on, and artifacts as png files will appear in summary zip.
# Each experiment has files per individual in final population:
# 1) preprune_False_0.0 : Before final pruning, without any additional variable importance threshold pruning
# 2) preprune_True_0.0 : Before final pruning, with additional variable importance <=0.0 pruning
# 3) postprune_False_0.0 : After final pruning, without any additional variable importance threshold pruning
# 4) postprune_True_0.0 : After final pruning, with additional variable importance <=0.0 pruning
# 5) posttournament_False_0.0 : After final pruning and tournament, without any additional variable importance threshold pruning
# 6) posttournament_True_0.0 : After final pruning and tournament, with additional variable importance <=0.0 pruning
# 1-5 are done with 'on' while 'auto' only does 6 corresponding to the final post-pruned individuals.
# Even post pruning, some features have zero importance, because only those genes that have value+variance in
# variable importance of value=0.0 get pruned. GA can have many folds with positive variance
# for a gene, and those are not removed in case they are useful features for final model.
# If small mojo option is chosen (reduce_mojo_size True), then the variance of feature gain is ignored
# for which genes and features are pruned as well as for what appears in the graph.
#
#make_python_pipeline_visualization = "auto"

# Whether to create the experiment AutoDoc after end of experiment.
#
#make_autoreport = true

#max_cols_make_autoreport_automatically = 1000

#max_cols_make_pipeline_visualization_automatically = 5000

# Pass environment variables from running Driverless AI instance to Python scoring pipeline for
# deprecated models, when they are used to make predictions. Use with caution.
# If config.toml overrides are set by env vars, and they differ from what the experiment's env
# looked like when it was trained, then unexpected consequences can occur. Enable this only to
# override certain well-controlled settings like the port for H2O-3 custom recipe server.
#
#pass_env_to_deprecated_python_scoring = false

#transformer_description_line_length = -1

# Whether to measure the MOJO scoring latency at the time of MOJO creation.
#benchmark_mojo_latency = "auto"

# Max size of pipeline.mojo file (in MB) for automatic mode of MOJO scoring latency measurement
#benchmark_mojo_latency_auto_size_limit = 2048

# If MOJO creation times out at end of experiment, can still make MOJO from the GUI or from the R/Py clients (timeout doesn't apply there).
#mojo_building_timeout = 1800.0

# If MOJO visualization creation times out at end of experiment, MOJO is still created if possible within the time limit specified by mojo_building_timeout.
#mojo_vis_building_timeout = 600.0

# If MOJO creation is too slow, increase this value. Higher values can finish faster, but use more memory.
# If MOJO creation fails due to an out-of-memory error, reduce this value to 1.
# Set to -1 for all physical cores.
#
#mojo_building_parallelism = -1

# Size in bytes that all pickled and compressed base models have to satisfy to use parallel MOJO building.
# For large base models, parallel MOJO building can use too much memory.
# Only used if final_fitted_model_per_model_fold_files is true.
#
#mojo_building_parallelism_base_model_size_limit = 100000000

# Whether to show model and pipeline sizes in logs.
# If 'auto', then not done if more than 10 base models+folds, because expect not concerned with size.
#show_pipeline_sizes = "auto"

# safe: assume might be running another experiment on same node
# moderate: assume not running any other experiments or tasks on same node, but still only use physical core count
# max: assume not running anything else on node at all except the experiment
# If multinode is enabled, this option has no effect, unless worker_remote_processors=1 when it will still be applied.
# Each exclusive mode can be chosen, and then fine-tuned using each expert settings. Changing the
# exclusive mode will reset all exclusive mode related options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of exclusive mode rules.
# If choose to do new/continued/refitted/retrained experiment from parent experiment, all the mode rules are not re-applied
# and any fine-tuning is preserved. To reset mode behavior, one can switch between 'safe' and the desired mode. This
# way the new child experiment will use the default system resources for the chosen mode.
#
#exclusive_mode = "safe"
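# Example: on a node dedicated to a single experiment, let DAI assume it can
# use all system resources:
#exclusive_mode = "max"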

# Maximum number of workers for Driverless AI server pool (only 1 needed currently)
#max_workers = 1

# Max number of CPU cores to use for the whole system. Set to <= 0 to use all (physical) cores.
# If the number of ``worker_remote_processors`` is set to a value >= 3, the number of cores will be reduced
# by the ratio (``worker_remote_processors_max_threads_reduction_factor`` * ``worker_remote_processors``)
# to avoid overloading the system when too many remote tasks are processed at once.
# One can also set environment variable 'OMP_NUM_THREADS' to number of cores to use for OpenMP
# (e.g., in bash: 'export OMP_NUM_THREADS=32' and 'export OPENBLAS_NUM_THREADS=32').
#
#max_cores = 0

# Max number of CPU cores to use across all of DAI experiments and tasks.
# -1 is all available, with stall_subprocess_submission_dai_fork_threshold_count=0 means restricted to core count.
#
#max_cores_dai = -1

# Number of virtual cores per physical core (0: auto mode, >=1 use that integer value). If >=1, the reported physical cores in logs will match the virtual cores divided by this value.
#virtual_cores_per_physical_core = 0

# Minimum number of virtual cores per physical core. Only applies if virtual cores != physical cores. Can help situations like Intel i9 13900 with 24 physical cores and only 32 virtual cores. So better to limit physical cores to 16.
#min_virtual_cores_per_physical_core_if_unequal = 2

# Number of physical cores to assume are present (0: auto, >=1 use that integer value).
# If for some reason DAI does not automatically figure out physical cores correctly,
# one can override with this value. Some systems, especially virtualized, do not always provide
# correct information about the virtual cores, physical cores, sockets, etc.
#override_physical_cores = 0

# Number of virtual cores to assume are present (0: auto, >=1 use that integer value).
# If for some reason DAI does not automatically figure out virtual cores correctly,
# or only a portion of the system is to be used, one can override with this value.
# Some systems, especially virtualized, do not always provide
# correct information about the virtual cores, physical cores, sockets, etc.
#override_virtual_cores = 0

# Whether to treat data as small recipe in terms of work, by spreading many small tasks across many cores instead of forcing GPUs, for models that support it via static var _use_single_core_if_many. 'auto' looks at _use_single_core_if_many for models and data size, 'on' forces, 'off' disables.
#small_data_recipe_work = "auto"

# Stall submission of tasks if total DAI fork count exceeds count (-1 to disable, 0 for automatic of max_cores_dai)
#stall_subprocess_submission_dai_fork_threshold_count = 0

# Stall submission of tasks if system memory available is less than this threshold in percent (set to 0 to disable).
# Above this threshold, the number of workers in any pool of workers is linearly reduced down to 1 once hitting this threshold.
#
#stall_subprocess_submission_mem_threshold_pct = 2

# Whether to set automatic number of cores by physical (True) or logical (False) count.
# Using all logical cores can lead to poor performance due to cache thrashing.
#
#max_cores_by_physical = true

# Absolute limit to core count
#max_cores_limit = 200
415# Control maximum number of cores to use for a model's fit call (0 = all physical cores >= 1 that count). See also tensorflow_model_max_cores to further limit TensorFlow main models.
416#max_fit_cores = 10
417
418# Control maximum number of cores to use for a scoring across all chosen scorers (0 = auto)
419#parallel_score_max_workers = 0
420
421# Whether to use full multinode distributed cluster (True) or single-node dask (False).
422# In some cases, using entire cluster can be inefficient. E.g. several DGX nodes can be more efficient
423# if used one DGX at a time for medium-sized data.
424#
425#use_dask_cluster = true

# Control maximum number of cores to use for a model's predict call (0 = all physical cores, >=1 = that count).
#max_predict_cores = 0

# Factor by which to reduce physical cores, to use for post-model experiment tasks like autoreport, MLI, etc.
#max_predict_cores_in_dai_reduce_factor = 4

# Maximum number of cores to use for post-model experiment tasks like autoreport, MLI, etc.
#max_max_predict_cores_in_dai = 10

# Control maximum number of cores to use for a model's transform and predict calls when doing operations inside the DAI-MLI GUI and R/Py clients.
# The main experiments and other tasks like MLI and autoreport have separate queues. The main experiments run at most worker_remote_processors tasks (limited by cores if in auto mode),
# while other tasks run at most worker_local_processors tasks (limited by cores if in auto mode) at the same time,
# so many small tasks can add up. To prevent overloading the system, the defaults are conservative. However, if most of the activity involves autoreport or MLI, and no model experiments
# are running, it may be safe to increase this value to something larger than 4.
# -1 : auto mode (up to physical cores divided by 4, up to a maximum of 10)
#  0 : all physical cores
# >=1 : that count
#
#max_predict_cores_in_dai = -1

# Control number of workers used in CPU mode for tuning (0 = socket count, -1 = all physical cores, >=1 = that count). More workers are more parallel, but models learn less from each other.
#batch_cpu_tuning_max_workers = 0

# Control number of workers used in CPU mode for training (0 = socket count, -1 = all physical cores, >=1 = that count).
#cpu_max_workers = 0

# Expected maximum number of forks, used to ensure datatable doesn't overload the system. Actual use beyond this value will start to cause slow-down issues.
#assumed_simultaneous_dt_forks_munging = 3

# Expected maximum number of forks from computing statistics during ingestion, used to ensure datatable doesn't overload the system.
#assumed_simultaneous_dt_forks_stats_openblas = 1

# Maximum number of threads for datatable for munging.
#max_max_dt_threads_munging = 4

# Expected maximum number of threads for datatable, no matter how many more cores are present.
#max_max_dt_threads_stats_openblas = 8

# Maximum number of threads for datatable for reading/writing files.
#max_max_dt_threads_readwrite = 4

# Maximum parallel workers for final model building.
# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
# Can be required if some transformer or model uses more than the expected amount of memory.
# Ways to reduce final model building memory usage, e.g. set one or more of these and retrain the final model:
# 1) Increase munging_memory_overhead_factor to 10
# 2) Increase final_munging_memory_reduction_factor to 10
# 3) Lower max_workers_final_munging to 1
# 4) Lower max_workers_final_base_models to 1
# 5) Lower max_cores to, e.g., 1/2 or 1/4 of physical cores.
#max_workers_final_base_models = 0
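# The memory-reduction steps above could be combined as a fragment like the
# following (illustrative values taken from the list above):
#
#   munging_memory_overhead_factor = 10
#   final_munging_memory_reduction_factor = 10
#   max_workers_final_munging = 1
#   max_workers_final_base_models = 1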

# Maximum parallel workers for final per-model munging.
# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
# Can be required if some transformer uses more than the expected amount of memory.
#max_workers_final_munging = 0

# Minimum number of threads for datatable (and OpenMP) during data munging (per process).
# datatable is the main data munging tool used within Driverless AI (source:
# https://github.com/h2oai/datatable)
#
#min_dt_threads_munging = 1

# Like min_dt_threads_munging, but for final pipeline munging.
#min_dt_threads_final_munging = 1

# Maximum number of threads for datatable during data munging (per process) (0 = all, -1 = auto).
# If there are multiple forks, threads are distributed across forks.
#max_dt_threads_munging = -1

# Maximum number of threads for datatable during data reading and writing (per process) (0 = all, -1 = auto).
# If there are multiple forks, threads are distributed across forks.
#max_dt_threads_readwrite = -1

# Maximum number of threads for datatable stats and OpenBLAS (per process) (0 = all, -1 = auto).
# If there are multiple forks, threads are distributed across forks.
#max_dt_threads_stats_openblas = -1

# Maximum number of threads for datatable during time-series properties preview panel computations.
#max_dt_threads_do_timeseries_split_suggestion = 1

# Number of GPUs to use per experiment for training tasks. Set to -1 for all GPUs.
# An experiment will generate many different models.
# Currently num_gpus_per_experiment != -1 disables GPU locking, so it is only recommended for
# single experiments and single users.
# Ignored if GPUs are disabled or there are no GPUs on the system.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# In a multinode context when using Dask, this refers to the per-node value.
# For ImageAutoModel, this refers to the total number of GPUs used for that entire model type,
# since there is only one model type for the entire experiment.
# E.g. with 4 GPUs and 2 ImageAuto experiments to run on 2 GPUs each, set
# num_gpus_per_experiment to 2 for each experiment, and each of the 4 GPUs will be used one at a time
# by the 2 experiments, each using 2 GPUs only.
#
#num_gpus_per_experiment = -1

# Number of CPU cores per GPU. Limits the number of GPUs in order to have sufficient cores per GPU.
# Set to -1 to disable, -2 for auto mode.
# In auto mode, if lightgbm_use_gpu is 'auto' or 'off', then min_num_cores_per_gpu=1, else min_num_cores_per_gpu=2, due to LightGBM requiring more cores even when using GPUs.
#min_num_cores_per_gpu = -2

# Number of GPUs to use per model training task. Set to -1 for all GPUs.
# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model.
# Currently only applicable to the image auto pipeline building recipe or Dask models with more than one GPU or more than one node.
# Ignored if GPUs are disabled or there are no GPUs on the system.
# For ImageAutoModel, the maximum of num_gpus_per_model and num_gpus_per_experiment (all GPUs if -1) is taken.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# In a multinode context when using Dask, this refers to the per-node value.
#
#num_gpus_per_model = 1

# Number of GPUs to use for predict for models and transform for transformers when running outside of fit/fit_transform.
# -1 means all, 0 means no GPUs, >1 means that many GPUs up to the visible limit.
# If predict/transform are called in the same process as fit/fit_transform, the number of GPUs will match,
# while new processes will use this count for the number of GPUs for applicable models/transformers.
# Exception: TensorFlow and PyTorch models/transformers, and RAPIDS, always predict on GPU if GPUs exist.
# RAPIDS requires the Python scoring package to also be used on GPUs.
# In a multinode context when using Dask, this refers to the per-node value.
#
#num_gpus_for_prediction = 0

# Which gpu_id to start with.
# -1 : auto mode. E.g. 2 experiments can each set num_gpus_per_experiment to 2 and use 4 GPUs.
# If using CUDA_VISIBLE_DEVICES=... to control GPUs (preferred method), gpu_id=0 is the
# first in that restricted list of devices.
# E.g. if CUDA_VISIBLE_DEVICES='4,5' then gpu_id_start=0 will refer to
# device #4.
# E.g. from expert mode, to run 2 experiments, each on a distinct GPU out of 2 GPUs:
# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=0
# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=1
# E.g. from expert mode, to run 2 experiments, each on 4 distinct GPUs out of 8 GPUs:
# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=0
# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=4
# E.g. like just above, but now running on all 4 GPUs/model:
# Experiment#1: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=0
# Experiment#2: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=4
# If num_gpus_per_model != 1, global GPU locking is disabled
# (because the underlying algorithms don't support arbitrary GPU ids, only sequential ids),
# so the above must be set up correctly to avoid overlap across all experiments by all users.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# Note that GPU selection does not wrap, so gpu_id_start + num_gpus_per_model must be less than the number of visible GPUs.
#
#gpu_id_start = -1
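# For instance, the first 8-GPU scenario above, expressed as a per-experiment
# settings fragment (Experiment #2 would be identical except gpu_id_start = 4):
#
#   num_gpus_per_model = 1
#   num_gpus_per_experiment = 4
#   gpu_id_start = 0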

# Whether to reduce features until the model does not fail.
# Currently for non-Dask XGBoost models (i.e. GLMModel, XGBoostGBMModel, XGBoostDartModel, XGBoostRFModel),
# during normal fit or when using Optuna.
# Primarily useful for GPU OOM.
# If XGBoost runs out of GPU memory, this is detected, and
# (regardless of the setting of skip_model_failures)
# feature selection is performed using XGBoost on subsets of features.
# The dataset is progressively reduced by a factor of 2, with more models to cover all features.
# This splitting continues until no failure occurs.
# Then all sub-models are used to estimate variable importance by absolute information gain,
# in order to decide which features to include.
# Finally, a single model with the most important features
# is built using the feature count that did not lead to OOM.
# For 'auto', this option is set to 'off' when reproducible experiments are enabled,
# because the condition of running OOM can change for the same experiment seed.
# Reduction is only done on features and not on rows for the feature selection step.
#
#allow_reduce_features_when_failure = "auto"

# With allow_reduce_features_when_failure, this controls how many repeats of sub-models
# are used for feature selection. A single repeat only has each sub-model
# consider a single subset of features, while repeats shuffle which
# features are considered, allowing more chance to find important interactions.
# More repeats can lead to higher accuracy.
# The cost of this option is proportional to the repeat count.
#
#reduce_repeats_when_failure = 1

# With allow_reduce_features_when_failure, this controls the fraction of features
# treated as anchors that are fixed for all sub-models.
# Each repeat gets new anchors.
# For tuning and evolution, the probability depends
# upon any prior importance (if present) from other individuals,
# while the final model uses uniform probability for anchor features.
#
#fraction_anchor_reduce_features_when_failure = 0.1
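# A sketch of how these recovery settings might be combined when GPU OOM
# failures are frequent (illustrative values, not recommendations):
#
#   allow_reduce_features_when_failure = "on"
#   reduce_repeats_when_failure = 2
#   fraction_anchor_reduce_features_when_failure = 0.1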

# Error strings from XGBoost that are used to trigger re-fit on reduced sub-models.
# See allow_reduce_features_when_failure.
#
#xgboost_reduce_on_errors_list = "['Memory allocation error on worker', 'out of memory', 'XGBDefaultDeviceAllocatorImpl', 'invalid configuration argument', 'Requested memory']"

# Error strings from LightGBM that are used to trigger re-fit on reduced sub-models.
# See allow_reduce_features_when_failure.
#
#lightgbm_reduce_on_errors_list = "['Out of Host Memory']"

# LightGBM does not significantly benefit from GPUs, unlike other tools like XGBoost or BERT/image models.
# Each experiment will try to use all GPUs, and on systems with many cores and GPUs,
# this leads to many experiments running at once, all trying to lock the GPU for use,
# leaving the cores heavily under-utilized. So by default, DAI always uses CPU for LightGBM, unless 'on' is specified.
#lightgbm_use_gpu = "auto"

# Kaggle username for automatic submission and scoring of test set predictions.
# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
#
#kaggle_username = ""

# Kaggle key for automatic submission and scoring of test set predictions.
# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
#
#kaggle_key = ""

# Maximum number of seconds to wait for a Kaggle API call to return scores for given predictions.
#kaggle_timeout = 120

#kaggle_keep_submission = false

# If provided, extends the list to arbitrary (and potentially future) Kaggle competitions to make
# submissions for. Only used if kaggle_key and kaggle_username are provided.
# Provide a quoted comma-separated list of tuples (target column name, number of test rows, competition, metric) like this:
# kaggle_competitions='("target", 200000, "santander-customer-transaction-prediction", "AUC"), ("TARGET", 75818, "santander-customer-satisfaction", "AUC")'
#
#kaggle_competitions = ""
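# Putting the Kaggle settings together (the credentials below are
# placeholders; the competition tuple repeats the example above):
#
#   kaggle_username = "your-username"
#   kaggle_key = "your-api-key"
#   kaggle_timeout = 120
#   kaggle_competitions = '("target", 200000, "santander-customer-transaction-prediction", "AUC")'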

# Period (in seconds) of pings by the Driverless AI server to each experiment
# (in order to get logger info like disk space and memory usage).
# 0 means don't print anything.
#ping_period = 60

# Whether to enable pings of system status during DAI experiments.
#ping_autodl = true

# Minimum amount of disk space in GB needed to run experiments.
# Experiments will fail if this limit is crossed.
# This limit exists because Driverless AI needs to generate data for model training,
# feature engineering, documentation and other such processes.
#disk_limit_gb = 5

# Minimum amount of disk space in GB needed before stalling forking of new processes during an experiment.
#stall_disk_limit_gb = 1

# Minimum amount of system memory in GB needed to start experiments.
# Similarly to disk space, a certain amount of system memory is needed to run some basic
# operations.
#memory_limit_gb = 5

# Minimum number of rows needed to run experiments (values lower than 100 might not work).
# A minimum threshold is set to ensure there is enough data to create a statistically
# reliable model and avoid other small-data related failures.
#
#min_num_rows = 100

# Minimum required number of rows (in the training data) for each class label for classification problems.
#min_rows_per_class = 5

# Minimum required number of rows for each split when generating validation samples.
#min_rows_per_split = 5

# Level of reproducibility desired (for the same data and same inputs).
# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
# Supported levels are:
# reproducibility_level = 1 for the same experiment results as long as same O/S, same CPU(s) and same GPU(s)
# reproducibility_level = 2 for the same experiment results as long as same O/S, same CPU architecture and same GPU architecture
# reproducibility_level = 3 for the same experiment results as long as same O/S, same CPU architecture, not using GPUs
# reproducibility_level = 4 for the same experiment results as long as same O/S (best effort)
#
#reproducibility_level = 1

# Seed for the random number generator to make experiments reproducible, to a certain reproducibility level (see above).
# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
#
#seed = 1234
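# For the strictest reproducibility on identical hardware, the two settings
# above might be combined as follows (assuming reproducible mode is enabled
# in the GUI or via the client API):
#
#   reproducibility_level = 1
#   seed = 1234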

# The list of values that should be interpreted as missing values during data import.
# This applies to both numeric and string columns. Note that the dataset must be reloaded after applying changes to this config via the expert settings.
# Also note that 'nan' is always interpreted as a missing value for numeric columns.
#missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'unknown', 'inf', '-inf', '1.7976931348623157e+308', '-1.7976931348623157e+308']"
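# For example, to also treat the strings 'missing' and '-999' as missing
# (hypothetical additions; the rest of the list is abbreviated here):
#
#   missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'missing', '-999']"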

# Whether to impute (to the mean) for GLM on training data.
#glm_nan_impute_training_data = false

# Whether to impute (to the mean) for GLM on validation data.
#glm_nan_impute_validation_data = false

# Whether to impute (to the mean) for GLM on prediction data (required for consistency with MOJO).
#glm_nan_impute_prediction_data = true

# [DEPRECATED] For TensorFlow, what numerical value to give to missing values, where numeric values are standardized.
# So 0 is the center of the distribution, and for a Normal distribution, +-5 is 5 standard deviations away from the center.
# In many cases, an out-of-bounds value is a good way to represent missings, but in some cases the mean (0) may be better.
#tf_nan_impute_value = -5

# Internal threshold for number of rows x number of columns to trigger certain statistical
# techniques (small-data recipe, like including one-hot encoding for all model types, and a smaller learning rate)
# to increase model accuracy.
#statistical_threshold_data_size_small = 100000

# Internal threshold for number of rows x number of columns to trigger certain statistical
# techniques (fewer genes created, removal of high max_depth for tree models, etc.) that can speed up modeling.
# Also controls the maximum rows used in training the final model,
# by sampling statistical_threshold_data_size_large / number of columns rows.
#statistical_threshold_data_size_large = 500000000

# Internal threshold for number of rows x number of columns to trigger sampling for auxiliary data uses,
# like imbalanced data set detection and bootstrap scoring sample size and iterations.
#aux_threshold_data_size_large = 10000000

# Internal threshold for the set-based method for sampling without replacement.
# Can be 10x faster than the np_random_choice internal optimized method, and
# up to 30x faster than np.random.choice, e.g. to sample 250k rows from 1B rows.
#set_method_sampling_row_limit = 5000000

# Internal threshold for number of rows x number of columns to trigger certain changes in performance:
# fewer threads if beyond the large value, to help avoid OOM or unnecessary slowdowns,
# and fewer threads if lower than the small value, to avoid excess forking of tasks.
#performance_threshold_data_size_small = 100000

# Internal threshold for number of rows x number of columns to trigger certain changes in performance:
# fewer threads if beyond the large value, to help avoid OOM or unnecessary slowdowns,
# and fewer threads if lower than the small value, to avoid excess forking of tasks.
#performance_threshold_data_size_large = 100000000

# Threshold for number of rows x number of columns to trigger GPU to be the default for models like XGBoost GBM.
#gpu_default_threshold_data_size_large = 1000000

# Maximum fraction of mismatched columns to allow between train and either valid or test. Beyond this value the experiment will fail with an invalid data error.
#max_relative_cols_mismatch_allowed = 0.5

# Enable various rules to handle wide (number of columns > number of rows) datasets ('auto'/'on'/'off'). Setting 'on' forces the rules to be enabled regardless of columns.
#enable_wide_rules = "auto"

# If columns > wide_factor * rows, then enable wide rules if 'auto'. For columns > rows, random forest is always enabled.
#wide_factor = 5.0

# Maximum number of columns to start an experiment. This threshold exists to constrain the complexity and the length of Driverless AI's processes.
#max_cols = 10000000

# Largest number of rows to use for column stats, otherwise sample randomly.
#max_rows_col_stats = 1000000

# Largest number of rows to use for cv-in-cv for target encoding when doing the gini scoring test.
#max_rows_cv_in_cv_gini = 100000

# Largest number of rows to use for the constant model fit, otherwise sample randomly.
#max_rows_constant_model = 1000000

# Largest number of rows to use for final ensemble base model fold scores, otherwise sample randomly.
#max_rows_final_ensemble_base_model_fold_scores = 1000000

# Largest number of rows to use for the final ensemble blender for regression and binary (scaled down linearly by the number of classes for multiclass for >= 10 classes), otherwise sample randomly.
#max_rows_final_blender = 1000000

# Smallest number of rows (or number of rows, if less than this) to use for the final ensemble blender.
#min_rows_final_blender = 10000

# Largest number of rows to use for the final training score (no holdout), otherwise sample randomly.
#max_rows_final_train_score = 5000000

# Largest number of rows to use for final ROC, lift-gains, confusion matrix, residual, and actual vs. predicted plots. Otherwise sample randomly.
#max_rows_final_roccmconf = 1000000

# Largest number of rows to use for final holdout scores, otherwise sample randomly.
#max_rows_final_holdout_score = 5000000

# Largest number of rows to use for final holdout bootstrap scores, otherwise sample randomly.
#max_rows_final_holdout_bootstrap_score = 1000000

# Whether to obtain permutation feature importance on original features for reporting in logs and the summary zip file
# (as files with pattern fs_*.json or fs_*.tab.txt).
# This computes feature importance on a single un-tuned model
# (typically LightGBM with pre-defined un-tuned hyperparameters)
# and a simple set of features (the encoding typically is frequency encoding or target encoding).
# Features with low importance are automatically dropped if there are many original features,
# or, if interpretability is high enough, a model with feature selection by permutation importance is created in order to see if it gives a better score.
# One can manually drop low-importance features, but this can be risky, as transformers or hyperparameters might recover
# their usefulness.
# Permutation importance is obtained by:
# 1) Transforming categoricals to frequency or target encoding features.
# 2) Fitting that model on many folds, different data sizes, and slightly varying hyperparameters.
# 3) Predicting with that model for each feature, where each feature has its data shuffled.
# 4) Computing the score on each shuffled prediction.
# 5) Computing the difference between the unshuffled score and the shuffled score to arrive at a delta score.
# 6) The delta score becomes the variable importance once normalized by the maximum.
# Positive delta scores indicate the feature helped the model score,
# while negative delta scores indicate the feature hurt the model score.
# The normalized scores are stored in the fs_normalized_* files in the summary zip.
# The unnormalized scores (actual delta scores) are stored in the fs_unnormalized_* files in the summary zip.
# AutoDoc has similar functionality for providing permutation importance on original features:
# it takes the specific final model of an experiment and runs the training data set through permutation importance to get original importance,
# so shuffling of original features is performed and the full pipeline is computed on each shuffled set of original features.
#
#orig_features_fs_report = false

# Maximum number of rows when doing permutation feature importance, reduced by (stratified) random sampling.
#
#max_rows_fs = 500000

#max_rows_leak = 100000

# How many workers to use for feature selection by permutation for the predict phase.
# (0 = auto, > 0: min of DAI value and this value, < 0: exactly the negative of this value)
#
#max_workers_fs = 0

# How many workers to use for shift and leakage checks if using LightGBM on CPU.
# (0 = auto, > 0: min of DAI value and this value, < 0: exactly the negative of this value)
#
#max_workers_shift_leak = 0

# Maximum number of columns selected out of the original set of columns, using feature selection.
# The selection is based upon how well target encoding (or frequency encoding if not available) performs on categoricals and numerics treated as categoricals.
# This is useful to reduce the final model complexity. First the best
# [max_orig_cols_selected] are found through feature selection methods, and then
# these features are used in feature evolution (to derive other features) and in modelling.
#
#max_orig_cols_selected = 10000000

# Maximum number of numeric columns selected, above which feature selection is done.
# Same as max_orig_cols_selected but for numeric columns.
#max_orig_numeric_cols_selected = 10000000

#max_orig_nonnumeric_cols_selected_default = 300

# Maximum number of non-numeric columns selected, above which feature selection is done on all features. Same as max_orig_numeric_cols_selected but for categorical columns.
# If set to -1, then auto mode, which uses max_orig_nonnumeric_cols_selected_default, but for small data this can be increased up to 10x larger.
#
#max_orig_nonnumeric_cols_selected = -1

# The factor times max_orig_cols_selected by which column selection is based upon no target encoding and no treating numerical as categorical,
# in order to limit the performance cost of feature engineering.
#max_orig_cols_selected_simple_factor = 2

# Like max_orig_cols_selected, but the column count above which a special individual with original columns reduced is added.
#
#fs_orig_cols_selected = 10000000

# Like max_orig_numeric_cols_selected, but applicable to the special individual with original columns reduced.
# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
#
#fs_orig_numeric_cols_selected = 10000000

# Like max_orig_nonnumeric_cols_selected, but applicable to the special individual with original columns reduced.
# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
#
#fs_orig_nonnumeric_cols_selected = 200

# Like max_orig_cols_selected_simple_factor, but applicable to the special individual with original columns reduced.
#fs_orig_cols_selected_simple_factor = 2

#predict_shuffle_inside_model = true

#use_native_cats_for_lgbm_fs = true

#orig_stddev_max_cols = 1000

# Maximum allowed fraction of unique values for integer and categorical columns (otherwise the column will be treated as an ID and dropped).
#max_relative_cardinality = 0.95

# Maximum allowed number of unique values for integer and categorical columns (otherwise the column will be treated as an ID and dropped).
#max_absolute_cardinality = 1000000

# Whether to treat some numerical features as categorical.
# For instance, sometimes an integer column may not represent a numerical feature but
# represent different numerical codes instead.
# Disabling this is very restrictive, since then even columns with few categorical levels that happen to be numerical
# in value will not be encoded like a categorical.
#
#num_as_cat = true

# Maximum number of unique values for integer/real columns to be treated as categoricals (the test applies to the first statistical_threshold_data_size_small rows only).
#max_int_as_cat_uniques = 50

# Maximum number of unique values for integer/real columns to be treated as categoricals (the test applies to the first statistical_threshold_data_size_small rows only). Applies to an integer or real numerical feature that violates Benford's law, and so is ID-like but not entirely an ID.
#max_int_as_cat_uniques_if_not_benford = 10000
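# For example, to be stricter about treating integers as categoricals
# (illustrative value):
#
#   num_as_cat = true
#   # Treat integer/real columns with at most 20 unique values as categoricals.
#   max_int_as_cat_uniques = 20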

# When the fraction of non-numeric (and non-missing) values is less than or equal to this value, consider the
# column numeric. Can help with minor data quality issues for experimentation; > 0 is not recommended for production,
# since type inconsistencies can occur. Note: replaces non-numeric values with missing values
# at the start of the experiment, so some information is lost, but the column is then treated as numeric, which can help.
# If < 0, then disabled.
# If == 0, then if the number of rows <= max_rows_col_stats, any column of strings of numbers is converted to numeric type.
#
#max_fraction_invalid_numeric = 0.0

# Number of folds for models used during the feature engineering process.
# Increasing this will put a lower fraction of data into validation and more into training
# (e.g., num_folds=3 means 67%/33% training/validation splits).
# The actual value will vary for small or big data cases.
#
#num_folds = 3

#fold_balancing_repeats_times_rows = 100000000.0

#max_fold_balancing_repeats = 10

#fixed_split_seed = 0

#show_fold_stats = true

# For multiclass problems only. Whether to allow different sets of target classes across (cross-)validation
# fold splits. Especially important when passing a fold column that isn't balanced w.r.t. class distribution.
#
#allow_different_classes_across_fold_splits = true

# Accuracy setting equal to and above which full cross-validation (multiple folds) is enabled during feature evolution,
# as opposed to only a single holdout split (e.g. 2/3 train and 1/3 validation holdout).
#
#full_cv_accuracy_switch = 9

# Accuracy setting equal to and above which a stacked ensemble is enabled as the final model.
# Stacking commences at the end of the feature evolution process.
# It quite often leads to better model performance, but it does increase the complexity
# and execution time of the final model.
#
#ensemble_accuracy_switch = 5

# Number of fold splits to use for ensemble_level >= 2.
# The ensemble modelling may require predictions to be made on out-of-fold samples,
# hence the data needs to be split on different folds to generate these predictions.
# Fewer folds (like 2 or 3) normally create more stable models, but may be less accurate.
# More folds can reach higher accuracy at the expense of more time, but the performance
# may be less stable when there is not enough training data (i.e. higher chance of overfitting).
# The actual value will vary for small or big data cases.
#
#num_ensemble_folds = 4
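# As an illustration, an accuracy-oriented setup touching the two switches
# above might look like this (illustrative values, trading time for accuracy):
#
#   ensemble_accuracy_switch = 5
#   num_ensemble_folds = 5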

# Includes pickles of (train_idx, valid_idx) tuples (numpy row indices for the original training data)
# for all internal validation folds in the experiment summary zip. For debugging.
#
#save_validation_splits = false

# Number of repeats for each fold for all validation
# (modified slightly for small or big data cases).
#
#fold_reps = 1

#max_num_classes_hard_limit = 10000

# Maximum number of classes to allow for a classification problem.
# A high number of classes may make certain processes of Driverless AI time-consuming.
# Memory requirements also increase with a higher number of classes.
#
#max_num_classes = 1000

# Maximum number of classes to compute ROC and confusion matrix for,
# beyond which the roc_reduce_type choice for reduction is applied.
# Too many classes can take much longer than the model building time.
#
#max_num_classes_compute_roc = 200

# Maximum number of classes to show in the GUI for the confusion matrix, showing the first max_num_classes_client_and_gui labels.
# Beyond 6 classes, the diagnostics launched from the GUI are visually truncated.
# This will only modify client/GUI-launched diagnostics if changed in config.toml and the server is restarted,
# while this value can be changed in expert settings to control experiment plots.
#
#max_num_classes_client_and_gui = 10

# If there are too many classes when computing the ROC,
# reduce by "rows" by randomly sampling rows,
# or reduce by truncating classes to no more than max_num_classes_compute_roc.
# If there are sufficient rows for the class count, reduce by rows.
#
#roc_reduce_type = "rows"

#min_roc_sample_size = 1

# Maximum number of rows to obtain confusion matrix related plots during feature evolution.
# Does not limit final model calculation.
#
#max_rows_cm_ga = 500000

# Number of actual vs. predicted data points to use to generate the relevant
# plot/graph shown on the right side of the screen within an experiment.
#num_actuals_vs_predicted = 100

# Whether to use feature_brain results even if running new experiments.
# Feature brain can be risky with some types of changes to experiment setup.
# Even rescoring may be insufficient, so by default this is False.
# For example, one experiment may have training=external validation by accident, and get a high score,
# and while feature_brain_reset_score='on' means we will rescore, it will have already seen
# the external validation during training and leak that data as part of what it learned from.
# If this is False, feature_brain_level just sets possible models to use and logs/notifies,
# but does not use these feature brain cached models.
#
#use_feature_brain_new_experiments = false

# Whether to reuse the dataset schema, such as data types set in the UI for each column, from the parent experiment ('on') or to ignore the original dataset schema and only use the new schema ('off').
# resume_data_schema=True is a basic form of data lineage, but it may not be desirable if column data types changed incompatibly (e.g. int to string).
# 'auto': for restart, retrain final pipeline, or refit best models, default is to resume data schema, but new experiments would not by default reuse the old schema.
# 'on': force reuse of data schema from parent experiment if possible
# 'off': don't reuse data schema in any case.
# The reuse of the column schema can also be disabled by:
# in UI: selecting Parent Experiment as None
# in client: setting resume_experiment_id to None
#resume_data_schema = "auto"

#resume_data_schema_old_logic = false

# Whether to show (or use) results from H2O.ai brain: the local caching and smart re-use of prior experiments,
# in order to generate more useful features and models for new experiments.
# See use_feature_brain_new_experiments for how new experiments by default do not use the brain cache.
# It can also be used to control checkpointing for experiments that have been paused or interrupted.
# DAI will use the H2O.ai brain cache if the cache file has
# a) any matching column names and types for a similar experiment type
# b) exactly matching classes
# c) exactly matching class labels
# d) matching basic time series choices
# e) interpretability of cache equal or lower
# f) main model (booster) allowed by new experiment.
# Level of brain to use (for chosen level, higher levels will also do all lower level operations automatically)
# -1 = Don't use any brain cache and don't write any cache
# 0 = Don't use any brain cache but still write cache
# Use case: Want to save model for later use, but want current model to be built without any brain models
# 1 = smart checkpoint from latest best individual model
# Use case: Want to use latest matching model, but match can be loose, so needs caution
# 2 = smart checkpoint from H2O.ai brain cache of individual best models
# Use case: DAI scans through H2O.ai brain cache for best models to restart from
# 3 = smart checkpoint like level #1, but for entire population. Tune only if brain population insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 4 = smart checkpoint like level #2, but for entire population. Tune only if brain population insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 5 = like #4, but will scan over entire brain cache of populations to get best scored individuals
# (can be slower due to brain cache scanning if big cache)
# 1000 + feature_brain_level (above positive values) = use resumed_experiment_id and actual feature_brain_level,
# to use other specific experiment as base for individuals or population,
# instead of sampling from any old experiments
# GUI has 4 options and corresponding settings:
# 1) New Experiment: Uses feature brain level default of 2
# 2) New Experiment With Same Settings: Re-uses the same feature brain level as parent experiment
# 3) Restart From Last Checkpoint: Resets feature brain level to 1003 and sets experiment ID to resume from
# (continued genetic algorithm iterations)
# 4) Retrain Final Pipeline: Like Restart but also time=0 so skips any tuning and heads straight to final model
# (assumes had at least one tuning iteration in parent experiment)
# Other use cases:
# a) Restart on different data: Use same column names and fewer or more rows (applicable to 1 - 5)
# b) Re-fit only final pipeline: Like (a), but choose time=1 and feature_brain_level=3 - 5
# c) Restart with more columns: Add columns, so model builds upon old model built from old column names (1 - 5)
# d) Restart with focus on model tuning: Restart, then select feature_engineering_effort = 3 in expert settings
# e) Can retrain final model but ignore any original features except those in final pipeline (normal retrain but set brain_add_features_for_new_columns=false)
# Notes:
# 1) In all cases, we first check the resumed experiment id if given, and then the brain cache
# 2) For Restart cases, may want to set min_dai_iterations to non-zero to force delayed early stopping, else may not be enough iterations to find better model.
# 3) A "New Experiment With Same Settings" of a Restart will use feature_brain_level=1003 for default Restart mode (revert to 2, or even 0 if want to start a fresh experiment otherwise)
#feature_brain_level = 2

# Whether to smartly keep score to avoid re-munging/retraining/rescoring steps for brain models ('auto'); always
# force all steps for all brain imports ('on'); or never rescore ('off').
# 'auto' only rescores if differences in the current and previous experiments warrant it (e.g., column or metric changes).
# 'on' is useful when smart similarity checking is not reliable enough.
# 'off' is useful when you want to reuse the same features and model for the final model refit, despite changes in seed or other features
# that might change the outcome if rescored before reaching the final model.
# If set to 'off', no limits are applied to features during brain ingestion,
# while you can set brain_add_features_for_new_columns to false if you want to ignore any new columns in the data.
# Additionally, any unscored individuals loaded from the parent experiment are not rescored during refit or retrain.
# You can also set refit_same_best_individual to True if you want the same best individual (highest-scored model and features) to be used
# regardless of any scoring changes.
#
#feature_brain_reset_score = "auto"

#enable_strict_confict_key_check_for_brain = true

#allow_change_layer_count_brain = false

# Relative number of columns that must match between current reference individual and brain individual.
# 0.0: perfect match
# 1.0: all columns are different, worst match
# e.g. 0.1 implies no more than 10% of columns mismatch between reference set of columns and brain individual.
#
#brain_maximum_diff_score = 0.1

# Maximum number of brain individuals pulled from H2O.ai brain cache for feature_brain_level=1, 2
#max_num_brain_indivs = 3

# Save feature brain iterations whenever iter_num % feature_brain_save_every_iteration == 0, to be able to restart/refit with which_iteration_brain >= 0
# 0 means disable
#
#feature_brain_save_every_iteration = 0

# When doing restart or re-fit type feature_brain_level with resumed_experiment_id, choose which iteration to start from, instead of only last best
# -1 means just use last best
# Usage:
# 1) Run one experiment with feature_brain_save_every_iteration=1 or some other number
# 2) Identify which iteration brain dump one wants to restart/refit from
# 3) Restart/Refit from original experiment, setting which_iteration_brain to that number in expert settings
# Note: If restarting from a tuning iteration, this will pull in the entire scored tuning population and use that for feature evolution
#
#which_iteration_brain = -1
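
# Example (illustrative sketch of the usage steps above; the iteration number 10 is
# hypothetical - use whichever saved iteration brain dump you identified in step 2):
# First experiment - save a brain dump at every iteration:
#feature_brain_save_every_iteration = 1
# Restart/refit run - resume from a specific saved iteration (-1 would use last best):
#which_iteration_brain = 10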

# When doing re-fit from feature brain, if columns or features change, the population of individuals used to refit from may change the order of which was best,
# leading to a better result being chosen (False case). But sometimes one wants to see the exact same model/features with only one feature added,
# in which case this should be set to True.
# E.g. if refitting with just 1 extra column and interpretability=1, then the final model will have the same features,
# with one more engineered feature applied to that new original feature.
#
#refit_same_best_individual = false

# When doing restart or re-fit of an experiment from feature brain,
# sometimes the user might change the data significantly and then warrant
# redoing the reduction of original features by feature selection, shift detection, and leakage detection.
# However, in other cases, if data and all options are nearly (or exactly) identical, then these
# steps might change the features slightly (e.g. due to random seed if not setting reproducible mode),
# leading to changes in the features and model that are refitted. By default, restart and refit avoid
# these steps, assuming data and experiment setup have not changed significantly.
# If check_distribution_shift is forced to on (instead of auto), then this option is ignored.
# In order to ensure the exact same final pipeline is fitted, one should also set:
# 1) brain_add_features_for_new_columns false
# 2) refit_same_best_individual true
# 3) feature_brain_reset_score 'off'
# 4) force_model_restart_to_defaults false
# The score will still be reset if the experiment metric chosen changes,
# but changes to the scored model and features will be more frozen in place.
#
#restart_refit_redo_origfs_shift_leak = "[]"
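
# Example (illustrative sketch): the four settings listed above combined, for when the
# goal is to keep the refitted pipeline frozen in place:
#brain_add_features_for_new_columns = false
#refit_same_best_individual = true
#feature_brain_reset_score = "off"
#force_model_restart_to_defaults = false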

# Directory, relative to data_directory, to store H2O.ai brain meta model files
#brain_rel_dir = "H2O.ai_brain"

# Maximum size, in GB, the brain will store.
# We reserve this space to save data in order to ensure we can retrieve an experiment if
# for any reason it gets interrupted.
# -1: unlimited
# >= 0: number of GB to limit brain to
#brain_max_size_GB = 20

# Whether to take any new columns and add additional features to the pipeline, even if doing retrain final model.
# In some cases, one might have a new dataset but only want to keep the same pipeline regardless of new columns,
# in which case one sets this to False. For example, new data might lead to new dropped features,
# due to shift or leak detection. To avoid a change of feature set, one can disable all dropping of columns,
# but set this to False to avoid adding any columns as new features,
# so the pipeline is perfectly preserved when changing data.
#
#brain_add_features_for_new_columns = true

# If restart/refit and no longer have the original model class available, be conservative
# and go back to defaults for that model class. If False, then try to keep original hyperparameters,
# which can fail to work in general.
#
#force_model_restart_to_defaults = true

# Whether to enable early stopping
# Early stopping refers to stopping the feature evolution/engineering process
# when there is no performance uplift after a certain number of iterations.
# After early stopping has been triggered, Driverless AI will initiate the ensemble
# process if selected.
#early_stopping = true

# Whether to enable early stopping per individual
# Each individual in the genetic algorithm will stop early if there is no improvement,
# and it will no longer be mutated.
# Instead, the best individual will be additionally mutated.
#early_stopping_per_individual = true

# Minimum number of Driverless AI iterations to stop the feature evolution/engineering
# process even if the score is not improving. Driverless AI needs to run for at least that many
# iterations before deciding to stop. It can be seen as a safeguard against suboptimal (early)
# convergence.
#
#min_dai_iterations = 0

# Maximum features per model (and each model within the final model if ensemble) kept.
# Keeps top variable importance features, prunes rest away, after each scoring.
# Final ensemble will exclude any pruned-away features and only train on kept features,
# but may contain a few new features due to fitting on different data view (e.g. new clusters).
# Final scoring pipeline will exclude any pruned-away features,
# but may contain a few new features due to fitting on different data view (e.g. new clusters).
# -1 means no restrictions except internally-determined memory and interpretability restrictions.
# Notes:
# * If interpretability > remove_scored_0gain_genes_in_postprocessing_above_interpretability, then
# every GA iteration post-processes features down to this value just after scoring them. Otherwise,
# only mutations of scored individuals will be pruned (until the final model where limits are strictly applied).
# * If ngenes_max is not also limited, then some individuals will have more genes and features until
# pruned by mutation or by preparation for final model.
# * E.g. to generally limit every iteration to exactly 1 feature, one must set nfeatures_max=ngenes_max=1
# and remove_scored_0gain_genes_in_postprocessing_above_interpretability=0, but the genetic algorithm
# will have a harder time finding good features.
#
#nfeatures_max = -1

# Maximum genes (transformer instances) per model (and each model within the final model if ensemble) kept.
# Controls the number of genes before features are scored, so genes are just randomly sampled if pruning occurs.
# If restriction occurs after scoring features, then aggregated gene importances are used for pruning genes.
# Instances include all possible transformers, including the original transformer for numeric features.
# -1 means no restrictions except internally-determined memory and interpretability restrictions
#
#ngenes_max = -1
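
# Example (extreme configuration, purely to illustrate the note under nfeatures_max):
# strictly limit every iteration to a single gene and a single feature.
#nfeatures_max = 1
#ngenes_max = 1
#remove_scored_0gain_genes_in_postprocessing_above_interpretability = 0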

# Like ngenes_max but controls the minimum number of genes.
#ngenes_min = -1

# Like nfeatures_max but controls the minimum number of features.
# Useful when DAI generates too few engineered features by default and you want it to create more.
# This is especially useful when the dataset has few input features, causing Driverless AI to behave conservatively and generate fewer transformed features.
# For example, if only the target encoding transformer is selected, increasing this value allows DAI to explore more possible input features.
#nfeatures_min = -1

# Whether to limit feature counts by interpretability setting via features_allowed_by_interpretability
#limit_features_by_interpretability = true

# Whether to use out-of-fold predictions of Word-based CNN Torch models as transformers for NLP if Torch enabled
#enable_textcnn = "auto"

# [DEPRECATED] Whether to use out-of-fold predictions of Word-based CNN TensorFlow models as transformers for NLP if TensorFlow enabled
#enable_tensorflow_textcnn = "auto"

# [DEPRECATED] Whether to use out-of-fold predictions of Word-based Bi-GRU TensorFlow models as transformers for NLP if TensorFlow enabled
#enable_tensorflow_textbigru = "auto"

# Whether to use out-of-fold predictions of Word-based Bi-GRU Torch models as transformers for NLP if Torch enabled
#enable_textbigru = "auto"

# [DEPRECATED] Whether to use out-of-fold predictions of Character-level CNN TensorFlow models as transformers for NLP if TensorFlow enabled
#enable_tensorflow_charcnn = "auto"

# Whether to use out-of-fold predictions of Character-level CNN Torch models as transformers for NLP if Torch enabled
#enable_charcnn = "auto"

# Whether to use pretrained PyTorch models (BERT Transformer) as transformers for NLP tasks. Fits a linear model on top of pretrained embeddings. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. GPU(s) are highly recommended. Reduce string_col_as_text_min_relative_cardinality closer to 0.0 and string_col_as_text_threshold closer to 0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_transformer = "auto"

# More rows can slow down the fitting process. Recommended values are less than 100000.
#pytorch_nlp_transformer_max_rows_linear_model = 50000

# Whether to use pretrained PyTorch models and fine-tune them for NLP tasks. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. These models only use the first text column, and can be slow to train. GPU(s) are highly recommended. Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_model = "auto"

# Select which pretrained PyTorch NLP model(s) to use. Non-default ones might have no MOJO support. Requires internet connection. Only if PyTorch models or transformers for NLP are set to 'on'.
#pytorch_nlp_pretrained_models = "['bert-base-uncased', 'distilbert-base-uncased', 'bert-base-multilingual-cased']"

# [DEPRECATED] Max. number of epochs for TensorFlow models for making NLP features
#tensorflow_max_epochs_nlp = 2

# Max. number of epochs for Torch models for making NLP features
#pytorch_max_epochs_nlp = 2

# Accuracy setting equal and above which will add all enabled TensorFlow NLP models below at the start of the experiment for text-dominated problems
# when TensorFlow NLP transformers are set to auto. If set to on, this parameter is ignored.
# Otherwise, at lower accuracy, TensorFlow NLP transformations will only be created as a mutation.
#
#enable_tensorflow_nlp_accuracy_switch = 5

# Path to pretrained embeddings for TensorFlow NLP models; can be a path in the local file system or an S3 location (s3://).
# For example, download and unzip https://nlp.stanford.edu/data/glove.6B.zip
# tensorflow_nlp_pretrained_embeddings_file_path = "/path/on/server/to/glove.6B.300d.txt"
#
#tensorflow_nlp_pretrained_embeddings_file_path = ""

#tensorflow_nlp_pretrained_s3_access_key_id = ""

#tensorflow_nlp_pretrained_s3_secret_access_key = ""

# Allow training of all weights of the neural network graph, including the pretrained embedding layer weights. If disabled, then the embedding layer is frozen, but all other weights are still fine-tuned.
#tensorflow_nlp_pretrained_embeddings_trainable = false

#tensorflow_nlp_have_gpus_in_production = false

# Path to pretrained embeddings for Torch NLP models; can be a path in the local file system or an S3 location (s3://).
# For example, download and unzip https://nlp.stanford.edu/data/glove.6B.zip
# nlp_pretrained_embeddings_file_path = "/path/on/server/to/glove.6B.300d.txt"
#
#nlp_pretrained_embeddings_file_path = ""

#nlp_pretrained_s3_access_key_id = ""

#nlp_pretrained_s3_secret_access_key = ""

# Allow training of all weights of the neural network graph, including the pretrained embedding layer weights. If disabled, then the embedding layer is frozen, but all other weights are still fine-tuned.
#nlp_pretrained_embeddings_trainable = false

#bert_migration_timeout_secs = 600

#enable_bert_transformer_acceptance_test = false

#enable_bert_model_acceptance_test = false

# Whether to parallelize tokenization for BERT Models/Transformers.
#pytorch_tokenizer_parallel = true

# Number of epochs for fine-tuning of PyTorch NLP models. Larger values can increase accuracy but take longer to train.
#pytorch_nlp_fine_tuning_num_epochs = -1

# Batch size for PyTorch NLP models. Larger models and larger batch sizes will use more memory.
#pytorch_nlp_fine_tuning_batch_size = -1

# Maximum sequence length (padding length) for PyTorch NLP models. Larger models and larger padding lengths will use more memory.
#pytorch_nlp_fine_tuning_padding_length = -1

# Path to pretrained PyTorch NLP models. Note that this can be either a path in the local file system
# (/path/on/server/to/bert_models_folder), a URL or an S3 location (s3://).
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/bert_models.zip
# and unzip and store it in a directory on the instance where DAI is installed.
# ``pytorch_nlp_pretrained_models_dir=/path/on/server/to/bert_models_folder``
#
#pytorch_nlp_pretrained_models_dir = ""
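
# Example (illustrative; the folder path is hypothetical): use locally stored BERT
# models and force-enable the pretrained PyTorch NLP transformer ('auto' means disabled).
#pytorch_nlp_pretrained_models_dir = "/path/on/server/to/bert_models_folder"
#enable_pytorch_nlp_transformer = "on"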

#pytorch_nlp_pretrained_s3_access_key_id = ""

#pytorch_nlp_pretrained_s3_secret_access_key = ""

# Fraction of text columns out of all features to be considered a text-dominated problem
#text_fraction_for_text_dominated_problem = 0.3

# Fraction of text transformers to all transformers above which to trigger a text-dominated problem
#text_transformer_fraction_for_text_dominated_problem = 0.3

# Whether to reduce options for text-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#text_dominated_limit_tuning = true

# Whether to reduce options for image-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#image_dominated_limit_tuning = true

# Threshold for average string-is-text score as determined by internal heuristics.
# It decides when a string column will be treated as text (for an NLP problem) or just as
# a standard categorical variable.
# Higher values will favor string columns as categoricals, lower values will favor string columns as text.
# Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#string_col_as_text_threshold = 0.3

# Threshold for string columns to be treated as text during preview - should be less than string_col_as_text_threshold to allow data whose first 20 rows don't look like text to still work for Text-only transformers (0.0 - text, 1.0 - string)
#string_col_as_text_threshold_preview = 0.1

# Minimum fraction of unique values for string columns to be considered as possible text (otherwise categorical)
#string_col_as_text_min_relative_cardinality = 0.1
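
# Example (illustrative, per the guidance repeated in the NLP settings above): force
# string columns to be treated as text despite a low number of uniques.
#string_col_as_text_min_relative_cardinality = 0.0
#string_col_as_text_threshold = 0.0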

# Minimum number of uniques for string columns to be considered as possible text (if not already)
#string_col_as_text_min_absolute_cardinality = 10000

# If disabled, require 2 or more alphanumeric characters for a token in Text (Count and TF/IDF) transformers; otherwise create tokens out of single alphanumeric characters. True means that 'Street 3' is tokenized into 'Street' and '3', while False means that it's tokenized into 'Street'.
#tokenize_single_chars = true

# Supported image types. URIs with these endings will be considered as image paths (local or remote).
#supported_image_types = "['jpg', 'jpeg', 'png', 'bmp', 'ppm', 'tif', 'tiff', 'JPG', 'JPEG', 'PNG', 'BMP', 'PPM', 'TIF', 'TIFF']"

# Whether to create absolute paths for images when importing datasets containing images. Can facilitate testing or re-use of frames for scoring.
#image_paths_absolute = false

# [DEPRECATED] Whether to use pretrained deep learning models for processing of image data as part of the feature engineering pipeline. A column of URIs to images (jpg, png, etc.) will be converted to a numeric representation using ImageNet-pretrained deep learning models. If no GPUs are found, then must be set to 'on' to enable.
#enable_tensorflow_image = "auto"

# Whether to use pretrained deep learning models for processing of image data as part of the feature engineering pipeline. A column of URIs to images (jpg, png, etc.) will be converted to a numeric representation using ImageNet-pretrained deep learning models. If no GPUs are found, then must be set to 'on' to enable.
#enable_image_transformer = "auto"

# [DEPRECATED] Supported ImageNet pretrained architectures for Image Transformer. Non-default ones will require internet access to download pretrained models from H2O S3 buckets (To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_1_11.zip and unzip inside image_pretrained_models_dir).
#tensorflow_image_pretrained_models = "['xception']"

# Supported ImageNet pretrained architectures for Image V2 Transformer. Non-default ones will require internet access to download pretrained models from H2O S3 buckets (To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_2_3_0.zip and unzip inside image_pretrained_models_dir).
#image_transformer_pretrained_models = "['levit']"

# [DEPRECATED] Dimensionality of feature (embedding) space created by Image Transformer. If more than one is selected, multiple transformers can be active at the same time.
#tensorflow_image_vectorization_output_dimension = "[100]"

# Dimensionality of feature (embedding) space created by Image V2 Transformer. If more than one is selected, multiple transformers can be active at the same time.
#image_transformer_vectorization_output_dimension = "[100]"

# [DEPRECATED] Enable fine-tuning of the ImageNet pretrained models used for the Image Transformer. Enabling this will slow down training, but should increase accuracy.
#tensorflow_image_fine_tune = false

# Enable fine-tuning of the ImageNet pretrained models used for the Image V2 Transformer. Enabling this will slow down training, but should increase accuracy.
#image_transformer_fine_tune = false

# [DEPRECATED] Number of epochs for fine-tuning of ImageNet pretrained models used for the Image Transformer.
#tensorflow_image_fine_tuning_num_epochs = 2

# Number of epochs for fine-tuning of ImageNet pretrained models used for the Image V2 Transformer.
#image_transformer_fine_tuning_num_epochs = 2

# [DEPRECATED] The list of possible image augmentations to apply while fine-tuning the ImageNet pretrained models used for the Image Transformer. Details about individual augmentations can be found here: https://albumentations.ai/docs/.
#tensorflow_image_augmentations = "['HorizontalFlip']"

# The list of possible image augmentations to apply while fine-tuning the ImageNet pretrained models used for the Image V2 Transformer. Details about individual augmentations can be found here: https://albumentations.ai/docs/. Note: Does not apply to tf_efficientnetv2, as the recommended transformers from huggingface will be used.
#default_image_augmentations = "['HorizontalFlip']"

# [DEPRECATED] Batch size for Image Transformer. Larger architectures and larger batch sizes will use more memory.
#tensorflow_image_batch_size = -1

# Batch size for Image V2 Transformer. Larger architectures and larger batch sizes will use more memory. Note: Driverless AI will automatically find the most appropriate batch size if set to -1 (or non-positive).
#image_transformer_batch_size = -1

# [DEPRECATED] Path to pretrained Image models.
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_1_11.zip,
# then extract it in a directory on the instance where Driverless AI is installed.
#
#tensorflow_image_pretrained_models_dir = "./pretrained/image/"

# Path to pretrained Image models.
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_2_3_0.zip,
# then extract it in a directory on the instance where Driverless AI is installed.
#
#image_pretrained_models_dir = "./pretrained/image/"

# Max. number of seconds to wait for image download if images are provided by URL
#image_download_timeout = 60

# Maximum fraction of missing elements in a string column for it to be considered as possible image paths (URIs)
#string_col_as_image_max_missing_fraction = 0.1

# Fraction of (unique) image URIs that need to have valid endings (as defined by string_col_as_image_valid_types) for a string column to be considered as image data
#string_col_as_image_min_valid_types_fraction = 0.8

# [DEPRECATED] Whether to use GPU(s), if available, to transform images into embeddings with Image Transformer. Can lead to significant speedups.
#tensorflow_image_use_gpu = true

# Whether to use GPU(s), if available, to transform images into embeddings with Image V2 Transformer. Can lead to significant speedups.
#image_transformer_use_gpu = true

# Nominally, the time dial controls the search space, with higher time trying more options, but any keys present in this dictionary will override the automatic choices.
# e.g. ``params_image_auto_search_space="{'augmentation': ['safe'], 'crop_strategy': ['Resize'], 'optimizer': ['AdamW'], 'dropout': [0.1], 'epochs_per_stage': [5], 'warmup_epochs': [0], 'mixup': [0.0], 'cutmix': [0.0], 'global_pool': ['avg'], 'learning_rate': [3e-4]}"``
# Options, e.g. used for time>=8
# # Overfit Protection Options:
# 'augmentation': ``["safe", "semi_safe", "hard"]``
# 'crop_strategy': ``["Resize", "RandomResizedCropSoft", "RandomResizedCropHard"]``
# 'dropout': ``[0.1, 0.3, 0.5]``
# # Global Pool Options:
# avgmax -- sum of AVG and MAX poolings
# catavgmax -- concatenation of AVG and MAX poolings
# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/adaptive_avgmax_pool.py
# ``'global_pool': ['avg', 'avgmax', 'catavgmax']``
# # Regression: No MixUp and CutMix:
# ``'mixup': [0.0]``
# ``'cutmix': [0.0]``
# # Classification: Beta distribution coeff to generate weights for MixUp:
# ``'mixup': [0.0, 0.4, 1.0, 3.0]``
# ``'cutmix': [0.0, 0.4, 1.0, 3.0]``
# # Optimization Options:
# ``'epochs_per_stage': [5, 10, 15]`` # from 40 to 135 epochs
# ``'warmup_epochs': [0, 0.5, 1]``
# ``'optimizer': ["AdamW", "SGD"]``
# ``'learning_rate': [1e-3, 3e-4, 1e-4]``
#params_image_auto_search_space = "{}"

# Nominally, the accuracy dial controls the architectures considered if this is left empty,
# but one can choose specific ones. The options in the list are ordered by complexity.
#image_auto_arch = "[]"

# Any images smaller than this are upscaled to the minimum. Default is 64, but it can be as small as 32 given the pooling layers used.
#image_auto_min_shape = 64

# 0 means automatic based upon time dial of min(1, time//2).
#image_auto_num_final_models = 0

# 0 means automatic based upon time dial of max(4 * (time - 1), 2).
#image_auto_num_models = 0

# 0 means automatic based upon time dial of time + 1 if time < 6 else time - 1.
#image_auto_num_stages = 0

# 0 means automatic based upon time dial or number of models and stages
# set by image_auto_num_models and image_auto_num_stages.
#image_auto_iterations = 0

# 0.0 means automatic based upon the current stage, where stage 0 uses half, stage 1 uses 3/4, and stage 2 uses full image.
# One can pass 1.0 to override and always use full image. 0.5 would mean use half.
#image_auto_shape_factor = 0.0

# Control maximum number of cores to use for image auto model parallel data management. 0 will disable mp: https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html
#max_image_auto_ddp_cores = 10

# Percentile value cutoff of input text token lengths for NLP deep learning models
#text_dl_token_pad_percentile = 99

# Maximum token length of input text to be used in NLP deep learning models
#text_dl_token_pad_max = 512

# Interpretability setting equal and above which will use automatic monotonicity constraints in
# XGBoostGBM/LightGBM/DecisionTree models.
#
#monotonicity_constraints_interpretability_switch = 7

# For models that support monotonicity constraints, and if enabled, show automatically determined monotonicity constraints for each feature going into the model based on its correlation with the target. 'low' shows only the monotonicity constraint direction. 'medium' shows the correlation of positively and negatively constrained features. 'high' shows all correlation values.
#monotonicity_constraints_log_level = "medium"

# Threshold, of Pearson product-moment correlation coefficient between numerical or encoded transformed
# feature and target, above (below negative for) which will enforce positive (negative) monotonicity
# for XGBoostGBM, LightGBM and DecisionTree models.
# Enabled when interpretability >= monotonicity_constraints_interpretability_switch config toml value.
# Only if monotonicity_constraints_dict is not provided.
#
#monotonicity_constraints_correlation_threshold = 0.1

# If enabled, only monotonic features with +1/-1 constraints will be passed to the model(s), and features
# without monotonicity constraints (0, as set by monotonicity_constraints_dict or determined automatically)
# will be dropped. Otherwise all features will be in the model.
# Only active when interpretability >= monotonicity_constraints_interpretability_switch or
# monotonicity_constraints_dict is provided.
#
#monotonicity_constraints_drop_low_correlation_features = false

# Manual override for monotonicity constraints. Mapping of original numeric features to desired constraint
# (1 for pos, -1 for neg, or 0 to disable. True can be set for automatic handling, False is same as 0).
# Features that are not listed here will be treated automatically,
# and so get no constraint (i.e., 0) if interpretability < monotonicity_constraints_interpretability_switch
# and otherwise the constraint is automatically determined from the correlation between each feature and the target.
# Example: {'PAY_0': -1, 'PAY_2': -1, 'AGE': -1, 'BILL_AMT1': 1, 'PAY_AMT1': -1}
#
#monotonicity_constraints_dict = "{}"
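# A minimal sketch of a manual override in TOML form (using the feature names from
# the example above): force negative monotonicity for 'PAY_0' and positive for
# 'BILL_AMT1', leaving all other features to be handled automatically.
#monotonicity_constraints_dict = "{'PAY_0': -1, 'BILL_AMT1': 1}"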

# Exploring feature interactions can be important in gaining better predictive performance.
# The interaction can take multiple forms (i.e. feature1 + feature2 or feature1 * feature2 + ... featureN).
# Although certain machine learning algorithms (like tree-based methods) can do well at
# capturing these interactions as part of their training process, explicitly generating them may
# still help them (or other algorithms) yield better performance.
# The depth of the interaction level (as in "up to" how many features may be combined at
# once to create one single feature) can be specified to control the complexity of the
# feature engineering process. For transformers that use both numeric and categorical features, this constrains
# the number of each type, not the total number. Higher values might be able to make more predictive models
# at the expense of time (-1 means automatic).
#
#max_feature_interaction_depth = -1

# Instead of sampling from min to max (up to max_feature_interaction_depth unless all specified)
# columns allowed for each transformer (0), choose a fixed non-zero number of columns to use.
# Can be set to the same value as the number of columns to use all columns for each transformer, if allowed by each transformer.
# -n can be chosen to do a 50/50 mix of sampled and fixed n features.
#
#fixed_feature_interaction_depth = 0

# Accuracy setting equal and above which enables tuning of model parameters
# Only applicable if parameter_tuning_num_models=-1 (auto)
#tune_parameters_accuracy_switch = 3

# Accuracy setting equal and above which enables tuning of target transform for regression.
# This is useful for time series when instead of predicting the actual target value, it
# might be better to predict a transformed target variable like sqrt(target) or log(target)
# as a means to control for outliers.
#tune_target_transform_accuracy_switch = 5

# Select a target transformation for regression problems. Must be one of: ['auto',
# 'identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'log_noclip', 'square',
# 'sqrt', 'double_sqrt', 'inverse', 'anscombe', 'logit', 'sigmoid'].
# If set to 'auto', will automatically pick the best target transformer (if accuracy is set to
# tune_target_transform_accuracy_switch or larger, considering interpretability level of each target transformer),
# otherwise will fall back to 'identity_noclip' (easiest to interpret, Shapley values are in original space, etc.).
# All transformers except for 'center', 'standardize', 'identity_noclip' and 'log_noclip' perform clipping
# to constrain the predictions to the domain of the target in the training data. Use 'center', 'standardize',
# 'identity_noclip' or 'log_noclip' to disable clipping and to allow predictions outside of the target domain observed in
# the training data (for parametric models or custom models that support extrapolation).
#
#target_transformer = "auto"
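# As an illustration, for a strictly positive, heavy-tailed regression target one
# might force a log-space fit without clipping (one of the documented choices above):
#target_transformer = "log_noclip"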

# Select list of target transformers to use for tuning. Only for target_transformer='auto' and accuracy >= tune_target_transform_accuracy_switch.
#
#target_transformer_tuning_choices = "['identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'square', 'sqrt', 'double_sqrt', 'anscombe', 'logit', 'sigmoid']"

# Tournament style (method to decide which models are best at each iteration)
# 'auto' : Choose based upon accuracy and interpretability
# 'uniform' : all individuals in population compete to win as best (can lead to all, e.g., LightGBM models in final ensemble, which may not improve ensemble performance due to lack of diversity)
# 'model' : individuals with same model type compete (good if multiple models do well but some models that do not do as well still contribute to improving ensemble)
# 'feature' : individuals with similar feature types compete (good if target encoding, frequency encoding, and other feature sets lead to good results)
# 'fullstack' : Choose among optimal model and feature types
# 'model' and 'feature' styles preserve at least one winner for each type (and so 2 total individuals of each type after mutation)
# For each case, a round-robin approach is used to choose the best scores among the types of models to choose from.
# If enable_genetic_algorithm=='Optuna', then every individual is self-mutated without any tournament
# during the genetic algorithm. The tournament is only used to prune down individuals for, e.g.,
# tuning -> evolution and evolution -> final model.
#
#tournament_style = "auto"
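# For instance, to preserve model-type diversity in the ensemble one could pin the
# style instead of relying on the accuracy-based automatic choice:
#tournament_style = "model"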

# Interpretability above which will use 'uniform' tournament style
#tournament_uniform_style_interpretability_switch = 8

# Accuracy below which will use uniform style if tournament_style = 'auto' (regardless of other accuracy tournament style switch values)
#tournament_uniform_style_accuracy_switch = 6

# Accuracy equal and above which uses model style if tournament_style = 'auto'
#tournament_model_style_accuracy_switch = 6

# Accuracy equal and above which uses feature style if tournament_style = 'auto'
#tournament_feature_style_accuracy_switch = 13

# Accuracy equal and above which uses fullstack style if tournament_style = 'auto'
#tournament_fullstack_style_accuracy_switch = 13

# Whether to use penalized score for GA tournament or actual score
#tournament_use_feature_penalized_score = true

# Whether to keep poor scores for small data (<10k rows) in case exploration will find a good model.
# If enabled, sets:
# tournament_remove_poor_scores_before_evolution_model_factor=1.1
# tournament_remove_worse_than_constant_before_evolution=false
# tournament_keep_absolute_ok_scores_before_evolution_model_factor=1.1
# tournament_remove_poor_scores_before_final_model_factor=1.1
# tournament_remove_worse_than_constant_before_final_model=true
#tournament_keep_poor_scores_for_small_data = true

# Factor (compared to best score plus each score) beyond which to drop poorly scoring models before evolution.
# This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_evolution_model_factor = 0.7

# For before evolution after tuning, whether to remove models that are worse than (optimized to scorer) constant prediction model
#tournament_remove_worse_than_constant_before_evolution = true

# For before evolution after tuning, the point on a scale of 0 (perfect) to 1 (constant model) up to which to keep ok scores by absolute value.
#tournament_keep_absolute_ok_scores_before_evolution_model_factor = 0.2

# Factor (compared to best score) beyond which to drop poorly scoring models before building final ensemble. This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_final_model_factor = 0.3

# For before final model after evolution, whether to remove models that are worse than (optimized to scorer) constant prediction model
#tournament_remove_worse_than_constant_before_final_model = true

# Driverless AI uses a genetic algorithm (GA) to find the best features, best models and
# best hyperparameters for these models. The GA facilitates getting good results while not
# requiring one to run/try every possible model/feature/parameter. This version of GA has
# reinforcement learning elements - it uses a form of exploration-exploitation to reach
# optimum solutions. This means it will capitalize on models/features/parameters that seem
# to be working well and continue to exploit them even more, while allowing some room for
# trying new (and semi-random) models/features/parameters to avoid settling on a local
# minimum.
# These models/features/parameters tried are what we call individuals of a population. More
# individuals connote more models/features/parameters to be tried and compete to find the
# best ones.
#num_individuals = 2
1629
1630# set fixed number of individuals (if > 0) - useful to compare different hardware configurations. If want 3 individuals in GA race to be preserved, choose 6, since need 1 mutatable loser per surviving individual.
1631#fixed_num_individuals = 0

#max_fold_reps_hard_limit = 20

# number of unique targets or folds counts after which switch to faster/simpler non-natural sorting and print outs
#sanitize_natural_sort_limit = 1000

# number of fold ids to report cardinality for, both most common (head) and least common (tail)
#head_tail_fold_id_report_length = 30

# Whether target encoding (CV target encoding, weight of evidence, etc.) could be enabled
# Target encoding refers to several different feature transformations (primarily focused on
# categorical data) that aim to represent the feature using information of the actual
# target variable. A simple example can be to use the mean of the target to replace each
# unique category of a categorical feature. These types of features can be very predictive,
# but they are prone to overfitting and require more memory, as they need to store mappings of
# the unique categories and the target values.
#
#enable_target_encoding = "auto"

# For target encoding, whether a model is used to compute Ginis for checking sanity of transformer. Requires cvte_cv_in_cv to be enabled. If enabled, CV-in-CV isn't done in case the check fails.
#cvte_cv_in_cv_use_model = false

# For target encoding,
# whether an outer level of cross-fold validation is performed,
# in cases when GINI is detected to flip sign (or have inconsistent sign for weight of evidence)
# between fit_transform on training, transform on training, and transform on validation data.
# The degree to which GINI is poor is also used to perform fold-averaging of look-up tables instead
# of using global look-up tables.
#
#cvte_cv_in_cv = true

# For target encoding,
# when an outer level of cross-fold validation is performed,
# increase number of outer folds or abort target encoding when GINI between feature and target
# are not close between fit_transform on training, transform on training, and transform on validation data.
#
#cv_in_cv_overconfidence_protection = "auto"

#cv_in_cv_overconfidence_protection_factor = 3.0

#enable_lexilabel_encoding = "off"

#enable_isolation_forest = "off"

# Whether one hot encoding could be enabled. If auto, then only applied for small data and GLM.
#enable_one_hot_encoding = "auto"

# Limit number of output features (total number of bins) created by all BinnerTransformers based on this
# value, scaled by accuracy, interpretability and dataset size. 0 means unlimited.
#binner_cardinality_limiter = 50

# Whether simple binning of numeric features should be enabled by default. If auto, then only for
# GLM/FTRL/GrowNet for time-series or for interpretability >= 6. Binning can help linear (or simple)
# models by exposing more signal for features that are not linearly correlated with the target. Note that
# NumCatTransformer and NumToCatTransformer already do binning, but also perform target encoding, which makes them
# less interpretable. The BinnerTransformer is more interpretable, and also works for time series.
#enable_binning = "auto"

# 'tree' uses XGBoost to find optimal split points for binning of numeric features.
# 'quantile' uses quantile-based binning. Tree-based binning might fall back to quantile-based
# binning if there are too many classes or not enough unique values.
#binner_bin_method = "['tree']"

# If enabled, will attempt to reduce the number of bins during binning of numeric features.
# Applies to both tree-based and quantile-based bins.
#binner_minimize_bins = true

# Given a set of bins (cut points along min...max), the encoding scheme converts the original
# numeric feature values into the values of the output columns (one column per bin, and one extra bin for
# missing values if any).
# Piecewise linear is 0 left of the bin, and 1 right of the bin, and grows linearly from 0 to 1 inside the bin.
# Binary is 1 inside the bin and 0 outside the bin. Missing value bin encoding is always binary, either 0 or 1.
# If no missing values in the data, then there is no missing value bin.
# Piecewise linear helps to encode growing values and keeps smooth transitions across the bin
# boundaries, while binary is best suited for detecting specific values in the data.
# Both are great at providing features to models that otherwise lack non-linear pattern detection.
#binner_encoding = "['piecewise_linear', 'binary']"

# If enabled (default), include the original feature value as an output feature for the BinnerTransformer.
# This ensures that the BinnerTransformer never has less signal than the OriginalTransformer, since they can
# be chosen exclusively.
#
#binner_include_original = true
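# As a sketch of a more interpretable binning setup combining the settings above
# (quantile bins with binary encoding; the values are illustrative, not recommendations):
#binner_bin_method = "['quantile']"
#binner_encoding = "['binary']"
#binner_minimize_bins = true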

#isolation_forest_nestimators = 200

# Transformer display names to indicate which transformers to use in experiment.
# More information for these transformers can be viewed here:
# http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/transformations.html
# This section allows including/excluding these transformations (independent of
# the interpretability setting) and may be useful when
# simpler (more interpretable) models are sought at the expense of accuracy.
# for multi-class: '['NumCatTETransformer', 'TextLinModelTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'ClusterDistTransformer',
# 'WeightOfEvidenceTransformer', 'TruncSVDNumTransformer', 'CVCatNumEncodeTransformer',
# 'DatesTransformer', 'TextTransformer', 'OriginalTransformer',
# 'NumToCatWoETransformer', 'NumToCatTETransformer', 'ClusterTETransformer',
# 'InteractionsTransformer']'
# for regression/binary: '['TextTransformer', 'ClusterDistTransformer',
# 'OriginalTransformer', 'TextLinModelTransformer', 'NumToCatTETransformer',
# 'DatesTransformer', 'WeightOfEvidenceTransformer', 'InteractionsTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'NumCatTETransformer',
# 'NumToCatWoETransformer', 'TruncSVDNumTransformer', 'ClusterTETransformer',
# 'CVCatNumEncodeTransformer']'
# This list appears in the experiment logs (search for 'Transformers used')
#
#included_transformers = "[]"

# Auxiliary to included_transformers
# e.g. to disable all Target Encoding: excluded_transformers =
# '['NumCatTETransformer', 'CVTargetEncodeF', 'NumToCatTETransformer',
# 'ClusterTETransformer']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_transformers = "[]"
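# In full TOML form, disabling all Target Encoding would look like the following
# (transformer names taken from the lists documented for included_transformers above):
#excluded_transformers = "['NumCatTETransformer', 'CVTargetEncodeTransformer', 'NumToCatTETransformer', 'ClusterTETransformer']"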

# Exclude list of genes (i.e. genes (built on top of transformers) to not use,
# independent of the interpretability setting)
# Some transformers are used by multiple genes, so this allows different control over feature engineering
# for multi-class: '['InteractionsGene', 'WeightOfEvidenceGene',
# 'NumToCatTargetEncodeSingleGene', 'OriginalGene', 'TextGene', 'FrequentGene',
# 'NumToCatWeightOfEvidenceGene', 'NumToCatWeightOfEvidenceMonotonicGene',
# 'CvTargetEncodeSingleGene', 'DateGene', 'NumToCatTargetEncodeMultiGene',
# 'DateTimeGene', 'TextLinRegressorGene', 'ClusterIDTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'TruncSvdNumGene', 'ClusterIDTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'CvTargetEncodeMultiGene', 'TextLinClassifierGene',
# 'NumCatTargetEncodeSingleGene', 'ClusterDistGene']'
# for regression/binary: '['CvTargetEncodeSingleGene', 'NumToCatTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'ClusterIDTargetEncodeSingleGene', 'TextLinRegressorGene',
# 'CvTargetEncodeMultiGene', 'ClusterDistGene', 'OriginalGene', 'DateGene',
# 'ClusterIDTargetEncodeMultiGene', 'NumToCatTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'TextLinClassifierGene', 'WeightOfEvidenceGene',
# 'FrequentGene', 'TruncSvdNumGene', 'InteractionsGene', 'TextGene',
# 'DateTimeGene', 'NumToCatWeightOfEvidenceGene',
# 'NumToCatWeightOfEvidenceMonotonicGene', 'NumCatTargetEncodeSingleGene']'
# This list appears in the experiment logs (search for 'Genes used')
# e.g. to disable interaction gene, use: excluded_genes =
# '['InteractionsGene']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_genes = "[]"

# "Include specific models" lets you choose the set of models that will be considered during experiment training. The
# individual model settings and their AUTO / ON / OFF values mean the following: AUTO lets the internal decision mechanisms determine
# whether the model should be used during training; ON will try to force the use of the model; OFF turns the model
# off during training (it is the equivalent of deselecting the model in the "Include specific models" picker).
#
#included_models = "[]"

# Auxiliary to included_models
#excluded_models = "[]"
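# For example, to restrict the experiment to boosting models only. The model names
# below are assumptions based on DAI's model display names; verify against the
# "Include specific models" picker of your version before relying on them:
#included_models = "['LightGBMModel', 'XGBoostGBMModel']"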

#included_scorers = "[]"

# Select transformers to be used for preprocessing before other transformers operate.
# Pre-processing transformers can potentially take any original features and output
# arbitrary features, which will then be used by the normal layer of transformers
# whose selection is controlled by toml included_transformers or via the GUI
# "Include specific transformers".
# Notes:
# 1) preprocessing transformers (and all other layers of transformers) are part of the python and (if applicable) mojo scoring packages.
# 2) any BYOR transformer recipe or native DAI transformer can be used as a preprocessing transformer.
# So, e.g., a preprocessing transformer can do interactions, string concatenations, date extractions as a preprocessing step,
# and the next layer of Date and DateTime transformers will use that as input data.
# Caveats:
# 1) one cannot currently do a time-series experiment on a time_column that hasn't yet been made (setup of experiment only knows about original data, not transformed)
# However, one can use a run-time data recipe to (e.g.) convert a float date-time into a string date-time, and this will
# be used by DAI's Date and DateTime transformers as well as auto-detection of time series.
# 2) in order to do a time series experiment with the GUI/client auto-selecting groups, periods, etc. the dataset
# must have time column and groups prepared ahead of experiment by user or via a one-time data recipe.
#
#included_pretransformers = "[]"

# Auxiliary to included_pretransformers
#excluded_pretransformers = "[]"

#include_all_as_pretransformers_if_none_selected = false

#force_include_all_as_pretransformers_if_none_selected = false

# Number of full pipeline layers
# (not including preprocessing layer when included_pretransformers is not empty).
#
#num_pipeline_layers = 1

# There are 2 kinds of data recipes:
# 1) one that adds a new dataset or modifies a dataset outside the experiment by file/url (pre-experiment data recipe)
# 2) one that modifies a dataset during the experiment and python scoring (run-time data recipe)
# This list applies to the 2nd case. One can use the same data recipe code for either case, but note:
# A) the 1st case can make any new data, but is not part of the scoring package.
# B) the 2nd case modifies data during the experiment, so it needs some original dataset.
# The recipe can still create all new features, as long as it has the same *name* for:
# target, weight_column, fold_column, time_column, time group columns.
#
#included_datas = "[]"

# Auxiliary to included_datas
#excluded_datas = "[]"

# Custom individuals to use in experiment.
# DAI packs most information about model type, model hyperparameters, data science types for input features, transformers used, and transformer parameters into an Individual Recipe (an object that is evolved by mutation within the context of DAI's genetic algorithm).
# Every completed experiment auto-generates python code for the experiment that corresponds to the individual(s) used to build the final model. This auto-generated python code can be edited offline and uploaded as a recipe, or it can be edited within the custom recipe management editor and saved. This allows code-first access to a significant portion of DAI's internal transformer and model generation.
# Choices are:
# * Empty means all individuals are freshly generated and treated by DAI's AutoML as a container of model and transformer choices.
# * Recipe display names of custom individuals, usually chosen via the UI. If the number of included custom individuals is less than DAI would need, then the remaining individuals are freshly generated.
# The expert experiment-level option fixed_num_individuals can be used to enforce how many individuals to use in the evolution stage.
# The expert experiment-level option fixed_ensemble_level can be used to enforce how many individuals (each with one base model) will be used in the final model.
# These individuals act in a similar way as the feature brain acts for restart and retrain/refit, and one can retrain/refit custom individuals (i.e. skip the tuning and evolution stages) to use them in building a final model.
# See toml make_python_code for more details.
#included_individuals = "[]"
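# A sketch of pinning a custom individual (the recipe display name below is
# hypothetical - use the name shown in your custom recipe management UI after
# uploading the individual):
#included_individuals = "['my_custom_individual_recipe']"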

# Auxiliary to included_individuals
#excluded_individuals = "[]"

# Whether to generate python code for the best individuals for the experiment.
# This python code contains a CustomIndividual class that is a recipe that can be edited and customized. The CustomIndividual class itself can also be customized for expert use.
# By default, 'auto' means on.
# At the end of an experiment, the summary zip contains auto-generated python code for the individuals used in the experiment, including the last best population (best_population_indivXX.py where XX iterates the population), last best individual (best_individual.py), and final base models (final_indivYY.py where YY iterates the final base models).
# The summary zip also contains an example_indiv.py file that generates other transformers that may be useful that did not happen to be used in the experiment.
# In addition, the GUI and python client allow one to generate custom individuals from an aborted or finished experiment.
# For finished experiments, this will provide a zip file containing the final_indivYY.py files, and for aborted experiments this will contain the best population and best individual files.
# See included_individuals for more details.
#make_python_code = "auto"

# Whether to generate json code for the best individuals for the experiment.
# This json code contains the essential attributes from the internal DAI
# individual class. Reading the json code as a recipe is not supported.
# By default, 'auto' means off.
#
#make_json_code = "auto"

# Maximum number of genes to make for example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_max = 100

# Minimum number of genes to make for example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_min = 100

# Select the scorer to optimize the binary probability threshold that is being used in related Confusion Matrix based scorers that are trivial to optimize otherwise: Precision, Recall, FalsePositiveRate, FalseDiscoveryRate, FalseOmissionRate, TrueNegativeRate, FalseNegativeRate, NegativePredictiveValue. Use F1 if the target class matters more, and MCC if all classes are equally important. AUTO will try to sync the threshold scorer with the scorer used for the experiment, otherwise falls back to F1. The optimized threshold is also used for creating labels in addition to probabilities in MOJO/Python scorers.
#threshold_scorer = "AUTO"

# Auxiliary to included_scorers
#excluded_scorers = "[]"

# Whether to enable constant models ('auto'/'on'/'off')
#enable_constant_model = "auto"

# Whether to enable Decision Tree models ('auto'/'on'/'off'). 'auto' disables decision tree unless only non-constant model chosen.
#enable_decision_tree = "auto"

# Whether to enable GLM models ('auto'/'on'/'off')
#enable_glm = "auto"

# Whether to enable XGBoost GBM models ('auto'/'on'/'off')
#enable_xgboost_gbm = "auto"

# Whether to enable LightGBM models ('auto'/'on'/'off')
#enable_lightgbm = "auto"

# [DEPRECATED] Whether to enable TensorFlow models ('auto'/'on'/'off')
#enable_tensorflow = "auto"

# Whether to enable PyTorch-based GrowNet models ('auto'/'on'/'off')
#enable_grownet = "auto"

# Whether to enable FTRL (follow the regularized leader) model support ('auto'/'on'/'off')
#enable_ftrl = "auto"

# Whether to enable RuleFit support (beta version, no mojo) ('auto'/'on'/'off')
#enable_rulefit = "auto"

# Whether to enable automatic addition of zero-inflated models for regression problems with zero-inflated target values that meet certain conditions: y >= 0, y.std() > y.mean()
#enable_zero_inflated_models = "auto"

# Whether to use dask_cudf even for 1 GPU. If False, will use plain cudf.
#use_dask_for_1_gpu = false

# Number of retrials for dask fit to protect against known xgboost issues https://github.com/dmlc/xgboost/issues/6272 https://github.com/dmlc/xgboost/issues/6551
#dask_retrials_allreduce_empty_issue = 5

# Whether to enable XGBoost RF mode without early stopping.
# Disabled unless switched on.
#
#enable_xgboost_rf = "auto"

# Whether to enable dask_cudf (multi-GPU) version of XGBoost GBM/RF.
# Disabled unless switched on.
# Only applicable for single final model without early stopping. No Shapley possible.
#
#enable_xgboost_gbm_dask = "auto"

# Whether to enable multi-node LightGBM.
# Disabled unless switched on.
#
#enable_lightgbm_dask = "auto"

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyper parameter tuning during leakage/shift detection.
# Might be useful to find non-trivial leakage/shift, but usually not necessary.
#
#hyperopt_shift_leak = false

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyper parameter tuning during leakage/shift detection,
# when checking each column.
#
#hyperopt_shift_leak_per_column = false

# Number of trials for Optuna hyperparameter optimization for tuning and evolution models.
# 0 means no trials.
# For small data, 100 is an ok choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# If using RAPIDS or DASK, hyperparameter optimization keeps data on GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside inner model fit/predict,
# so this tunes without that overhead.
# However, it can overfit on a single fold when doing tuning or evolution,
# and if using CV then averaging the fold hyperparameters can lead to unexpected results.
#
#num_inner_hyperopt_trials_prefinal = 0
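# For example, to enable a modest Optuna search on a small dataset (the value is
# illustrative; note the caveat above about overfitting on a single fold):
#num_inner_hyperopt_trials_prefinal = 100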

# Number of trials for Optuna hyperparameter optimization for final models.
# 0 means no trials.
# For small data, 100 is an ok choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# Applies to final model only even if num_inner_hyperopt_trials=0.
# If using RAPIDS or DASK, hyperparameter optimization keeps data on GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside inner model fit/predict,
# so this tunes without that overhead.
# However, for final model each fold is independently optimized and can overfit on each fold,
# after which predictions are averaged
# (so no issue with averaging hyperparameters when doing CV with tuning or evolution).
#
#num_inner_hyperopt_trials_final = 0
1970
1971# Number of individuals in final model (all folds/repeats for given base model) to
1972# optimize with Optuna hyperparameter tuning.
1973# -1 means all.
1974# 0 is same as choosing no Optuna trials.
1975# Might be only beneficial to optimize hyperparameters of best individual (i.e. value of 1) in ensemble.
1976#
1977#num_hyperopt_individuals_final = -1
1978
# Optuna Pruner to use (applicable to XGBoost and LightGBM models that support Optuna callbacks). To disable, choose None.
1980#optuna_pruner = "MedianPruner"
1981
1982# Set Optuna constructor arguments for particular applicable pruners.
1983# https://optuna.readthedocs.io/en/stable/reference/pruners.html
1984#
1985#optuna_pruner_kwargs = "{'n_startup_trials': 5, 'n_warmup_steps': 20, 'interval_steps': 20, 'percentile': 25.0, 'min_resource': 'auto', 'max_resource': 'auto', 'reduction_factor': 4, 'min_early_stopping_rate': 0, 'n_brackets': 4, 'min_early_stopping_rate_low': 0, 'upper': 1.0, 'lower': 0.0}"
1986
# Optuna Sampler to use (applicable to XGBoost and LightGBM models that support Optuna callbacks).
1988#optuna_sampler = "TPESampler"
1989
1990# Set Optuna constructor arguments for particular applicable samplers.
1991# https://optuna.readthedocs.io/en/stable/reference/samplers.html
1992#
1993#optuna_sampler_kwargs = "{}"
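
# For example, a sketch of an Optuna setup (illustrative values, not defaults):
# 50 trials per tuning/evolution model, pruned by a percentile pruner.
# num_inner_hyperopt_trials_prefinal = 50
# optuna_pruner = "PercentilePruner"
# optuna_pruner_kwargs = "{'percentile': 25.0, 'n_warmup_steps': 20}"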
1994
1995# Whether to enable Optuna's XGBoost Pruning callback to abort unpromising runs. Not done if tuning learning rate.
1996#enable_xgboost_hyperopt_callback = true
1997
1998# Whether to enable Optuna's LightGBM Pruning callback to abort unpromising runs. Not done if tuning learning rate.
1999#enable_lightgbm_hyperopt_callback = true
2000
2001# Whether to enable XGBoost Dart models ('auto'/'on'/'off')
2002#enable_xgboost_dart = "auto"
2003
2004# Whether to enable dask_cudf (multi-GPU) version of XGBoost Dart.
2005# Disabled unless switched on.
# If only 1 GPU is available, dask_cudf is used only if use_dask_for_1_gpu is True.
2007# Only applicable for single final model without early stopping. No Shapley possible.
2008#
2009#enable_xgboost_dart_dask = "auto"
2010
2011# Whether to enable dask_cudf (multi-GPU) version of XGBoost RF.
2012# Disabled unless switched on.
# If only 1 GPU is available, dask_cudf is used only if use_dask_for_1_gpu is True.
2014# Only applicable for single final model without early stopping. No Shapley possible.
2015#
2016#enable_xgboost_rf_dask = "auto"
2017
2018# Number of GPUs to use per model hyperopt training task. Set to -1 for all GPUs.
2019# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model across a Dask cluster.
2020# Ignored if GPUs disabled or no GPUs on system.
2021# In multinode context, this refers to the per-node value.
2022#
2023#num_gpus_per_hyperopt_dask = -1
2024
2025# Whether to use (and expect exists) xgbfi feature interactions for xgboost.
2026#use_xgboost_xgbfi = false
2027
# Which boosting types to enable for LightGBM (gbdt = boosted trees, rf_early_stopping = random forest with early stopping, rf = random forest (no early stopping), dart = drop-out boosted trees with no early stopping).
2029#enable_lightgbm_boosting_types = "['gbdt']"
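
# For example, to also allow dart and plain rf boosting types to be sampled
# (illustrative, not a recommendation; dart and rf do not use early stopping):
# enable_lightgbm_boosting_types = "['gbdt', 'dart', 'rf']"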
2030
2031# Whether to enable automatic class weighting for imbalanced multiclass problems. Can make worse probabilities, but improve confusion-matrix based scorers for rare classes without the need to manually calibrate probabilities or fine-tune the label creation process.
2032#enable_lightgbm_multiclass_balancing = "auto"
2033
2034# Whether to enable LightGBM categorical feature support (runs in CPU mode even if GPUs enabled, and no MOJO built)
2035#enable_lightgbm_cat_support = false
2036
2037# Whether to enable LightGBM linear_tree handling
2038# (only CPU mode currently, no L1 regularization -- mae objective, and no MOJO build).
2039#
2040#enable_lightgbm_linear_tree = false
2041
2042# Whether to enable LightGBM extra trees mode to help avoid overfitting
2043#enable_lightgbm_extra_trees = false
2044
2045# basic: as fast as when no constraints applied, but over-constrains the predictions.
2046# intermediate: very slightly slower, but much less constraining while still holding monotonicity and should be more accurate than basic.
2047# advanced: slower, but even more accurate than intermediate.
2048#
2049#lightgbm_monotone_constraints_method = "intermediate"
2050
2051# Forbids any monotone splits on the first x (rounded down) level(s) of the tree.
# The penalty applied to monotone splits on a given depth is a continuous,
# increasing function of the penalization parameter.
2054# https://lightgbm.readthedocs.io/en/latest/Parameters.html#monotone_penalty
2055#
2056#lightgbm_monotone_penalty = 0.0
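
# For example, a stricter-but-slower monotonicity setup (illustrative values):
# lightgbm_monotone_constraints_method = "advanced"
# lightgbm_monotone_penalty = 2.0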
2057
2058# Whether to enable LightGBM CUDA implementation instead of OpenCL.
2059# CUDA with LightGBM only supported for Pascal+ (compute capability >=6.0)
2060#enable_lightgbm_cuda_support = false
2061
2062# Whether to show constant models in iteration panel even when not best model.
2063#show_constant_model = false
2064
# Whether to drop constant models from the final ensemble.
#drop_constant_model_final_ensemble = true
2066
2067#xgboost_rf_exact_threshold_num_rows_x_cols = 10000
2068
2069# Select objectives allowed for XGBoost.
2070# Added to allowed mutations (the default reg:squarederror is in sample list 3 times)
2071# Note: logistic, tweedie, gamma, poisson are only valid for targets with positive values.
2072# Note: The objective relates to the form of the (regularized) loss function,
2073# used to determine the split with maximum information gain,
2074# while the metric is the non-regularized metric
2075# measured on the validation set (external or internally generated by DAI).
2076#
2077#xgboost_reg_objectives = "['reg:squarederror']"
2078
2079# Select metrics allowed for XGBoost.
2080# Added to allowed mutations (the default rmse and mae are in sample list twice).
2081# Note: tweedie, gamma, poisson are only valid for targets with positive values.
2082#
2083#xgboost_reg_metrics = "['rmse', 'mae']"
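
# For example, for a strictly positive regression target one might also allow
# tweedie and gamma objectives (illustrative; note both require positive targets,
# and assumes these XGBoost objective names are accepted as listed):
# xgboost_reg_objectives = "['reg:squarederror', 'reg:tweedie', 'reg:gamma']"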
2084
# Select which binary metrics are allowed for XGBoost.
# Added to allowed mutations (all evenly sampled).
2087#xgboost_binary_metrics = "['logloss', 'auc', 'aucpr', 'error']"
2088
2089# Select objectives allowed for LightGBM.
2090# Added to allowed mutations (the default mse is in sample list 2 times if selected).
2091# "binary" refers to logistic regression.
# Note: If quantile, huber, or fair is chosen and the data is not normalized,
# the recommendation is to use params_lightgbm to specify a reasonable
# value of alpha (for quantile or huber) or fairc (for fair) for LightGBM.
# Note: mse is the same as rmse, corresponding to L2 loss; mae is L1 loss.
2096# Note: tweedie, gamma, poisson are only valid for targets with positive values.
2097# Note: The objective relates to the form of the (regularized) loss function,
2098# used to determine the split with maximum information gain,
2099# while the metric is the non-regularized metric
2100# measured on the validation set (external or internally generated by DAI).
2101#
2102#lightgbm_reg_objectives = "['mse', 'mae']"
2103
2104# Select metrics allowed for LightGBM.
2105# Added to allowed mutations (the default rmse is in sample list three times if selected).
# Note: If huber or fair is chosen and the data is not normalized,
# the recommendation is to use params_lightgbm to specify a reasonable
# value of alpha (for huber or quantile) or fairc (for fair) for LightGBM.
2109# Note: tweedie, gamma, poisson are only valid for targets with positive values.
2110#
2111#lightgbm_reg_metrics = "['rmse', 'mse', 'mae']"
2112
2113# Select objectives allowed for LightGBM.
2114# Added to allowed mutations (the default binary is in sample list 2 times if selected)
2115#lightgbm_binary_objectives = "['binary', 'xentropy']"
2116
2117# Select which binary metrics allowed for LightGBM.
2118# Added to allowed mutations (all evenly sampled).
2119#lightgbm_binary_metrics = "['binary', 'binary', 'auc']"
2120
2121# Select which metrics allowed for multiclass LightGBM.
2122# Added to allowed mutations (evenly sampled if selected).
2123#lightgbm_multi_metrics = "['multiclass', 'multi_error']"
2124
2125# tweedie_variance_power parameters to try for XGBoostModel and LightGBMModel if tweedie is used.
2126# First value is default.
2127#tweedie_variance_power_list = "[1.5, 1.2, 1.9]"
2128
2129# huber parameters to try for LightGBMModel if huber is used.
2130# First value is default.
2131#huber_alpha_list = "[0.9, 0.3, 0.5, 0.6, 0.7, 0.8, 0.1, 0.99]"
2132
2133# fair c parameters to try for LightGBMModel if fair is used.
2134# First value is default.
2135#fair_c_list = "[1.0, 0.1, 0.5, 0.9]"
2136
2137# poisson max_delta_step parameters to try for LightGBMModel if poisson is used.
2138# First value is default.
2139#poisson_max_delta_step_list = "[0.7, 0.9, 0.5, 0.2]"
2140
2141# quantile alpha parameters to try for LightGBMModel if quantile is used.
2142# First value is default.
2143#quantile_alpha = "[0.9, 0.95, 0.99, 0.6]"
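
# For example, to target the 90th percentile with quantile regression
# (illustrative; assumes 'quantile' is selected as the objective):
# lightgbm_reg_objectives = "['quantile']"
# quantile_alpha = "[0.9]"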
2144
2145# Default reg_lambda regularization for GLM.
2146#reg_lambda_glm_default = 0.0004
2147
2148#lossguide_drop_factor = 4.0
2149
2150#lossguide_max_depth_extend_factor = 8.0
2151
2152# Parameters for LightGBM to override DAI parameters
2153# e.g. ``'eval_metric'`` instead of ``'metric'`` should be used
2154# e.g. ``params_lightgbm="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
2155# e.g. ``params_lightgbm="{'n_estimators': 600, 'learning_rate': 0.1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'binary', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like ``'objective': 'binary'``, unless one really knows what one is doing (e.g. alternative objectives)
2158# See: https://xgboost.readthedocs.io/en/latest/parameter.html
2159# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Can also pass objective parameters if certain objectives are chosen (or automatically chosen)
2161# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
2162#params_lightgbm = "{}"
2163
2164# Parameters for XGBoost to override DAI parameters
2165# similar parameters as LightGBM since LightGBM parameters are transcribed from XGBoost equivalent versions
2166# e.g. ``params_xgboost="{'n_estimators': 100, 'max_leaves': 64, 'max_depth': 0, 'random_state': 1234}"``
2167# See: https://xgboost.readthedocs.io/en/latest/parameter.html
2168#params_xgboost = "{}"
2169
2170# Like params_xgboost but for XGBoost random forest.
2171#params_xgboost_rf = "{}"
2172
2173# Like params_xgboost but for XGBoost's dart method
2174#params_dart = "{}"
2175
2176# [DEPRECATED] Parameters for TensorFlow to override DAI parameters
2177# e.g. ``params_tensorflow="{'lr': 0.01, 'add_wide': False, 'add_attention': True, 'epochs': 30, 'layers': (100, 100), 'activation': 'selu', 'batch_size': 64, 'chunk_size': 1000, 'dropout': 0.3, 'strategy': '1cycle', 'l1': 0.0, 'l2': 0.0, 'ort_loss': 0.5, 'ort_loss_tau': 0.01, 'normalize_type': 'streaming'}"``
2178# See: https://keras.io/ , e.g. for activations: https://keras.io/activations/
2179# Example layers: ``(500, 500, 500), (100, 100, 100), (100, 100), (50, 50)``
2180# Strategies: ``'1cycle'`` or ``'one_shot'``, See: https://github.com/fastai/fastai
# ``'one_shot'`` is not allowed for ensembles.
2182# normalize_type: 'streaming' or 'global' (using sklearn StandardScaler)
2183#
2184#params_tensorflow = "{}"
2185
2186# Parameters for XGBoost's gblinear to override DAI parameters
2187# e.g. ``params_gblinear="{'n_estimators': 100}"``
2188# See: https://xgboost.readthedocs.io/en/latest/parameter.html
2189#params_gblinear = "{}"
2190
2191# Parameters for Decision Tree to override DAI parameters
2192# parameters should be given as XGBoost equivalent unless unique LightGBM parameter
2193# e.g. ``'eval_metric'`` instead of ``'metric'`` should be used
2194# e.g. ``params_decision_tree="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
2195# e.g. ``params_decision_tree="{'n_estimators': 1, 'learning_rate': 1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'logloss', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like ``'objective': 'binary:logistic'``, unless one really knows what one is doing (e.g. alternative objectives)
2198# See: https://xgboost.readthedocs.io/en/latest/parameter.html
2199# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Can also pass objective parameters if certain objectives are chosen (or automatically chosen)
2201# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
2202#params_decision_tree = "{}"
2203
2204# Parameters for Rulefit to override DAI parameters
2205# e.g. ``params_rulefit="{'max_leaves': 64}"``
2206# See: https://xgboost.readthedocs.io/en/latest/parameter.html
2207#params_rulefit = "{}"
2208
2209# Parameters for FTRL to override DAI parameters
2210#params_ftrl = "{}"
2211
2212# Parameters for GrowNet to override DAI parameters
2213#params_grownet = "{}"
2214
2215# How to handle tomls like params_tune_lightgbm.
2216# override: For any key in the params_tune_ toml dict, use the list of values instead of DAI's list of values.
2217# override_and_first_as_default: like override, but also use first entry in tuple/list (if present) as override as replacement for (e.g.) params_lightgbm when using params_tune_lightgbm.
2218# exclusive: Only tune the keys in the params_tune_ toml dict, unless no keys are present. Otherwise use DAI's default values.
2219# exclusive_and_first_as_default: Like exclusive but same first as default behavior as override_and_first_as_default.
# To fully control hyperparameter tuning, either set "override" mode and include every hyperparameter with at least one value in each list within the dictionary, or choose "exclusive" and rely upon DAI's unchanging default values for any keys not given.
# For custom recipes, one can use recipe_dict to pass hyperparameters; if a custom recipe uses the "get_one()" function and the user_tune passed in contains the hyperparameter dictionary equivalent of the params_tune_ tomls, then this params_tune_mode will also work for custom recipes.
2222#params_tune_mode = "override_and_first_as_default"
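
# For example, to tune only two LightGBM keys and leave everything else at
# DAI defaults (illustrative values):
# params_tune_mode = "exclusive"
# params_tune_lightgbm = "{'min_child_samples': [1, 10, 100], 'min_data_in_bin': [1, 10, 100]}"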
2223
2224# Whether to adjust GBM trees, learning rate, and early_stopping_rounds for GBM models or recipes with _is_gbm=True.
2225# True: auto mode, that changes trees/LR/stopping if tune_learning_rate=false and early stopping is supported by the model and model is GBM or from custom individual with parameter in adjusted_params.
2226# False: disable any adjusting from tuning-evolution into final model.
# Setting this to false is required if (e.g.) one changes params_lightgbm or params_tune_lightgbm and wants to preserve the tuning-evolution values into the final model.
2228# One should also set tune_learning_rate to true to tune the learning_rate, else it will be fixed to some single value.
2229#params_final_auto_adjust = true
2230
2231# Dictionary of key:lists of values to use for LightGBM tuning, overrides DAI's choice per key
2232# e.g. ``params_tune_lightgbm="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
2233#params_tune_lightgbm = "{}"
2234
2235# Like params_tune_lightgbm but for XGBoost
2236# e.g. ``params_tune_xgboost="{'max_leaves': [8, 16, 32, 64]}"``
2237#params_tune_xgboost = "{}"
2238
2239# Like params_tune_lightgbm but for XGBoost random forest
2240# e.g. ``params_tune_xgboost_rf="{'max_leaves': [8, 16, 32, 64]}"``
2241#params_tune_xgboost_rf = "{}"
2242
2243# Dictionary of key:lists of values to use for LightGBM Decision Tree tuning, overrides DAI's choice per key
2244# e.g. ``params_tune_decision_tree="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
2245#params_tune_decision_tree = "{}"
2246
2247# Like params_tune_lightgbm but for XGBoost's Dart
2248# e.g. ``params_tune_dart="{'max_leaves': [8, 16, 32, 64]}"``
2249#params_tune_dart = "{}"
2250
2251# [DEPRECATED] Like params_tune_lightgbm but for TensorFlow
2252# e.g. ``params_tune_tensorflow="{'layers': [(10,10,10), (10, 10, 10, 10)]}"``
2253#params_tune_tensorflow = "{}"
2254
2255# Like params_tune_lightgbm but for gblinear
2256# e.g. ``params_tune_gblinear="{'reg_lambda': [.01, .001, .0001, .0002]}"``
2257#params_tune_gblinear = "{}"
2258
2259# Like params_tune_lightgbm but for rulefit
2260# e.g. ``params_tune_rulefit="{'max_depth': [4, 5, 6]}"``
2261#params_tune_rulefit = "{}"
2262
2263# Like params_tune_lightgbm but for ftrl
2264#params_tune_ftrl = "{}"
2265
2266# Like params_tune_lightgbm but for GrowNet
2267# e.g. ``params_tune_grownet="{'input_dropout': [0.2, 0.5]}"``
2268#params_tune_grownet = "{}"
2269
2270# Whether to force max_leaves and max_depth to be 0 if grow_policy is depthwise and lossguide, respectively.
2271#params_tune_grow_policy_simple_trees = true
2272
2273# Maximum number of GBM trees or GLM iterations. Can be reduced for lower accuracy and/or higher interpretability.
# Early stopping usually chooses fewer. Ignored if fixed_max_nestimators is > 0.
2275#
2276#max_nestimators = 3000
2277
2278# Fixed maximum number of GBM trees or GLM iterations. If > 0, ignores max_nestimators and disables automatic reduction
2279# due to lower accuracy or higher interpretability. Early-stopping usually chooses less.
2280#
2281#fixed_max_nestimators = -1
2282
2283# LightGBM dart mode and normal rf mode do not use early stopping,
2284# and they will sample from these values for n_estimators.
2285# XGBoost Dart mode will also sample from these n_estimators.
2286# Also applies to XGBoost Dask models that do not yet support early stopping or callbacks.
2287# For default parameters it chooses first value in list, while mutations sample from the list.
2288#
2289#n_estimators_list_no_early_stopping = "[50, 100, 150, 200, 250, 300]"
2290
2291# Lower limit on learning rate for final ensemble GBM models.
2292# In some cases, the maximum number of trees/iterations is insufficient for the final learning rate,
# which can lead to early stopping never being triggered and poor final model performance.
2294# Then, one can try increasing the learning rate by raising this minimum,
2295# or one can try increasing the maximum number of trees/iterations.
2296#
2297#min_learning_rate_final = 0.01
2298
2299# Upper limit on learning rate for final ensemble GBM models
2300#max_learning_rate_final = 0.05
2301
2302# factor by which max_nestimators is reduced for tuning and feature evolution
2303#max_nestimators_feature_evolution_factor = 0.2
2304
2305# Lower limit on learning rate for feature engineering GBM models
2306#min_learning_rate = 0.05
2307
2308# Upper limit on learning rate for GBM models
2309# If want to override min_learning_rate and min_learning_rate_final, set this to smaller value
2310#
2311#max_learning_rate = 0.5
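
# For example, to pin the feature-evolution learning rate to a single value
# (illustrative; set the min and max to the same number):
# min_learning_rate = 0.02
# max_learning_rate = 0.02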
2312
2313# Whether to lock learning rate, tree count, early stopping rounds for GBM algorithms to the final model values.
2314#lock_ga_to_final_trees = false
2315
2316# Whether to tune learning rate for GBM algorithms (if not doing just single final model).
2317# If tuning with Optuna, might help isolate optimal learning rate.
2318#
2319#tune_learning_rate = false
2320
2321# Max. number of epochs for FTRL models
2322#max_epochs = 50
2323
2324# [DEPRECATED] Number of epochs for TensorFlow when larger data size.
2325#max_epochs_tf_big_data = 5
2326
2327# Maximum tree depth (and corresponding max max_leaves as 2**max_max_depth)
2328#max_max_depth = 12
2329
2330# Default max_bin for tree methods
2331#default_max_bin = 256
2332
2333# Default max_bin for LightGBM (64 recommended for GPU LightGBM for speed)
2334#default_lightgbm_max_bin = 249
2335
2336# Maximum max_bin for tree features
2337#max_max_bin = 256
2338
2339# Minimum max_bin for any tree
2340#min_max_bin = 32
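
# For example, to speed up GPU LightGBM with the smaller bin count recommended
# in the default_lightgbm_max_bin note above (illustrative):
# default_lightgbm_max_bin = 64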
2341
# Amount of system memory at which max_bin = 256 can handle 125 columns and max_bin = 32 can handle 1000 columns.
# As available memory on the system rises above this scale, proportionally more columns can be handled at higher max_bin.
# Currently set to 10GB.
2345#scale_mem_for_max_bin = 10737418240
2346
2347# Factor by which rf gets more depth than gbdt
2348#factor_rf = 1.25
2349
2350# For Pytorch Image fitting including both models and transformers. See also max_fit_cores for all models.
2351#image_max_cores = 4
2352
2353# [DEPRECATED] Whether TensorFlow will use all CPU cores, or if it will split among all transformers. Only for transformers, not TensorFlow model.
2354#tensorflow_use_all_cores = true
2355
2356# [DEPRECATED] Whether TensorFlow will use all CPU cores if reproducible is set, or if it will split among all transformers
2357#tensorflow_use_all_cores_even_if_reproducible_true = false
2358
2359# [DEPRECATED] Whether to disable TensorFlow memory optimizations. Can help fix tensorflow.python.framework.errors_impl.AlreadyExistsError
2360#tensorflow_disable_memory_optimization = true
2361
2362# [DEPRECATED] How many cores to use for each TensorFlow model, regardless if GPU or CPU based (0 = auto mode)
2363#tensorflow_cores = 0
2364
2365# [DEPRECATED] For TensorFlow models, maximum number of cores to use if tensorflow_cores=0 (auto mode), because TensorFlow model is inefficient at using many cores. See also max_fit_cores for all models.
2366#tensorflow_model_max_cores = 4
2367
2368# How many cores to use for each Bert Model and Transformer, regardless if GPU or CPU based (0 = auto mode)
2369#bert_cores = 0
2370
2371# Whether Bert will use all CPU cores, or if it will split among all transformers. Only for transformers, not Bert model.
2372#bert_use_all_cores = true
2373
2374# For Bert models, maximum number of cores to use if bert_cores=0 (auto mode), because Bert model is inefficient at using many cores. See also max_fit_cores for all models.
2375#bert_model_max_cores = 8
2376
2377# Max number of rules to be used for RuleFit models (-1 for all)
2378#rulefit_max_num_rules = -1
2379
2380# Max tree depth for RuleFit models
2381#rulefit_max_tree_depth = 6
2382
2383# Max number of trees for RuleFit models
2384#rulefit_max_num_trees = 500
2385
# Enable One-Hot-Encoding (which does binning to limit the number of bins to no more than 100 anyway) for categorical columns with fewer than this many unique values
2387# Set to 0 to disable
2388#one_hot_encoding_cardinality_threshold = 50
2389
# Number of levels below which one-hot encoding is chosen by default instead of other encodings; restricted down to 10x less (down to 2 levels) when the number of columns able to be used with OHE exceeds 500. Note the total number of bins is reduced for bigger data, independently of this.
2391#one_hot_encoding_cardinality_threshold_default_use = 40
2392
2393# Treat text columns also as categorical columns if the cardinality is <= this value.
2394# Set to 0 to treat text columns only as text.
2395#text_as_categorical_cardinality_threshold = 1000
2396
2397# If num_as_cat is true, then treat numeric columns also as categorical columns if the cardinality is > this value.
2398# Setting to 0 allows all numeric to be treated as categorical if num_as_cat is True.
2399#numeric_as_categorical_cardinality_threshold = 2
2400
2401# If num_as_cat is true, then treat numeric columns also as categorical columns to possibly one-hot encode if the cardinality is > this value.
# Setting to 0 allows all numeric columns to be treated as categorical for possible one-hot encoding if num_as_cat is True.
2403#numeric_as_ohe_categorical_cardinality_threshold = 2
2404
# Whether to show the actual categorical levels in one-hot encoded feature names.
#one_hot_encoding_show_actual_levels_in_features = false
2406
2407# Fixed ensemble_level
2408# -1 = auto, based upon ensemble_accuracy_switch, accuracy, size of data, etc.
2409# 0 = No ensemble, only final single model on validated iteration/tree count
2410# 1 = 1 model, multiple ensemble folds (cross-validation)
2411# >=2 = >=2 models, multiple ensemble folds (cross-validation)
2412#
2413#fixed_ensemble_level = -1
2414
2415# If enabled, use cross-validation to determine optimal parameters for single final model,
2416# and to be able to create training holdout predictions.
2417#cross_validate_single_final_model = true
2418
2419# Model to combine base model predictions, for experiments that create a final pipeline
2420# consisting of multiple base models.
2421# blender: Creates a linear blend with non-negative weights that add to 1 (blending) - recommended
2422# extra_trees: Creates a tree model to non-linearly combine the base models (stacking) - experimental, and recommended to also set enable cross_validate_meta_learner.
2423# neural_net: Creates a neural net model to non-linearly combine the base models (stacking) - experimental, and recommended to also set enable cross_validate_meta_learner.
2424#
2425#ensemble_meta_learner = "blender"
2426
2427# If enabled, use cross-validation to create an ensemble for the meta learner itself. Especially recommended for
2428# ``ensemble_meta_learner='extra_trees'``, to make unbiased training holdout predictions.
2429# Will disable MOJO if enabled. Not needed for ``ensemble_meta_learner='blender'``."
2430#
2431#cross_validate_meta_learner = false
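
# For example, a 3-model stacked ensemble with an extra-trees meta learner
# (illustrative; disables MOJO per the note above):
# fixed_ensemble_level = 3
# ensemble_meta_learner = "extra_trees"
# cross_validate_meta_learner = true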
2432
2433# Number of models to tune during pre-evolution phase
2434# Can make this lower to avoid excessive tuning, or make higher to do enhanced tuning.
2435# ``-1 : auto``
2436#
2437#parameter_tuning_num_models = -1
2438
2439# Number of models (out of all parameter_tuning_num_models) to have as SEQUENCE instead of random features/parameters.
2440# ``-1 : auto, use at least one default individual per model class tuned``
2441#
2442#parameter_tuning_num_models_sequence = -1
2443
2444# Number of models to add during tuning that cover other cases, like for TS having no TE on time column groups.
2445# ``-1 : auto, adds additional models to protect against overfit on high-gain training features.``
2446#
2447#parameter_tuning_num_models_extra = -1
2448
2449# Dictionary of model class name (keys) and number (values) of instances.
2450#num_tuning_instances = "{}"
2451
2452#validate_meta_learner = true
2453
2454#validate_meta_learner_extra = false
2455
2456# Specify the fixed number of cross-validation folds (if >= 2) for feature evolution. (The actual number of splits allowed can be less and is determined at experiment run-time).
2457#fixed_num_folds_evolution = -1
2458
2459# Specify the fixed number of cross-validation folds (if >= 2) for the final model. (The actual number of splits allowed can be less and is determined at experiment run-time).
2460#fixed_num_folds = -1
2461
# Set to "on" to force using only the first fold for models - useful for quick runs regardless of data
2463#fixed_only_first_fold_model = "auto"
2464
2465# Set the number of repeated cross-validation folds for feature evolution and final models (if > 0), 0 is default. Only for ensembles that do cross-validation (so no external validation and not time-series), not for single final models.
2466#fixed_fold_reps = 0
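
# For example, repeated 5-fold cross-validation for the final model
# (illustrative values):
# fixed_num_folds = 5
# fixed_fold_reps = 2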
2467
2468#num_fold_ids_show = 10
2469
2470#fold_scores_instability_warning_threshold = 0.25
2471
2472# Upper limit on the number of rows x number of columns for feature evolution (applies to both training and validation/holdout splits)
2473# feature evolution is the process that determines which features will be derived.
2474# Depending on accuracy settings, a fraction of this value will be used
2475#
2476#feature_evolution_data_size = 300000000
2477
2478# Upper limit on the number of rows x number of columns for training final pipeline.
2479#
2480#final_pipeline_data_size = 1000000000
2481
2482# Whether to automatically limit validation data size using feature_evolution_data_size (giving max_rows_feature_evolution shown in logs) for tuning-evolution, and using final_pipeline_data_size, max_validation_to_training_size_ratio_for_final_ensemble for final model.
2483#limit_validation_size = true
2484
2485# Smaller values can speed up final pipeline model training, as validation data is only used for early stopping.
2486# Note that final model predictions and scores will always be provided on the full dataset provided.
2487#
2488#max_validation_to_training_size_ratio_for_final_ensemble = 2.0
2489
2490# Ratio of minority to majority class of the target column beyond which stratified sampling is done for binary classification. Otherwise perform random sampling. Set to 0 to always do random sampling. Set to 1 to always do stratified sampling.
2491#force_stratified_splits_for_imbalanced_threshold_binary = 0.01
2492
2493#force_stratified_splits_for_binary_max_rows = 1000000
2494
2495# Specify whether to do stratified sampling for validation fold creation for iid regression problems. Otherwise perform random sampling.
2496#stratify_for_regression = true
2497
2498# Sampling method for imbalanced binary classification problems. Choices are:
2499# "auto": sample both classes as needed, depending on data
2500# "over_under_sampling": over-sample the minority class and under-sample the majority class, depending on data
2501# "under_sampling": under-sample the majority class to reach class balance
2502# "off": do not perform any sampling
2503#
2504#imbalance_sampling_method = "off"
2505
# For smaller data, there's generally no benefit in using imbalanced sampling methods.
2507#imbalance_sampling_threshold_min_rows_original = 100000
2508
2509# For imbalanced binary classification: ratio of majority to minority class equal and above which to enable
2510# special imbalanced models with sampling techniques (specified by imbalance_sampling_method) to attempt to improve model performance.
2511#
2512#imbalance_ratio_sampling_threshold = 5
2513
2514# For heavily imbalanced binary classification: ratio of majority to minority class equal and above which to enable only
2515# special imbalanced models on full original data, without upfront sampling.
2516#
2517#heavy_imbalance_ratio_sampling_threshold = 25
2518
2519# Special handling can include special models, special scorers, special feature engineering.
2520#
2521#imbalance_ratio_multiclass_threshold = 5
2522
2523# Special handling can include special models, special scorers, special feature engineering.
2524#
2525#heavy_imbalance_ratio_multiclass_threshold = 25
2526
2527# -1: automatic
2528#imbalance_sampling_number_of_bags = -1
2529
2530# -1: automatic
2531#imbalance_sampling_max_number_of_bags = 10
2532
2533# Only for shift/leakage/tuning/feature evolution models. Not used for final models. Final models can
2534# be limited by imbalance_sampling_max_number_of_bags.
2535#imbalance_sampling_max_number_of_bags_feature_evolution = 3
2536
2537# Max. size of data sampled during imbalanced sampling (in terms of dataset size),
2538# controls number of bags (approximately). Only for imbalance_sampling_number_of_bags == -1.
2539#imbalance_sampling_max_multiple_data_size = 1.0
2540
2541# Rank averaging can be helpful when ensembling diverse models when ranking metrics like AUC/Gini
2542# metrics are optimized. No MOJO support yet.
2543#imbalance_sampling_rank_averaging = "auto"
2544
2545# A value of 0.5 means that models/algorithms will be presented a balanced target class distribution
2546# after applying under/over-sampling techniques on the training data. Sometimes it makes sense to
2547# choose a smaller value like 0.1 or 0.01 when starting from an extremely imbalanced original target
2548# distribution. -1.0: automatic
2549#imbalance_sampling_target_minority_fraction = -1.0
2550
2551# For binary classification: ratio of majority to minority class equal and above which to notify
2552# of imbalance in GUI to say slightly imbalanced.
2553# More than ``imbalance_ratio_sampling_threshold`` will say problem is imbalanced.
2554#
2555#imbalance_ratio_notification_threshold = 2.0
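As an illustrative sketch (values are examples, not tuned recommendations), an administrator who wants sampling to kick in earlier for imbalanced binary problems could uncomment and adjust these keys in config.toml:

```toml
# Example: enable combined over/under-sampling and treat a 3:1
# majority-to-minority ratio (or worse) as imbalanced.
imbalance_sampling_method = "over_under_sampling"
imbalance_ratio_sampling_threshold = 3
# Aim for a 30% minority fraction after sampling.
imbalance_sampling_target_minority_fraction = 0.3
```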

# List of possible bins for FTRL (largest is default best value)
#nbins_ftrl_list = "[1000000, 10000000, 100000000]"

# Samples the number of automatic FTRL interaction terms to no more than this value (for each of 2nd, 3rd, 4th order terms)
#ftrl_max_interaction_terms_per_degree = 10000

# List of possible bins for target encoding (first is default value)
#te_bin_list = "[25, 10, 100, 250]"

# List of possible bins for weight of evidence encoding (first is default value)
# If only want one value: woe_bin_list = [2]
#woe_bin_list = "[25, 10, 100, 250]"

# List of possible bins for one-hot encoding (first is default value). If left as default, the actual list is changed for given data size and dials.
#ohe_bin_list = "[10, 25, 50, 75, 100]"

# List of max possible number of bins for numeric binning (first is default value). If left as default, the actual list is changed for given data size and dials. The binner will automatically reduce the number of bins based on predictive power.
#binner_bin_list = "[5, 10, 20]"

# If dataset has more columns, then will check only first such columns. Set to 0 to disable.
#drop_redundant_columns_limit = 1000

# Whether to drop columns with constant values
#drop_constant_columns = true

# Whether to detect duplicate rows in training, validation and testing datasets. Done after doing type detection and dropping of redundant or missing columns across datasets, just before the experiment starts, still before leakage detection. Any further dropping of columns can change the amount of duplicate rows. Informative only; to drop rows in training data, make sure to check the drop_duplicate_rows setting. Uses a sample size, given by detect_duplicate_rows_max_rows_x_cols.
#detect_duplicate_rows = true

# Timeout in seconds for dropping duplicate rows in training data (see drop_duplicate_rows).
#drop_duplicate_rows_timeout = 60

# Whether to drop duplicate rows in training data. Done at the start of Driverless AI, only considering columns to drop as given by the user, not considering validation or training datasets or leakage or redundant columns. Any further dropping of columns can change the amount of duplicate rows. Time limited by drop_duplicate_rows_timeout seconds.
# 'auto': same as 'off'
# 'weight': If duplicates, then convert dropped duplicates into a weight column for training. Useful when duplicates are added to preserve some expected distribution of instances. Only allowed if no weight column is present, else duplicates are just dropped.
# 'drop': Drop any duplicates, keeping only first instances.
# 'off': Do not drop any duplicates. This may lead to over-estimation of accuracy.
#drop_duplicate_rows = "auto"

# If > 0, then acts as sampling size for informative duplicate row detection. If set to 0, will do checks for all dataset sizes.
#detect_duplicate_rows_max_rows_x_cols = 10000000
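For example, a sketch of converting dropped training duplicates into a weight column while capping the informative duplicate check (illustrative values only):

```toml
# Example: preserve the duplicate distribution via a weight column
# instead of silently dropping repeated rows.
drop_duplicate_rows = "weight"
drop_duplicate_rows_timeout = 120
# Sample at most 5M cells (rows x cols) for duplicate detection.
detect_duplicate_rows_max_rows_x_cols = 5000000
```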

# Whether to drop columns that appear to be an ID
#drop_id_columns = true

# Whether to avoid dropping any columns (original or derived)
#no_drop_features = false

# Direct control over columns to drop in bulk, so one can copy-paste large lists instead of selecting each one separately in the GUI
#cols_to_drop = "[]"

#cols_to_drop_sanitized = "[]"

# Control over columns to group by for the CVCatNumEncode Transformer; default is an empty list, which means DAI automatically searches all columns,
# selected randomly or by which have top variable importance.
# The CVCatNumEncode Transformer takes a list of categoricals (or these cols_to_group_by) and uses those columns
# as new features to perform aggregations on (agg_funcs_for_group_by).
#cols_to_group_by = "[]"

#cols_to_group_by_sanitized = "[]"

# Whether to sample from given features to group by (True) or to always group by all features (False) when using cols_to_group_by.
#sample_cols_to_group_by = false

# Aggregation functions to use for groupby operations for the CVCatNumEncode Transformer, see also cols_to_group_by and sample_cols_to_group_by.
#agg_funcs_for_group_by = "['mean', 'sd', 'min', 'max', 'count']"

# Out-of-fold aggregations ensure less overfitting, but see less data in each fold. Controls how many folds are used by the CVCatNumEncode Transformer.
#folds_for_group_by = 5
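For instance, a hypothetical retail dataset (the column names store_id and department are placeholders) could restrict groupby aggregations as follows; note that list-valued options are passed as strings:

```toml
# Example: only aggregate grouped by these two (hypothetical)
# categorical columns, with a reduced set of aggregation functions.
cols_to_group_by = "['store_id', 'department']"
agg_funcs_for_group_by = "['mean', 'min', 'max']"
folds_for_group_by = 5
```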

# Control over columns to force in. Forced-in features are handled by the most interpretable transformer allowed by experiment
# options, and they are never removed (although the model may still assign 0 importance to them).
# Transformers used by default include:
# OriginalTransformer for numeric,
# CatOriginalTransformer or FrequencyTransformer for categorical,
# TextOriginalTransformer for text,
# DateTimeOriginalTransformer for date-times,
# DateOriginalTransformer for dates,
# ImageOriginalTransformer, ImageVectorizerTransformer, ImageVectorizerV2Transformer for images,
# etc.
#cols_to_force_in = "[]"

#cols_to_force_in_sanitized = "[]"
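A minimal sketch of forcing columns into the pipeline (age and income are hypothetical column names), using the same string-encoded list syntax as the default above:

```toml
# Example: never drop these two original columns.
cols_to_force_in = "['age', 'income']"
```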

# Strategy to apply when doing mutations on transformers.
# Sample mode is default, with a tendency to sample transformer parameters.
# Batched mode tends to do multiple types of the same transformation together.
# Full mode does even more types of the same transformation together.
#
#mutation_mode = "sample"

# 'baseline': Explore an exemplar set of models with baselines as reference.
# 'random': Explore 10 random seeds for the same setup. Useful since the genetic algorithm is noisy by nature and repeats might get better results, or one can ensemble the custom individuals from such repeats.
# 'line': Explore a good model with all features and original features with all models. Useful as a first exploration.
# 'line_all': Like 'line', but enable all models and transformers possible instead of only what the base experiment setup would have inferred.
# 'product': Explore one-by-one Cartesian product of each model and transformer. Useful for exhaustive exploration.
#leaderboard_mode = "baseline"

# Controls whether users can launch an experiment in Leaderboard mode from the UI.
#leaderboard_off = false

# Allows control over default accuracy knob setting.
# If default models are too complex, set to -1 or -2, etc.
# If default models are not accurate enough, set to 1 or 2, etc.
#
#default_knob_offset_accuracy = 0

# Allows control over default time knob setting.
# If default experiments are too slow, set to -1 or -2, etc.
# If default experiments finish too fast, set to 1 or 2, etc.
#
#default_knob_offset_time = 0

# Allows control over default interpretability knob setting.
# If default models are too simple, set to -1 or -2, etc.
# If default models are too complex, set to 1 or 2, etc.
#
#default_knob_offset_interpretability = 0
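For example, to bias all new experiments toward faster, more interpretable defaults, the three knob offsets can be combined (illustrative values):

```toml
# Example: default experiments one notch less accurate and faster,
# and one notch more interpretable.
default_knob_offset_accuracy = -1
default_knob_offset_time = -1
default_knob_offset_interpretability = 1
```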

# Whether to enable checking text for shift, currently only via label encoding.
#shift_check_text = false

# Whether to use LightGBM random forest mode without early stopping for shift detection.
#use_rf_for_shift_if_have_lgbm = true

# Normalized training variable importance above which to check the feature for shift.
# Useful to avoid checking likely unimportant features.
#shift_key_features_varimp = 0.01

# Whether to only check certain features based upon the value of shift_key_features_varimp
#shift_check_reduced_features = true

# Number of trees to use to train model to check shift in distribution.
# No larger than max_nestimators.
#shift_trees = 100

# The value of max_bin to use for trees used to train model to check shift in distribution
#shift_max_bin = 256

# The min. value of max_depth to use for trees used to train model to check shift in distribution
#shift_min_max_depth = 4

# The max. value of max_depth to use for trees used to train model to check shift in distribution
#shift_max_max_depth = 8

# If distribution shift detection is enabled, show features for which shift AUC is above this value
# (AUC of a binary classifier that predicts whether a given feature value belongs to train or test data)
#detect_features_distribution_shift_threshold_auc = 0.55

# Minimum number of features to keep; if 1, at least the least-shifted feature is kept.
#drop_features_distribution_shift_min_features = 1

# Shift beyond which shows HIGH notification, else MEDIUM
#shift_high_notification_level = 0.8

# Whether to enable checking text for leakage, currently only via label encoding.
#leakage_check_text = true

# Normalized training variable importance (per 1 minus AUC/R2 to control for leaky varimp dominance) above which to check the feature for leakage.
# Useful to avoid checking likely unimportant features.
#leakage_key_features_varimp = 0.001

# Like leakage_key_features_varimp, but applies if early stopping is disabled, when multiple leaks can be trusted to get uniform varimp.
#leakage_key_features_varimp_if_no_early_stopping = 0.05

# Whether to only check certain features based upon the value of leakage_key_features_varimp. If any feature has AUC near 1, it will consume all variable importance, even if another feature is also leaky. So False is the safest option, but True is generally good if there are many columns.
#leakage_check_reduced_features = true

# Whether to use LightGBM random forest mode without early stopping for leakage detection.
#use_rf_for_leakage_if_have_lgbm = true

# Number of trees to use to train model to check for leakage.
# No larger than max_nestimators.
#leakage_trees = 100

# The value of max_bin to use for trees used to train model to check for leakage
#leakage_max_bin = 256

# The min. value of max_depth to use for trees used to train model to check for leakage
#leakage_min_max_depth = 6

# The max. value of max_depth to use for trees used to train model to check for leakage
#leakage_max_max_depth = 8

# When leakage detection is enabled, if AUC (R2 for regression) on original data (label-encoded)
# is above or equal to this value, then trigger per-feature leakage detection
#
#detect_features_leakage_threshold_auc = 0.95

# When leakage detection is enabled, show features for which AUC (R2 for regression,
# for whether that predictor/feature alone predicts the target) is above or equal to this value.
# Feature is dropped if AUC/R2 is above or equal to drop_features_leakage_threshold_auc
#
#detect_features_per_feature_leakage_threshold_auc = 0.8

# Minimum number of features to keep; if 1, at least the least-leaky feature is kept.
#drop_features_leakage_min_features = 1

# Ratio of train to validation holdout when testing for leakage
#leakage_train_test_split = 0.25
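As a sketch, a stricter leakage-detection setup might lower both trigger thresholds (values are illustrative):

```toml
# Example: run per-feature leakage checks once the label-encoded
# model reaches AUC 0.9, and flag single features at AUC 0.75.
detect_features_leakage_threshold_auc = 0.9
detect_features_per_feature_leakage_threshold_auc = 0.75
leakage_train_test_split = 0.25
```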

# Whether to enable detailed traces (in GUI Trace)
#detailed_traces = false

# Whether to enable debug log level (in log files)
#debug_log = false

# Whether to add logging of system information such as CPU, GPU, disk space at the start of each experiment log. Same information is already logged in system logs.
#log_system_info_per_experiment = true

#check_system = true

#check_system_basic = true

# How close to the optimal value (usually 1 or 0) does the validation score need to be to be considered perfect (to stop the experiment)?
#abs_tol_for_perfect_score = 0.0001

# Timeout in seconds to wait for data ingestion.
#data_ingest_timeout = 86400.0

# How many seconds to allow mutate to take; nominally it only takes a few seconds at most, but on a busy system doing many individuals it might take longer. Optuna sometimes hangs in a livelock in scipy's random distribution maker.
#mutate_timeout = 600

# Whether to trust GPU locking for submission of GPU jobs to limit memory usage.
# If False, then wait for GPU submissions to number fewer than the number of GPUs,
# even if later jobs could be purely CPU jobs that did not need to wait.
# Only applicable if not restricting number of GPUs via num_gpus_per_experiment,
# else have to use resources instead of relying upon locking.
#
#gpu_locking_trust_pool_submission = true

# Whether to steal GPU locks when process is neither on GPU PID list nor using CPU resources at all (e.g. sleeping). Only steal from multi-GPU locks that are incomplete. Prevents deadlocks in case multi-GPU model hangs.
#gpu_locking_free_dead = true

#tensorflow_allow_cpu_only = false

#check_pred_contribs_sum = false

#debug_daimodel_level = 0

#debug_debug_xgboost_splits = false

#log_predict_info = true

#log_fit_info = true

# Amount of time to stall (in seconds) before killing the job (assumes it hung). Reference time is scaled by train data shape of rows * cols to get used stalled_time_kill
#stalled_time_kill_ref = 440.0

# Amount of time between checks for some process taking a long time; every cycle the full process list will be dumped to console or experiment logs if possible.
#long_time_psdump = 1800

# Whether to dump ps every long_time_psdump
#do_psdump = false

# Whether to check every long_time_psdump seconds and send SIGUSR1 to all children to see where they may be stuck or taking a long time.
#livelock_signal = false

# Value to override number of sockets, in case DAI's determination is wrong, for non-trivial systems. 0 means auto.
#num_cpu_sockets_override = 0

# Value to override number of GPUs, in case DAI's determination is wrong, for non-trivial systems. -1 means auto. Can also set min_num_cores_per_gpu=-1 to allow any number of GPUs for each experiment regardless of number of cores.
#num_gpus_override = -1

# Whether to show GPU usage only when locking. 'auto' means 'on' if num_gpus_override is different than actual total visible GPUs, else it means 'off'
#show_gpu_usage_only_if_locked = "auto"

# Show inapplicable models in preview, to be sure not to miss models one could have used
#show_inapplicable_models_preview = false

# Show inapplicable transformers in preview, to be sure not to miss transformers one could have used
#show_inapplicable_transformers_preview = false

# Show warnings for models (image auto, Dask multinode/multi-GPU) if conditions are met to use them but they were not chosen, to avoid missing models that could benefit accuracy/performance
#show_warnings_preview = false

# Show warnings for models that have no transformers for certain features.
#show_warnings_preview_unused_map_features = true

# Up to how many input features to consider, during GUI/client preview, when determining unused features. Too many slows preview down.
#max_cols_show_unused_features = 1000

# Up to how many input features to show transformers used for each input feature.
#max_cols_show_feature_transformer_mapping = 1000

# Up to how many input features to show, in preview, that are unused features.
#warning_unused_feature_show_max = 3

#interaction_finder_max_rows_x_cols = 200000.0

#interaction_finder_corr_threshold = 0.95

# Required GINI relative improvement for InteractionTransformer.
# If GINI is not better than this relative improvement compared to original features considered
# in the interaction, then the interaction is not returned. If noisy data, and no clear signal
# in interactions but still want interactions, then can decrease this number.
#interaction_finder_gini_rel_improvement_threshold = 0.5

# Number of transformed Interactions to make, as best out of many generated trial interactions.
#interaction_finder_return_limit = 5

# Whether to enable bootstrap sampling. Provides error bars to validation and test scores based on the standard error of the bootstrap mean.
#enable_bootstrap = true

# Minimum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
#
#min_bootstrap_samples = 1

# Maximum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
#
#max_bootstrap_samples = 100

# Minimum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
#
#min_bootstrap_sample_size_factor = 1.0

# Maximum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
#
#max_bootstrap_sample_size_factor = 10.0

# Seed to use for final model bootstrap sampling, -1 means use experiment-derived seed.
# E.g. one can retrain final model with different seed to get different final model error bars for scores.
#
#bootstrap_final_seed = -1
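For example, to get reproducible error bars with a tighter range of bootstrap samples (illustrative values):

```toml
# Example: between 5 and 50 bootstrap samples, with a fixed seed
# so repeated final-model fits give identical error bars.
min_bootstrap_samples = 5
max_bootstrap_samples = 50
bootstrap_final_seed = 1234
```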

# Benford's law: mean absolute deviation threshold equal and above which integer-valued columns are treated as categoricals too
#benford_mad_threshold_int = 0.03

# Benford's law: mean absolute deviation threshold equal and above which real-valued columns are treated as categoricals too
#benford_mad_threshold_real = 0.1

# Variable importance below which a feature is dropped (with possible replacement found that is better)
# This also sets the overall scale for lower interpretability settings.
# Set to a lower value if ok with many weak features despite choosing high interpretability,
# or if you see a drop in performance due to the need for weak features.
#
#varimp_threshold_at_interpretability_10 = 0.001

# Whether to avoid setting stabilize_varimp=false and stabilize_fs=false for time series experiments.
#allow_stabilize_varimp_for_ts = false

# Variable importance is used by the genetic algorithm to decide which features are useful,
# so this can stabilize the feature selection by the genetic algorithm.
# This is by default disabled for time series experiments, which can have truly diverse behavior in each split.
# But in some cases feature selection is improved in the presence of highly shifted variables that are not handled
# by lag transformers, and one can set allow_stabilize_varimp_for_ts=true.
#
#stabilize_varimp = true

# Whether to take minimum (True) or mean (False) of delta improvement in score when aggregating feature selection scores across multiple folds/depths.
# Delta improvement of score corresponds to original metric minus metric of shuffled feature frame if maximizing metric,
# and corresponds to negative of such a score difference if minimizing.
# Feature selection by permutation importance considers the change in score after shuffling a feature, and using the minimum operation
# ignores optimistic scores in favor of pessimistic scores when aggregating over folds.
# Note, if using tree methods, multiple depths may be fitted, in which case regardless of this toml setting,
# only features that are kept for all depths are kept by feature selection.
# If interpretability >= config toml value of fs_data_vary_for_interpretability, then half the data (or setting of fs_data_frac)
# is used as another fit, in which case regardless of this toml setting,
# only features that are kept for all data sizes are kept by feature selection.
# Note: This is disabled for small data, since arbitrary slices of small data can lead to disjoint features being important and only aggregated average behavior has signal.
#
#stabilize_fs = true

# Whether final pipeline uses fixed features for some transformers that would normally
# perform search, such as InteractionsTransformer.
# Use what was learned from tuning and evolution (True) or freshly search for new features (False).
# This can give a more stable pipeline, especially for small data or when using the interaction transformer
# as a pretransformer in a multi-layer pipeline.
#
#stabilize_features = true

#fraction_std_bootstrap_ladder_factor = 0.01

#bootstrap_ladder_samples_limit = 10

#features_allowed_by_interpretability = "{1: 10000000, 2: 10000, 3: 1000, 4: 500, 5: 300, 6: 200, 7: 150, 8: 100, 9: 80, 10: 50, 11: 50, 12: 50, 13: 50}"

#nfeatures_max_threshold = 200

#rdelta_percent_score_penalty_per_feature_by_interpretability = "{1: 0.0, 2: 0.1, 3: 1.0, 4: 2.0, 5: 5.0, 6: 10.0, 7: 20.0, 8: 30.0, 9: 50.0, 10: 100.0, 11: 100.0, 12: 100.0, 13: 100.0}"

#drop_low_meta_weights = true

#meta_weight_allowed_by_interpretability = "{1: 1E-7, 2: 1E-5, 3: 1E-4, 4: 1E-3, 5: 1E-2, 6: 0.03, 7: 0.05, 8: 0.08, 9: 0.10, 10: 0.15, 11: 0.15, 12: 0.15, 13: 0.15}"

#meta_weight_allowed_for_reference = 1.0

#feature_cost_mean_interp_for_penalty = 5

#features_cost_per_interp = 0.25

#varimp_threshold_shift_report = 0.3

#apply_featuregene_limits_after_tuning = true

#remove_scored_0gain_genes_in_postprocessing_above_interpretability = 13

#remove_scored_0gain_genes_in_postprocessing_above_interpretability_final_population = 2

#remove_scored_by_threshold_genes_in_postprocessing_above_interpretability_final_population = 7

#show_full_pipeline_details = false

#num_transformed_features_per_pipeline_show = 10

#fs_data_vary_for_interpretability = 7

#fs_data_frac = 0.5

#many_columns_count = 400

#columns_count_interpretable = 200

#round_up_indivs_for_busy_gpus = true

#tuning_share_varimp = "best"

# Graphviz is an optional requirement for native installations (RPM/DEB/Tar-SH, outside of Docker) to convert .dot files into .png files for pipeline visualizations as part of experiment artifacts
#require_graphviz = true

# Unnormalized probability to add genes or instances of transformers with specific attributes.
# If no genes can be added, other mutations
# (mutating model hyperparameters, pruning genes, pruning features, etc.) are attempted.
#
#prob_add_genes = 0.5

# Unnormalized probability, conditioned on prob_add_genes,
# to add genes or instances of transformers with specific attributes
# that have shown to be beneficial to other individuals within the population.
#
#prob_addbest_genes = 0.5

# Unnormalized probability to prune genes or instances of transformers with specific attributes.
# If a variety of transformers with many attributes exists, default value is reasonable.
# However, if one has a fixed set of transformers that should not change, or no new transformer attributes
# can be added, then setting this to 0.0 is reasonable to avoid undesired loss of transformations.
#
#prob_prune_genes = 0.5

# Unnormalized probability to change model hyperparameters.
#
#prob_perturb_xgb = 0.25

# Unnormalized probability to prune features that have low variable importance, as opposed to pruning entire instances of genes/transformers when prob_prune_genes is used.
# If prob_prune_genes=0.0 and prob_prune_by_features==0.0 and prob_prune_by_top_features==0.0, then genes/transformers and transformed features are only pruned if they are:
# 1) inconsistent with the genome
# 2) inconsistent with the column data types
# 3) had no signal (for interactions and cv_in_cv for target encoding)
# 4) transformation failed
# E.g. these toml settings are then ignored:
# 1) ngenes_max
# 2) limit_features_by_interpretability
# 3) varimp_threshold_at_interpretability_10
# 4) features_allowed_by_interpretability
# 5) remove_scored_0gain_genes_in_postprocessing_above_interpretability
# 6) nfeatures_max_threshold
# 7) features_cost_per_interp
# So this acts similarly to no_drop_features, except that no_drop_features also applies to shift and leak detection, and prevents dropping of constant and ID columns.
#prob_prune_by_features = 0.25

# Unnormalized probability to prune features that have high variable importance,
# in case they have high gain but negative performance on validation and would otherwise maintain poor validation scores.
# Similar to prob_prune_by_features but for high-gain features.
#prob_prune_by_top_features = 0.25

# Maximum number of high-gain features to prune for each mutation call, to control behavior of prob_prune_by_top_features.
#max_num_prune_by_top_features = 1

# Like prob_prune_genes but only for pretransformers, i.e. those transformers in layers except the last layer that connects to the model.
#prob_prune_pretransformer_genes = 0.5

# Like prob_prune_by_features but only for pretransformers, i.e. those transformers in layers except the last layer that connects to the model.
#prob_prune_pretransformer_by_features = 0.25

# Like prob_prune_by_top_features but only for pretransformers, i.e. those transformers in layers except the last layer that connects to the model.
#prob_prune_pretransformer_by_top_features = 0.25

# When doing restart, retrain, refit, reset these individual parameters to new toml values.
#override_individual_from_toml_list = "['prob_perturb_xgb', 'prob_add_genes', 'prob_addbest_genes', 'prob_prune_genes', 'prob_prune_by_features', 'prob_prune_by_top_features', 'prob_prune_pretransformer_genes', 'prob_prune_pretransformer_by_features', 'prob_prune_pretransformer_by_top_features']"
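As an illustrative sketch, one way to freeze the transformer set during evolution while still tuning models and pruning weak features:

```toml
# Example: no gene additions or prunings; mutations only perturb
# model hyperparameters or prune low-importance features.
prob_add_genes = 0.0
prob_prune_genes = 0.0
prob_perturb_xgb = 0.5
prob_prune_by_features = 0.25
```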

# Max. number of trees to use for all tree model predictions. For testing, when predictions don't matter. -1 means disabled.
#fast_approx_max_num_trees_ever = -1

# Max. number of trees to use for fast_approx=True (e.g., for AutoDoc/MLI).
#fast_approx_num_trees = 250

# Whether to speed up fast_approx=True further, by using only one fold out of all cross-validation folds (e.g., for AutoDoc/MLI).
#fast_approx_do_one_fold = true

# Whether to speed up fast_approx=True further, by using only one model out of all ensemble models (e.g., for AutoDoc/MLI).
#fast_approx_do_one_model = false

# Max. number of trees to use for fast_approx_contribs=True (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_num_trees = 50

# Whether to speed up fast_approx_contribs=True further, by using only one fold out of all cross-validation folds (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_fold = true

# Whether to speed up fast_approx_contribs=True further, by using only one model out of all ensemble models (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_model = true

# Approximate interval between logging of progress updates when making predictions. >=0 to enable, -1 to disable.
#prediction_logging_interval = 300
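For example, a sketch that trades accuracy for speed in AutoDoc/MLI by tightening the fast-approximation settings (illustrative values):

```toml
# Example: fewer trees, one fold, and one ensemble model for
# fast_approx=True predictions.
fast_approx_num_trees = 100
fast_approx_do_one_fold = true
fast_approx_do_one_model = true
```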

# Whether to use exploit-explore logic like DAI 1.8.x. False will explore more.
#use_187_prob_logic = true

# Whether to enable cross-validated OneHotEncoding+LinearModel transformer
#enable_ohe_linear = false

#max_absolute_feature_expansion = 1000

#booster_for_fs_permute = "auto"

#model_class_name_for_fs_permute = "auto"

#switch_from_tree_to_lgbm_if_can = true

#model_class_name_for_shift = "auto"

#model_class_name_for_leakage = "auto"

#default_booster = "lightgbm"

#default_model_class_name = "LightGBMModel"

#num_as_cat_false_if_ohe = true

#no_ohe_try = true

# [DEPRECATED] Number of classes above which to include TensorFlow (if TensorFlow is enabled),
# even if not used exclusively.
# For small data this is decreased by tensorflow_num_classes_small_data_factor,
# and for bigger data, this is increased by tensorflow_num_classes_big_data_reduction_factor.
#tensorflow_added_num_classes_switch = 5

# [DEPRECATED] Number of classes above which to only use TensorFlow (if TensorFlow is enabled),
# instead of other models set to 'auto' (models set to 'on' are still used).
# Up to tensorflow_num_classes_switch_but_keep_lightgbm, keep LightGBM.
# If small data, this is increased by tensorflow_num_classes_small_data_factor.
#tensorflow_num_classes_switch = 10

#tensorflow_num_classes_switch_but_keep_lightgbm = 15

#tensorflow_num_classes_small_data_factor = 3

#tensorflow_num_classes_big_data_reduction_factor = 6

# Compute empirical prediction intervals (based on holdout predictions).
#prediction_intervals = true

# Confidence level for prediction intervals.
#prediction_intervals_alpha = 0.9
3116
3117# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# The new methods simulate error propagation over future predictions across horizons; they are not intended to be realistic model prediction patterns, as the model is not trained in an AR (auto-regressive) fashion.
# error_propagation: Assumes a normal distribution with std sigma, then sets bands as y_hat +/- z * sigma, inflating with horizon. Good when residuals are roughly Gaussian, but relies on a correct inflation model;
# bootstrap_simulation: Resamples historical residuals (with replacement) to simulate future errors, takes simulation percentiles per group/horizon and adds them to y_hat. Captures skew/heavy tails without parametric assumptions, but is computationally expensive and performance can drift if the error distribution shifts;
# monte_carlo_simulation: Fits a parametric error model (Gaussian), simulates many error draws, takes percentiles and adds them to y_hat. Smoother and more stable than bootstrap with limited data, but risks misspecification if the chosen distribution is wrong.
#prediction_intervals_simulation_method = ""

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# Sample size to simulate future errors, used by ``bootstrap_simulation`` and ``monte_carlo_simulation``.
#prediction_intervals_sampling_errors = 1000

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# A heuristic approach that greatly reduces the memory cost of the expensive join and group operations while remaining horizon-aware.
# Note: If buckets is 1, then only the fixed median of the entire horizon is used, so the horizon has no effect at all.
# If buckets is <= 0, then all horizons are considered.
# It is highly recommended to tune this parameter for the best tradeoff, as the experiment may become unstable or fail due to memory/CPU exhaustion, depending on the size of the training data.
#prediction_intervals_bin_horizon = 0

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# Controls the spread of error accumulation over the horizon:
# If == 1.0: intervals use the raw residual standard deviation, and the growth is based on the strong assumption of independent, constant residuals;
# If > 1.0: widens intervals (more conservative). Useful if your residuals underestimate the true predictive uncertainty;
# If < 1.0 (default 0.9): narrows intervals (sharper). Useful if the raw variance + growth is too pessimistic for your data.
#prediction_interval_monte_carlo_calibration_ratio = 0.9
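
# Example (hypothetical values): enable empirical prediction intervals with
# bootstrap simulation at a 90% confidence level. These values are illustrative,
# not recommendations:
# prediction_intervals = true
# prediction_intervals_alpha = 0.9
# prediction_intervals_simulation_method = "bootstrap_simulation"
# prediction_intervals_sampling_errors = 1000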

# Appends one extra output column with predicted target class (after the per-class probabilities).
# Uses argmax for multiclass, and the threshold defined by the optimal scorer controlled by the
# 'threshold_scorer' expert setting for binary problems. This setting controls the training, validation and test
# set predictions (if applicable) that are created by the experiment. MOJO, scoring pipeline and client APIs
# control this behavior via their own version of this parameter.
#pred_labels = true

# Class count above which the TextLin Transformer is not used.
#textlin_num_classes_switch = 5

#text_gene_dim_reduction_choices = "[50]"

#text_gene_max_ngram = "[1, 2, 3]"

# Max size (in tokens) of the vocabulary created during fitting of Tfidf/Count/Comatrix based text
# transformers (not CNN/BERT). If multiple values are provided, will use the first one for initial models, and use remaining
# values during parameter tuning and feature evolution. Values smaller than 10000 are recommended for speed,
# and a reasonable set of choices includes: 100, 1000, 5000, 10000, 50000, 100000, 500000.
# Note: If force_enable_text_comatrix_preprocess is set to True, then only a selective set of top vocabulary terms will be used due to computational and memory complexity.
#text_transformers_max_vocabulary_size = "[1000, 5000]"

# Enables caching of BERT embeddings by temporarily saving the embedding vectors to the experiment directory. Set to -1 to cache all text, set to 0 to disable caching.
#number_of_texts_to_cache_in_bert_transformer = -1

# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
# that training score (on training data, not holdout) and validation score differ no more than this absolute value
# (i.e., stop adding trees once abs(train_score - valid_score) > max_abs_score_delta_train_valid).
# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
# To disable, set to 0.0
#max_abs_score_delta_train_valid = 0.0

# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
# that training score (on training data, not holdout) and validation score differ no more than this relative value
# (i.e., stop adding trees once abs(train_score - valid_score) > max_rel_score_delta_train_valid * abs(train_score)).
# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
# To disable, set to 0.0
#max_rel_score_delta_train_valid = 0.0

# Whether to search for optimal lambda for given alpha for XGBoost GLM.
# If 'auto', disabled if training data has more rows * cols than final_pipeline_data_size or for multiclass experiments.
# Disabled always for ensemble_level = 0.
# Not always a good approach, can be slow for little payoff compared to grid search.
#
#glm_lambda_search = "auto"

# If XGBoost GLM lambda search is enabled, whether to do search by the eval metric (True)
# or using the actual DAI scorer (False).
#glm_lambda_search_by_eval_metric = false

#gbm_early_stopping_rounds_min = 1

#gbm_early_stopping_rounds_max = 10000000000

# Whether to enable early stopping threshold for LightGBM, varying by accuracy.
# Stops training once validation score changes by less than the threshold.
# This leads to fewer trees, usually avoiding wasteful trees, but may lower accuracy.
# However, it may also improve generalization by avoiding fine-tuning to validation set.
# 0 leads to a threshold of 0, i.e. disabled.
# > 0 means non-automatic mode using that *relative* value, scaled by the first tree's results of the metric, for any metric.
# -1 means always enabled, but the threshold itself is automatic (the lower the accuracy, the larger the threshold).
# -2 means fully automatic mode, i.e. disabled unless reduce_mojo_size is true. If true, the lower the accuracy, the larger the threshold.
# NOTE: Automatic threshold is set so relative value of metric's min_delta in LightGBM's callback for early stopping is:
# if accuracy <= 1:
# early_stopping_threshold = 1e-1
# elif accuracy <= 4:
# early_stopping_threshold = 1e-2
# elif accuracy <= 7:
# early_stopping_threshold = 1e-3
# elif accuracy <= 9:
# early_stopping_threshold = 1e-4
# else:
# early_stopping_threshold = 0
#
#enable_early_stopping_threshold = -2.0

#glm_optimal_refit = true

# Whether to force enable co-occurrence text preprocessing; only applicable to TextTransformer, default is False. Note: This setting will override the choice made from the Gene. Currently MOJO does not support the co-occurrence matrix operation.
#force_enable_text_comatrix_preprocess = false

# Window size of the neighboring vocabulary being counted during fitting of Co-Occurrence based text
# transformers (not CNN/BERT). If multiple values are provided, will use the first one for initial models, and use remaining
# values during parameter tuning and feature evolution. Values smaller than 5 are recommended for speed and memory,
# defaults are 3, 2, 4.
#text_gene_comatrix_window_size_choices = "[3, 2, 4]"

# Max. number of top variable importances to save per iteration (GUI can only display a max. of 14)
#max_varimp_to_save = 100

# Max. number of top variable importances to show in logs during feature evolution
#max_num_varimp_to_log = 10

# Max. number of top variable importance shifts to show in logs and GUI after final model built
#max_num_varimp_shift_to_log = 10

# Skipping just avoids the failed transformer.
# Sometimes Python multiprocessing swallows exceptions,
# so skipping and logging exceptions is also a more reliable way to handle them.
# A recipe can raise h2oaicore.systemutils.IgnoreError to ignore the error and avoid logging it.
# Features that fail are pruned from the individual.
# If that leaves no features in the individual, then backend tuning, feature/model tuning, final model building, etc.
# will still fail, since DAI should not continue if all features are in a failed state.
#
#skip_transformer_failures = true

# Skipping just avoids the failed model. Failures are logged depending upon detailed_skip_failure_messages_level.
# A recipe can raise h2oaicore.systemutils.IgnoreError to ignore the error and avoid logging it.
#
#skip_model_failures = true

# Skipping just avoids the failed scorer, if among many scorers. Failures are logged depending upon detailed_skip_failure_messages_level.
# A recipe can raise h2oaicore.systemutils.IgnoreError to ignore the error and avoid logging it.
# Default is True to avoid failing in, e.g., final model building due to a single scorer.
#
#skip_scorer_failures = true

# Skipping avoids the failed recipe. Failures are logged depending upon detailed_skip_failure_messages_level.
# Default is False because runtime data recipes run once at the start of the experiment and are expected to work by default.
#
#skip_data_recipe_failures = false

# Whether final model transformer failures can be skipped for layers above the first layer of a multi-layer pipeline.
#can_skip_final_upper_layer_failures = true

# How verbosely to log failure messages for failed and then skipped transformers or models.
# Full failures always go to disk as *.stack files,
# which upon completion of the experiment go into the details folder within the experiment log zip file.
#
#detailed_skip_failure_messages_level = 1

# Whether to not only log errors of recipes (models and transformers) but also show a high-level notification in the GUI.
#
#notify_failures = true

# Instructions for 'Add to config.toml via toml string' in GUI expert page
# Self-referential toml parameter, for setting any other toml parameters as a string of toml lines separated by ``\n`` (spaces around ``\n`` are ok).
# Useful when a toml parameter is not exposed in expert mode but per-experiment control is desired.
# Setting this will override all other choices.
# In the expert page, each time expert options are saved, the new state is set without memory of any prior settings.
# The entered item is a fully compliant toml string that would be processed directly by toml.load().
# One should include 2 double quotes around the entire setting, or double quotes need to be escaped.
# One enters the text into the expert page as follows:
# e.g. ``enable_glm="off"
# enable_xgboost_gbm="off"
# enable_lightgbm="on"``
# e.g. ``""enable_glm="off"
# enable_xgboost_gbm="off"
# enable_lightgbm="off"""``
# e.g. ``fixed_num_individuals=4``
# e.g. ``params_lightgbm="{'objective':'poisson'}"``
# e.g. ``""params_lightgbm="{'objective':'poisson'}"""``
# e.g. ``max_cores=10
# data_precision="float32"
# max_rows_feature_evolution=50000000000
# ensemble_accuracy_switch=11
# feature_engineering_effort=1
# target_transformer="identity"
# tournament_feature_style_accuracy_switch=5``
# e.g. ""max_cores=10
# data_precision="float32"
# max_rows_feature_evolution=50000000000
# ensemble_accuracy_switch=11
# feature_engineering_effort=1
# target_transformer="identity"
# tournament_feature_style_accuracy_switch=5""
# If you see "toml.TomlDecodeError", then ensure the toml is formatted correctly.
# When set in the expert page of an experiment, these changes only affect that experiment and not the server.
# Usually this should be kept as an empty string in this toml file.
#
#config_overrides = ""

# Whether to dump every scored individual's variable importance to csv/tabulated/json files. Produces files like:
# individual_scored_id%d.iter%d.<hash>.features.txt for transformed features.
# individual_scored_id%d.iter%d.<hash>.features_orig.txt for original features.
# individual_scored_id%d.iter%d.<hash>.coefs.txt for absolute importance of transformed features.
# There are txt, tab.txt, and json formats for some files, and the "best_" prefix means it is the best individual for that iteration.
# The hash in the name matches the hash in the files produced by dump_modelparams_every_scored_indiv=true, which can be used to track mutation history.
#dump_varimp_every_scored_indiv = false

# Whether to dump every scored individual's model parameters to csv/tabulated/json files.
# Produces files like: individual_scored.params.[txt, csv, json].
# Each individual has a hash that matches the hash in the filenames produced if dump_varimp_every_scored_indiv=true,
# and the "unchanging hash" is the first parent hash (None if that individual is the first parent itself).
# These hashes can be used to track the history of the mutations.
#
#dump_modelparams_every_scored_indiv = true

# Number of features to show in the model dump of every scored individual
#dump_modelparams_every_scored_indiv_feature_count = 3

# Number of past mutations to show in the model dump of every scored individual
#dump_modelparams_every_scored_indiv_mutation_count = 3

# Whether to append to a single file (false) or write separate files like individual_scored_id%d.iter%d*params* (true) when dumping model parameters for every scored individual
#dump_modelparams_separate_files = false

# Whether to dump every scored fold's timing and feature info to a *timings*.txt file
#
#dump_trans_timings = false

# Whether to delete preview timings if transformer timings were written
#delete_preview_trans_timings = true

# Attempt to create at most this many exemplars (actual rows behaving like cluster centroids) for the Aggregator
# algorithm in unsupervised experiment mode.
#
#unsupervised_aggregator_n_exemplars = 100

# Attempt to create at least this many clusters for the clustering algorithm in unsupervised experiment mode.
#
#unsupervised_clustering_min_clusters = 2

# Attempt to create no more than this many clusters for the clustering algorithm in unsupervised experiment mode.
#
#unsupervised_clustering_max_clusters = 10

#use_random_text_file = false

#runtime_estimation_train_frame = ""

#enable_bad_scorer = false

#debug_col_dict_prefix = ""

#return_early_debug_col_dict_prefix = false

#return_early_debug_preview = false

#wizard_random_attack = false

#wizard_enable_back_button = true

#wizard_deployment = ""

#wizard_repro_level = -1

#wizard_sample_size = 100000

#wizard_model = "rf"

# Maximum number of columns with which to start an experiment. This threshold exists to constrain the complexity and the length of Driverless AI's processes.
#wizard_max_cols = 100000

# How many seconds to allow preview to take for Wizard.
#wizard_timeout_preview = 30

# How many seconds to allow leakage detection to take for Wizard.
#wizard_timeout_leakage = 60

# How many seconds to allow duplicate row detection to take for Wizard.
#wizard_timeout_dups = 30

# How many seconds to allow variable importance calculation to take for Wizard.
#wizard_timeout_varimp = 30

# How many seconds to allow dataframe schema calculation to take for Wizard.
#wizard_timeout_schema = 60

#max_reorder_experiments = 100

# Default upper bound on the number of experiments owned per user. A negative value means infinite quota.
#default_experiments_quota_per_user = -1

# Dictionary of user:quota values for experiment quotas; overrides the above default for the specified set of users,
# e.g: ``override_experiments_quota_for_users="{'user1':10,'user2':20,'user3':30}"`` to set user1 with 10 experiments quota,
# user2 with 20 experiments quota and user3 with 30 experiments quota.
#
#override_experiments_quota_for_users = "{}"

# authentication_method
# unvalidated: Accepts user id and password. Does not validate password.
# none: Does not ask for user id or password. Authenticated as admin.
# openid: Uses an OpenID Connect provider for authentication. See additional OpenID settings below.
# oidc: Renewed OpenID Connect authentication using authorization code flow. See additional OpenID settings below.
# pam: Accepts user id and password. Validates user with operating system.
# ldap: Accepts user id and password. Validates against an LDAP server. Look
# for additional settings under LDAP settings.
# local: Accepts a user id and password. Validated against an htpasswd file provided in local_htpasswd_file.
# ibm_spectrum_conductor: Authenticate with the IBM Conductor auth API.
# tls_certificate: Authenticate with Driverless by providing a TLS certificate.
# jwt: Authenticate by JWT obtained from the request metadata.
#
#authentication_method = "unvalidated"
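
# Example (illustrative): local authentication validated against an htpasswd
# file; the path below is hypothetical (see local_htpasswd_file further down):
# authentication_method = "local"
# local_htpasswd_file = "/config/htpasswd"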

# Additional authentication methods that will be enabled for the clients. Login forms for each method will be available on the ``/login/<authentication_method>`` path. Comma-separated list.
#additional_authentication_methods = "[]"

# The default amount of time in hours before a user is signed out and must log in again. This setting is used when a default timeout value is not provided by ``authentication_method``.
#authentication_default_timeout_hours = 72.0

# When enabled, the user's session is automatically prolonged, even when they are not interacting directly with the application.
#authentication_gui_polling_prolongs_session = false

# OpenID Connect Settings:
# Refer to the OpenID Connect Basic Client Implementation Guide for details on how OpenID authentication flow works
# https://openid.net/specs/openid-connect-basic-1_0.html
# base server URI to the OpenID Provider server (ex: https://oidp.ourdomain.com)
#auth_openid_provider_base_uri = ""

# URI to pull OpenID config data from (you can extract most of required OpenID config from this url)
# usually located at: /auth/realms/master/.well-known/openid-configuration
#auth_openid_configuration_uri = ""

# URI to start authentication flow
#auth_openid_auth_uri = ""

# URI to make request for token after callback from OpenID server was received
#auth_openid_token_uri = ""

# URI to get user information once access_token has been acquired (ex: list of groups user belongs to will be provided here)
#auth_openid_userinfo_uri = ""

# URI to logout user
#auth_openid_logout_uri = ""
# callback URI that the OpenID provider will use to send the 'authentication_code'
# This is the OpenID callback endpoint in Driverless AI. Most OpenID providers need this to be HTTPS.
# (ex. https://driverless.ourdomain.com/openid/callback)
#auth_openid_redirect_uri = ""

# OAuth2 grant type (usually authorization_code for OpenID, can be access_token also)
#auth_openid_grant_type = ""

# OAuth2 response type (usually code)
#auth_openid_response_type = ""

# Client ID registered with OpenID provider
#auth_openid_client_id = ""

# Client secret provided by OpenID provider when registering Client ID
#auth_openid_client_secret = ""

# Scope of info (usually openid). Can be a space-delimited list of more than one; possible
# values are listed at https://openid.net/specs/openid-connect-basic-1_0.html#Scopes
#auth_openid_scope = ""

# What key in user_info JSON should we check to authorize user
#auth_openid_userinfo_auth_key = ""

# What value should the key have in user_info JSON in order to authorize user
#auth_openid_userinfo_auth_value = ""

# Key that specifies username in user_info JSON (we will use the value of this key as username in Driverless AI)
#auth_openid_userinfo_username_key = ""

# Quote method from urllib.parse used to encode payload dict in Authentication Request
#auth_openid_urlencode_quote_via = "quote"

# Key in Token Response JSON that holds the value for access token expiry
#auth_openid_access_token_expiry_key = "expires_in"

# Key in Token Response JSON that holds the value for refresh token expiry
#auth_openid_refresh_token_expiry_key = "refresh_expires_in"

# Expiration time in seconds for access token
#auth_openid_token_expiration_secs = 3600
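
# Example (hypothetical values): a minimal OpenID Connect setup using the
# authorization code flow; all URIs, the client id, and the secret are placeholders:
# authentication_method = "openid"
# auth_openid_provider_base_uri = "https://oidp.ourdomain.com"
# auth_openid_configuration_uri = "/auth/realms/master/.well-known/openid-configuration"
# auth_openid_redirect_uri = "https://driverless.ourdomain.com/openid/callback"
# auth_openid_grant_type = "authorization_code"
# auth_openid_response_type = "code"
# auth_openid_client_id = "driverless-ai"
# auth_openid_client_secret = "<client-secret>"
# auth_openid_scope = "openid profile"
# auth_openid_userinfo_username_key = "preferred_username"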

# Enables advanced matching for OpenID Connect authentication.
# When enabled, an ObjectPath (<http://objectpath.org/>) expression is used to
# evaluate the user identity.
#
#auth_openid_use_objectpath_match = false

# ObjectPath (<http://objectpath.org/>) expression that will be used
# to evaluate whether the user is allowed to log in to Driverless.
# Any expression that evaluates to True means the user is allowed to log in.
# Examples:
# Simple claim equality: `$.our_claim is "our_value"`
# List of claims contains required value: `"expected_role" in @.roles`
#
#auth_openid_use_objectpath_expression = ""

# Sets the token introspection URL for OpenID Connect authentication. (needs to be an absolute URL) Needs to be set when API token introspection is enabled. It is used to get the token TTL when set and the IdP does not provide an expires_in field in the token endpoint response.
#auth_openid_token_introspection_url = ""

# Sets a URL where the user is redirected after being logged out, when set. (needs to be an absolute URL)
#auth_openid_end_session_endpoint_url = ""

# If set, server will use these scopes when it asks for the token on the login. (space separated list)
#auth_openid_default_scopes = ""

# Specifies the source from which user identity and username is retrieved.
# Currently supported sources are:
# user_info: Retrieves username from UserInfo endpoint response
# id_token: Retrieves username from ID Token using
# `auth_openid_id_token_username_key` claim
#
#auth_oidc_identity_source = "userinfo"

# Claim of the preferred username in a message holding the user identity, which will be used as the username in the application. The user identity source is specified by `auth_oidc_identity_source`, and can be e.g. the UserInfo endpoint response or the ID Token.
#auth_oidc_username_claim = ""
# OpenID-Connect Issuer URL, which is used for automatic provider info discovery. E.g. https://login.microsoftonline.com/<client-id>/v2.0
#auth_oidc_issuer_url = ""

# OpenID-Connect Token endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
#auth_oidc_token_endpoint_url = ""

# OpenID-Connect Token introspection endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
#auth_oidc_introspection_endpoint_url = ""

# Absolute URL to which user is redirected, after they log out from the application, in case OIDC authentication is used. Usually this is absolute URL of DriverlessAI Login page e.g. https://1.2.3.4:12345/login
#auth_oidc_post_logout_url = ""

# Key-value mapping of extra HTTP query parameters in an OIDC authorization request.
#auth_oidc_authorization_query_params = "{}"

# When set to True, will skip cert verification.
#auth_oidc_skip_cert_verification = false

# When set, will use this value as the location of the CA cert; this takes precedence over auth_oidc_skip_cert_verification.
#auth_oidc_ca_cert_location = ""

# Enables the option to use a Bearer token for authentication with the RPC endpoint.
#api_token_introspection_enabled = false

# Sets the method that is used to introspect the bearer token.
# OAUTH2_TOKEN_INTROSPECTION: Uses the OAuth 2.0 Token Introspection (RFC 7662)
# endpoint to introspect the bearer token.
# This is useful when 'openid' is used as the authentication method.
# Uses 'auth_openid_client_id' and 'auth_openid_client_secret' to
# authenticate with the authorization server and
# `auth_openid_token_introspection_url` to perform the introspection.
#
#api_token_introspection_method = "OAUTH2_TOKEN_INTROSPECTION"

# Sets the minimum set of scopes that the access token needs to have
# in order to pass the introspection. Space separated.
# This is passed to the introspection endpoint and also verified after the response
# for servers that don't enforce scopes.
# Keeping this empty turns the verification off.
#
#api_token_oauth2_scopes = ""

# Which field of the response returned by the token introspection endpoint should be used as a username.
#api_token_oauth2_username_field_name = "username"

# Enables the option to initiate a PKCE flow from the UI in order to obtain tokens usable with Driverless clients
#oauth2_client_tokens_enabled = false

# Sets up the client id that will be used in the OAuth 2.0 Authorization Code Flow to obtain the tokens. Client needs to be public and be able to use PKCE with S256 code challenge.
#oauth2_client_tokens_client_id = ""

# Sets up the absolute url to the authorize endpoint.
#oauth2_client_tokens_authorize_url = ""

# Sets up the absolute url to the token endpoint.
#oauth2_client_tokens_token_url = ""

# Sets up the absolute url to the token introspection endpoint. It's displayed in the UI so that clients can inspect the token expiration.
#oauth2_client_tokens_introspection_url = ""

# Sets up the absolute redirect url where Driverless handles the redirect part of the Authorization Code Flow. This is <Driverless base url>/oauth2/client_token
#oauth2_client_tokens_redirect_url = ""

# Sets up the scope for the requested tokens. Space separated list.
#oauth2_client_tokens_scope = "openid profile ai.h2o.storage"

# ldap server domain or ip
#ldap_server = ""

# ldap server port
#ldap_port = ""

# Complete DN of the LDAP bind user
#ldap_bind_dn = ""

# Password for the LDAP bind
#ldap_bind_password = ""

# Provide Cert file location
#ldap_tls_file = ""

# Set to true to use SSL, false otherwise
#ldap_use_ssl = false

# the location in the DIT where the search will start
#ldap_search_base = ""

# A string that describes what you are searching for. You can use Python substitution to have this constructed dynamically. (only {{DAI_USERNAME}} is supported)
#ldap_search_filter = ""

# ldap attributes to return from search
#ldap_search_attributes = ""

# specify key to find user name
#ldap_user_name_attribute = ""

# When using this recipe, needs to be set to "1"
#ldap_recipe = "0"
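
# Example (hypothetical values): LDAP authentication with a bind user; the
# server, DNs, and filter below are placeholders:
# authentication_method = "ldap"
# ldap_server = "ldap.ourdomain.com"
# ldap_port = "389"
# ldap_bind_dn = "cn=admin,dc=ourdomain,dc=com"
# ldap_bind_password = "<bind-password>"
# ldap_search_base = "ou=people,dc=ourdomain,dc=com"
# ldap_search_filter = "(&(objectClass=person)(uid={{DAI_USERNAME}}))"
# ldap_user_name_attribute = "uid"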

# Deprecated, do not use
#ldap_user_prefix = ""

# Deprecated, use ldap_bind_dn
#ldap_search_user_id = ""

# Deprecated, use ldap_bind_password
#ldap_search_password = ""

# Deprecated, use ldap_search_base
#ldap_ou_dn = ""

# Deprecated, use ldap_base_dn
#ldap_dc = ""

# Deprecated, use ldap_search_base
#ldap_base_dn = ""

# Deprecated, use ldap_search_filter
#ldap_base_filter = ""

# Path to the CRL file that will be used to verify client certificate.
#auth_tls_crl_file = ""

# Which field of the subject will be used as the source for the username or other values used for further validation.
#auth_tls_subject_field = "CN"

# Regular expression that will be used to parse subject field to obtain the username or other values used for further validation.
#auth_tls_field_parse_regexp = "(?P<username>.*)"

# Sets up how the user identity is obtained
# REGEXP_ONLY: Will use 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
# to extract the username from the client certificate.
# LDAP_LOOKUP: Will use an LDAP server to look up the username.
# 'auth_tls_ldap_server', 'auth_tls_ldap_port',
# 'auth_tls_ldap_use_ssl', 'auth_tls_ldap_tls_file',
# 'auth_tls_ldap_bind_dn', 'auth_tls_ldap_bind_password'
# options are used to establish the connection with the LDAP server.
# 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
# options are used to parse the certificate.
# 'auth_tls_ldap_search_base', 'auth_tls_ldap_search_filter', and
# 'auth_tls_ldap_username_attribute' options are used to do the
# lookup.
#
#auth_tls_user_lookup = "REGEXP_ONLY"

# Hostname or IP address of the LDAP server used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_server = ""

# Port of the LDAP server used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_port = ""

# Whether to use SSL when connecting to the LDAP server used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_use_ssl = false

# Path to the SSL certificate used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_tls_file = ""

# Complete DN of the LDAP bind user used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_bind_dn = ""

# Password for the LDAP bind used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_bind_password = ""

# Location in the DIT where the search will start used with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_search_base = ""

# LDAP filter that will be used to look up the user
# with LDAP_LOOKUP with 'tls_certificate' authentication method.
# Can be built dynamically using the named capturing groups from the
# 'auth_tls_field_parse_regexp' for substitution.
# Example:
# ``auth_tls_field_parse_regexp="\w+ (?P<id>\d+)"``
# ``auth_tls_ldap_search_filter="(&(objectClass=person)(id={{id}}))"``
#
#auth_tls_ldap_search_filter = ""

# Specifies which LDAP record attribute will be used as the username with LDAP_LOOKUP with 'tls_certificate' authentication method.
#auth_tls_ldap_username_attribute = ""

# Sets an optional additional lookup filter that is performed after the
# user is found. This can be used, for example, to check whether the user is a member of
# a particular group.
# The filter can be built dynamically from the attributes returned by the lookup.
# Authorization fails when the search does not return any entry. If one or more
# entries are returned, authorization succeeds.
# Example:
# ``auth_tls_field_parse_regexp="\w+ (?P<id>\d+)"``
# ``ldap_search_filter="(&(objectClass=person)(id={{id}}))"``
# ``auth_tls_ldap_authorization_lookup_filter="(&(objectClass=group)(member=uid={{uid}},dc=example,dc=com))"``
# If this option is empty, no additional lookup is done and a successful user
# lookup is enough to authorize the user.
#
#auth_tls_ldap_authorization_lookup_filter = ""

# Base DN where to start the Authorization lookup. Used when 'auth_tls_ldap_authorization_lookup_filter' is set.
#auth_tls_ldap_authorization_search_base = ""

3735# Sets up the way how the token will picked from the request
3736# COOKIE: Will use 'auth_jwt_cookie_name' cookie content parsed with
3737# 'auth_jwt_source_parse_regexp' to obtain the token content.
3738# HEADER: Will use 'auth_jwt_header_name' header value parsed with
3739# 'auth_jwt_source_parse_regexp' to obtain the token content.
3740#
3741#auth_jwt_token_source = "HEADER"
3742
3743# Specifies the name of the cookie that will be used to obtain the JWT.
3744#auth_jwt_cookie_name = ""
3745
3746# Specifies the name of the HTTP header that will be used to obtain the JWT.
3747#auth_jwt_header_name = ""
3748
3749# Regular expression that will be used to parse the JWT source. The expression is in Python syntax and must contain a named group 'token' capturing the token value.
3750#auth_jwt_source_parse_regexp = "(?P<token>.*)"
3751
3752# Which JWT claim will be used as username for Driverless.
3753#auth_jwt_username_claim_name = "sub"
3754
3755# Whether to verify the signature of the JWT.
3756#auth_jwt_verify = true
3757
3758# Signature algorithm that will be used to verify the signature according to RFC 7518.
3759#auth_jwt_algorithm = "HS256"
3760
3761# Specifies the secret content for HMAC or public key for RSA and DSA signature algorithms.
3762#auth_jwt_secret = ""
3763
3764# Number of seconds for which a JWT can still be accepted after it has expired.
3765#auth_jwt_exp_leeway_seconds = 0
3766
3767# List of accepted 'aud' claims for the JWTs. When empty, any audience is accepted.
3768#auth_jwt_required_audience = "[]"
3769
3770# Value of the 'iss' claim that JWTs need to have in order to be accepted.
3771#auth_jwt_required_issuer = ""
3772
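# A minimal sketch (hypothetical header name and secret) combining the JWT
# settings above: accept a JWT from an "Authorization: Bearer <token>" header
# and verify it with a shared HMAC secret.
# auth_jwt_token_source = "HEADER"
# auth_jwt_header_name = "Authorization"
# auth_jwt_source_parse_regexp = "Bearer (?P<token>.*)"
# auth_jwt_username_claim_name = "sub"
# auth_jwt_verify = true
# auth_jwt_algorithm = "HS256"
# auth_jwt_secret = "change-me-shared-secret"
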
3773# Local password file
3774# Generating an htpasswd file: see syntax below.
3775# ``htpasswd -B '<location_to_place_htpasswd_file>' '<username>'``
3776# note: -B forces use of bcrypt, a secure hashing algorithm.
3777#local_htpasswd_file = ""
3778
3779# Specify the name of the report.
3780#autodoc_report_name = "report"
3781
3782# AutoDoc template path. Provide the full path to your custom AutoDoc template or leave as 'default' to generate the standard AutoDoc.
3783#autodoc_template = ""
3784
3785# Location of the additional AutoDoc templates
3786#autodoc_additional_template_folder = ""
3787
3788# Specify the AutoDoc output type.
3789#autodoc_output_type = "docx"
3790
3791# Specify the type of sub-templates to use.
3792# Options are 'auto', 'docx' or 'md'.
3793#autodoc_subtemplate_type = "auto"
3794
3795# Specify the maximum number of classes in the confusion
3796# matrix.
3797#autodoc_max_cm_size = 10
3798
3799# Specify the number of top features to display in
3800# the document. Setting to -1 disables this restriction.
3801#autodoc_num_features = 50
3802
3803# Specify the minimum relative importance in order
3804# for a feature to be displayed. autodoc_min_relative_importance
3805# must be a float >= 0 and <= 1.
3806#autodoc_min_relative_importance = 0.003
3807
3808# Whether to compute permutation based feature
3809# importance.
3810#autodoc_include_permutation_feature_importance = false
3811
3812# Number of permutations to make per feature when computing
3813# feature importance.
3814#autodoc_feature_importance_num_perm = 1
3815
3816# Name of the scorer to be used to calculate feature
3817# importance. Leave blank to use experiments default scorer.
3818#autodoc_feature_importance_scorer = ""
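
# A minimal sketch (hypothetical values) combining the three settings above to
# enable permutation-based feature importance with 5 permutations per feature,
# scored with AUC:
# autodoc_include_permutation_feature_importance = true
# autodoc_feature_importance_num_perm = 5
# autodoc_feature_importance_scorer = "AUC"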
3819
3820# The autodoc_pd_max_rows configuration controls the
3821# number of rows shown for the partial dependence plots (PDP) and Shapley
3822# values summary plot in the AutoDoc. Random sampling is used for
3823# datasets with more than the autodoc_pd_max_rows limit.
3824#autodoc_pd_max_rows = 10000
3825
3826# Maximum number of seconds partial dependence computation
3827# can take when generating the report. Set to -1 for no time limit.
3829
3830# Whether to enable fast approximation for predictions that are needed for the
3831# generation of partial dependence plots. Can help when you want to create many
3832# PDP plots in a short time. The amount of approximation is controlled by the
3833# fast_approx_num_trees, fast_approx_do_one_fold, and fast_approx_do_one_model experiment expert settings.
3834#
3835#autodoc_pd_fast_approx = true
3836
3837# Max number of unique values for integer/real columns to be treated as categoricals (test applies to first statistical_threshold_data_size_small rows only).
3838# Similar to max_int_as_cat_uniques used for experiments, but used here to control PDP generation.
3839#autodoc_pd_max_int_as_cat_uniques = 50
3840
3841# Number of standard deviations outside of the range of
3842# a column to include in partial dependence plots. This shows how the
3843# model will react to data it has not seen before.
3844#autodoc_out_of_range = 3
3845
3846# Specify the number of rows to include in PDP and ICE plot
3847# if individual rows are not specified.
3848#autodoc_num_rows = 0
3849
3850# Whether to include population stability index if
3851# experiment is binary classification/regression.
3852#autodoc_population_stability_index = false
3853
3854# Number of quantiles to use for the population stability
3855# index.
3856#autodoc_population_stability_index_n_quantiles = 10
3857
3858# Whether to include prediction statistics information if
3859# experiment is binary classification/regression.
3860#autodoc_prediction_stats = false
3861
3862# Number of quantiles to use for prediction statistics.
3863#autodoc_prediction_stats_n_quantiles = 20
3864
3865# Whether to include response rates information if
3866# experiment is binary classification.
3867#autodoc_response_rate = false
3868
3869# Number of quantiles to use for response rates
3870# information.
3871#autodoc_response_rate_n_quantiles = 10
3872
3873# Whether to show the Gini Plot.
3874#autodoc_gini_plot = false
3875
3876# Show Shapley values results in the AutoDoc.
3877#autodoc_enable_shapley_values = true
3878
3879# The number of features in a KLIME global GLM coefficients
3880# table. Must be an integer greater than 0, or -1. To
3881# show all features, set to -1.
3882#autodoc_global_klime_num_features = 10
3883
3884# Set the number of KLIME global GLM coefficients tables. Set
3885# to 1 to show one table with coefficients sorted by absolute
3886# value. Set to 2 to show two tables: one with the top positive
3887# coefficients and one with the top negative coefficients.
3888#autodoc_global_klime_num_tables = 1
3889
3890# Number of features to be shown in the data summary. Value
3891# must be an integer. Values lower than 1, e.g. 0 or -1, indicate that
3892# all columns should be shown.
3893#autodoc_data_summary_col_num = -1
3894
3895# Whether to show all config settings. If False, only
3896# the changed settings (config overrides) are listed, otherwise all
3897# settings are listed.
3898#autodoc_list_all_config_settings = false
3899
3900# Line length of the keras model architecture summary. Must
3901# be an integer greater than 0, or -1. To use the default line length, set
3902# the value to -1.
3903#autodoc_keras_summary_line_length = -1
3904
3905# Maximum number of lines shown for advanced transformer
3906# architecture in the Feature section. Note that the full architecture
3907# can be found in the Appendix.
3908#autodoc_transformer_architecture_max_lines = 30
3909
3910# Show full NLP/Image transformer architecture in
3911# the Appendix.
3912#autodoc_full_architecture_in_appendix = false
3913
3914# Specify whether to show the full glm coefficient
3915# table(s) in the appendix. coef_table_appendix_results_table must be
3916# a boolean: True to show tables in the appendix, False to not show
3917# them.
3918#autodoc_coef_table_appendix_results_table = false
3919
3920# Set the number of models for which a glm coefficients
3921# table is shown in the AutoDoc. coef_table_num_models must
3922# be -1 or an integer >= 1 (-1 shows all models).
3923#autodoc_coef_table_num_models = 1
3924
3925# Set the number of folds per model for which a glm
3926# coefficients table is shown in the AutoDoc.
3927# coef_table_num_folds must be -1 or an integer >= 1
3928# (-1 shows all folds per model).
3929#autodoc_coef_table_num_folds = -1
3930
3931# Set the number of coefficients to show within a glm
3932# coefficients table in the AutoDoc. coef_table_num_coef controls
3933# the number of rows shown in a glm table and must be -1 or
3934# an integer >= 1 (-1 shows all coefficients).
3935#autodoc_coef_table_num_coef = 50
3936
3937# Set the number of classes to show within a glm
3938# coefficients table in the AutoDoc. coef_table_num_classes controls
3939# the number of class-columns shown in a glm table and must be -1 or
3940# an integer >= 4 (-1 shows all classes).
3941#autodoc_coef_table_num_classes = 9
3942
3943# When histogram plots are available: The number of
3944# top (default 10) features for which to show histograms.
3945#autodoc_num_histogram_plots = 10
3946
3947#pdp_max_threads = -1
3948
3949# If True, forces AutoDoc to run only on the main server, not on remote workers, in a multi-node setup.
3950#autodoc_force_singlenode = false
3951
3952# IP address of the autoviz process.
3953#vis_server_ip = "127.0.0.1"
3954
3955# Port of the autoviz process.
3956#vis_server_port = 12346
3957
3958# Maximum number of columns Autoviz will work with.
3959# If the dataset has more columns than this number,
3960# Autoviz will pick columns randomly, prioritizing numerical columns.
3961#
3962#autoviz_max_num_columns = 50
3963
3964#autoviz_max_aggregated_rows = 500
3965
3966# When enabled, experiment will try to use feature transformations recommended by Autoviz
3967#autoviz_enable_recommendations = true
3968
3969# Key-value pairs of column names, and transformations that Autoviz recommended
3970#autoviz_recommended_transformation = "{}"
3971
3972#autoviz_enable_transformer_acceptance_tests = false
3973
3974# Enable custom recipes.
3975#enable_custom_recipes = true
3976
3977# Enable uploading of custom recipes from local file system.
3978#enable_custom_recipes_upload = true
3979
3980# Enable downloading of custom recipes from external URL.
3981#enable_custom_recipes_from_url = true
3982
3983# Allow uploaded recipe files to be zip archives containing custom recipe(s) in the root folder,
3984# while any other code or auxiliary files must be in a sub-folder.
3985#
3986#enable_custom_recipes_from_zip = true
3987
3988#must_have_custom_transformers = false
3989
3990#must_have_custom_transformers_2 = false
3991
3992#must_have_custom_transformers_3 = false
3993
3994#must_have_custom_models = false
3995
3996#must_have_custom_scorers = false
3997
3998# When set to true, enables downloading custom recipes' third-party packages from the web; otherwise, the Python environment will be transferred from the main worker.
3999#enable_recreate_custom_recipes_env = true
4000
4001#extra_migration_custom_recipes_missing_modules = false
4002
4003# Include custom recipes in default inclusion lists (warning: enables all custom recipes)
4004#include_custom_recipes_by_default = false
4005
4006#force_include_custom_recipes_by_default = false
4007
4008# Whether to enable use of the H2O recipe server. In some cases, the recipe server (started at DAI startup) may enter an unstable state, which might affect other experiments. In that case, one can avoid triggering use of the recipe server by setting this to false.
4009#enable_h2o_recipes = true
4010
4011# URL of H2O instance for use by transformers, models, or scorers.
4012#h2o_recipes_url = "None"
4013
4014# IP of H2O instance for use by transformers, models, or scorers.
4015#h2o_recipes_ip = "None"
4016
4017# Port of H2O instance for use by transformers, models, or scorers. No other instances may be on that port or the next port.
4018#h2o_recipes_port = 50361
4019
4020# Name of H2O instance for use by transformers, models, or scorers.
4021#h2o_recipes_name = "None"
4022
4023# Number of threads for H2O instance for use by transformers, models, or scorers. -1 for all.
4024#h2o_recipes_nthreads = 8
4025
4026# Log Level of H2O instance for use by transformers, models, or scorers.
4027#h2o_recipes_log_level = "None"
4028
4029# Maximum memory size of H2O instance for use by transformers, models, or scorers.
4030#h2o_recipes_max_mem_size = "None"
4031
4032# Minimum memory size of H2O instance for use by transformers, models, or scorers.
4033#h2o_recipes_min_mem_size = "None"
4034
4035# General user overrides of kwargs dict to pass to h2o.init() for recipe server.
4036#h2o_recipes_kwargs = "{}"
4037
4038# Number of trials to give h2o-3 recipe server to start.
4039#h2o_recipes_start_trials = 5
4040
4041# Number of seconds to sleep before starting h2o-3 recipe server.
4042#h2o_recipes_start_sleep0 = 1
4043
4044# Number of seconds to sleep between trials of starting h2o-3 recipe server.
4045#h2o_recipes_start_sleep = 5
4046
4047# Lock the source for recipes to a specific GitHub repo.
4048# If True, then all custom recipes must come from the repo specified in the setting custom_recipes_git_repo.
4049#custom_recipes_lock_to_git_repo = false
4050
4051# If custom_recipes_lock_to_git_repo is set to True, only this repo can be used to pull recipes from
4052#custom_recipes_git_repo = "https://github.com/h2oai/driverlessai-recipes"
4053
4054# Branch constraint for recipe source repo. Any branch allowed if unset or None
4055#custom_recipes_git_branch = "None"
4056
4057#custom_recipes_excluded_filenames_from_repo_download = "[]"
4058
4059#allow_old_recipes_use_datadir_as_data_directory = true
4060
4061# Internal helper to remember whether the recipe changed.
4062#last_recipe = ""
4063
4064# Dictionary to control recipes for each experiment and particular custom recipes.
4065# E.g. if inserting into the GUI as any toml string, can use:
4066# ``recipe_dict="{'key1': 2, 'key2': 'value2'}"``
4067# E.g. if putting into config.toml as a dict, can use:
4068# recipe_dict="{'key1': 2, 'key2': 'value2'}"
4069#
4070#recipe_dict = "{}"
4071
4072# Dictionary to control some mutation parameters.
4073# E.g. if inserting into the GUI as any toml string, can use:
4074# ``mutation_dict="{'key1': 2, 'key2': 'value2'}"``
4075# E.g. if putting into config.toml as a dict, can use:
4076# mutation_dict="{'key1': 2, 'key2': 'value2'}"
4077#
4078#mutation_dict = "{}"
4079
4080#enable_custom_transformers = true
4081
4082#enable_custom_pretransformers = true
4083
4084#enable_custom_models = true
4085
4086#enable_custom_scorers = true
4087
4088#enable_custom_datas = true
4089
4090#enable_custom_explainers = true
4091
4092#enable_custom_individuals = true
4093
4094#enable_connectors_recipes = true
4095
4096# Whether to validate recipe names provided in included lists, like included_models,
4097# or (if False) to just log a warning to the server logs and ignore any invalid recipe names.
4098#
4099#raise_on_invalid_included_list = false
4100
4101#contrib_relative_directory = "contrib"
4102
4103# Location of installed custom recipe packages (relative to data_directory).
4104# We will try to install packages dynamically, but this can also be done manually (before or after the server is started)
4105# (inside the running Docker instance if running Docker, or as the user the server is running as (e.g. the dai user) for a deb/tar native installation):
4106# PYTHONPATH=<full tmp dir>/<contrib_env_relative_directory>/lib/python3.6/site-packages/ <path to dai>dai-env.sh python -m pip install --prefix=<full tmp dir>/<contrib_env_relative_directory> <packagename> --upgrade --upgrade-strategy only-if-needed --log-file pip_log_file.log
4107# where <path to dai> is /opt/h2oai/dai/ for a native rpm/deb installation.
4108# Note: wheel files can also be installed if <packagename> is the name of a wheel file or archive.
4109#
4110#contrib_env_relative_directory = "contrib/env"
4111
4112# List of package versions to ignore. Useful when there is a small version change but the recipe is likely to still function with the old package version.
4113#
4114#ignore_package_version = "[]"
4115
4116# List of package versions to remove if a conflict is encountered. Useful when a new version of a package is wanted and old recipes are likely to still function.
4117#
4118#clobber_package_version = "['catboost', 'h2o_featurestore']"
4119
4120# Mapping of package versions to swap if a conflict is encountered.
4121# Useful when a new version of a package is wanted and old recipes are likely to still function.
4122# Also useful when old versions of recipes are not needed, even if they would no longer function.
4123#
4124#swap_package_version = "{'catboost==0.26.1': 'catboost==1.2.5', 'catboost==0.25.1': 'catboost==1.2.5', 'catboost==0.24.1': 'catboost==1.2.5', 'catboost==1.0.4': 'catboost==1.2.5', 'catboost==1.0.5': 'catboost==1.2.5', 'catboost==1.0.6': 'catboost==1.2.5', 'catboost': 'catboost==1.2.5'}"
4125
4126# If a user uploads a recipe with changes to package versions,
4127# allow upgrading of package versions.
4128# If changes to DAI-protected packages are attempted, try the pip_install_options toml with ['--no-deps'].
4129# Or, to ignore DAI's versions of packages entirely, try the pip_install_options toml with ['--ignore-installed'].
4130# Any other experiments relying on recipes with such packages will be affected; use with caution.
4131#allow_version_change_user_packages = false
4132
4133# Number of retries for the overall call to pip. Sometimes pip needs to be tried twice.
4134#pip_install_overall_retries = 2
4135
4136# pip install verbosity level (number of -v's given to pip, up to 3).
4137#pip_install_verbosity = 2
4138
4139# pip install timeout in seconds. Internet issues may mean you want to fail faster.
4140#pip_install_timeout = 15
4141
4142# pip install retry count
4143#pip_install_retries = 5
4144
4145# Whether to use DAI constraint file to help pip handle versions. pip can make mistakes and try to install updated packages for no reason.
4146#pip_install_use_constraint = true
4147
4148# pip install options: string of list of other options, e.g. ['--proxy', 'http://user:password@proxyserver:port']
4149#pip_install_options = "[]"
4150
4151# Whether to enable basic acceptance testing. Tests if can pickle the state, etc.
4152#enable_basic_acceptance_tests = true
4153
4154# Whether acceptance tests should run for custom genes / models / scorers / etc.
4155#enable_acceptance_tests = true
4156
4157#acceptance_tests_use_weather_data = false
4158
4159#acceptance_tests_mojo_benchmark = false
4160
4161# Whether to skip disabled recipes (True) or fail and show GUI message (False).
4162#skip_disabled_recipes = false
4163
4164# Minutes to wait until a recipe's acceptance testing is aborted. A recipe is rejected if acceptance
4165# testing is enabled and times out.
4166# A timeout may also be set for a specific recipe by defining a staticmethod called
4167# acceptance_test_timeout on the recipe class that returns the number of minutes to wait before timing out.
4168# This timeout does not include the time to install required packages.
4169#
4170#acceptance_test_timeout = 20.0
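
# A minimal sketch (hypothetical recipe class, Python) of the per-recipe
# override described above, giving one slow recipe a 40-minute timeout:
# class MySlowTransformer(CustomTransformer):
#     @staticmethod
#     def acceptance_test_timeout():
#         return 40.0  # minutes before this recipe's acceptance test is aborted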
4171
4172# Whether to re-check recipes during server startup (if per_user_directories == false)
4173# or during user login (if per_user_directories == true).
4174# If any inconsistency develops, the bad recipe will be removed while re-doing acceptance testing. This process
4175# can make start-up take a lot longer for many recipes, but in LTS releases the risk of recipes becoming out of date
4176# is low. If set to false, acceptance re-testing during server start is disabled, but note that previews or experiments may fail if those inconsistent recipes are used.
4177# Such inconsistencies can occur when the API for recipes changes or more aggressive acceptance tests are performed.
4178#
4179#contrib_reload_and_recheck_server_start = true
4180
4181# Whether to at least install packages required for recipes during server startup (if per_user_directories == false)
4182# or during user login (if per_user_directories == true).
4183# Important to keep True so any later use of recipes (that have global packages installed) will work.
4184#
4185#contrib_install_packages_server_start = true
4186
4187# Whether to re-check recipes after they are uploaded from the main server to a worker in multinode.
4188# Doing this for every task that has recipes is expensive.
4189#contrib_reload_and_recheck_worker_tasks = false
4190
4191#data_recipe_isolate = true
4192
4193# Space-separated string list of URLs for recipes that are loaded at user login time
4194#server_recipe_url = ""
4195
4196#num_rows_acceptance_test_custom_transformer = 200
4197
4198#num_rows_acceptance_test_custom_model = 100
4199
4200# List of recipes (per dict key by type) that are applicable for a given experiment. This is especially relevant
4201# for situations such as a new `experiment with same params`, where the user should be able to
4202# use the same recipe versions as the parent experiment if they wish to.
4203#
4204#recipe_activation = "{'transformers': [], 'models': [], 'scorers': [], 'data': [], 'individuals': []}"
4205
4206# File System Support
4207# upload : standard upload feature
4208# file : local file system/server file system
4209# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
4210# dtap : Blue Data Tap file system, remember to configure the DTap section below
4211# s3 : Amazon S3, optionally configure secret and access key below
4212# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
4213# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
4214# minio : Minio Cloud Storage, remember to configure secret and access key below
4215# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
4216# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
4217# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
4218# jdbc: JDBC Connector, remember to configure JDBC below. (jdbc_app_configs)
4219# hive: Hive Connector, remember to configure Hive below. (hive_app_configs)
4220# recipe_file: Custom recipe file upload
4221# recipe_url: Custom recipe upload via url
4222# h2o_drive: H2O Drive, remember to configure `h2o_drive_endpoint_url` below
4223# feature_store: Feature Store, remember to configure feature_store_endpoint_url below
4224# databricks: Databricks connector.
4225# delta_table: Delta Table connector.
4226#
4227#enabled_file_systems = "['upload', 'file', 'hdfs', 's3', 'recipe_file', 'recipe_url']"
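
# A minimal sketch: restrict connectors to local files, S3, and custom recipe
# uploads only.
# enabled_file_systems = "['upload', 'file', 's3', 'recipe_file', 'recipe_url']"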
4228
4229#max_files_listed = 100
4230
4231# This option disables access to the DAI data_directory from the file browser.
4232#file_hide_data_directory = true
4233
4234# Enable usage of path filters
4235#file_path_filtering_enabled = false
4236
4237# List of absolute path prefixes to restrict access to in file system browser.
4238# First add the following environment variable to your command line to enable this feature:
4239# file_path_filtering_enabled=true
4240# This feature can be used in the following ways (using specific path or using logged user's directory):
4241# file_path_filter_include="['/data/stage']"
4242# file_path_filter_include="['/data/stage','/data/prod']"
4243# file_path_filter_include=/home/{{DAI_USERNAME}}/
4244# file_path_filter_include="['/home/{{DAI_USERNAME}}/','/data/stage','/data/prod']"
4245#
4246#file_path_filter_include = "[]"
4247
4248# (Required) HDFS connector
4249# Specify HDFS Auth Type, allowed options are:
4250# noauth : (default) No authentication needed
4251# principal : Authenticate with HDFS with a principal user (DEPRECATED - use `keytab` auth type)
4252# keytab : Authenticate with a Key tab (recommended). If running
4253# DAI as a service, then the Kerberos keytab needs to
4254# be owned by the DAI user.
4255# keytabimpersonation : Login with impersonation using a keytab
4256#hdfs_auth_type = "noauth"
4257
4258# Kerberos app principal user. Required when hdfs_auth_type='keytab'; recommended otherwise.
4259#hdfs_app_principal_user = ""
4260
4261# Deprecated - Do Not Use, login user is taken from the user name from login
4262#hdfs_app_login_user = ""
4263
4264# JVM args for HDFS distributions; provide args separated by spaces.
4265# -Djava.security.krb5.conf=<path>/krb5.conf
4266# -Dsun.security.krb5.debug=True
4267# -Dlog4j.configuration=file:///<path>log4j.properties
4268#hdfs_app_jvm_args = ""
4269
4270# hdfs class path
4271#hdfs_app_classpath = ""
4272
4273# List of supported DFS schemas. Ex. "['hdfs://', 'maprfs://', 'swift://']"
4274# Supported schemas list is used as an initial check to ensure valid input to connector
4275#
4276#hdfs_app_supported_schemes = "['hdfs://', 'maprfs://', 'swift://']"
4277
4278# Maximum number of files viewable in the connector UI. Set to a larger number to view more files.
4279#hdfs_max_files_listed = 100
4280
4281# Starting HDFS path displayed in UI HDFS browser
4282#hdfs_init_path = "hdfs://"
4283
4284# Starting HDFS path for the artifacts upload operations
4285#hdfs_upload_init_path = "hdfs://"
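
# A minimal sketch (hypothetical principal, paths, and namenode) of a
# keytab-based HDFS setup using the settings above:
# hdfs_auth_type = "keytab"
# hdfs_app_principal_user = "dai/server.example.com@EXAMPLE.COM"
# hdfs_app_jvm_args = "-Djava.security.krb5.conf=/etc/krb5.conf"
# hdfs_init_path = "hdfs://namenode.example.com:8020/data/"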
4286
4287# Enables the multi-user mode for MapR integration, which allows a MapR ticket per user.
4288#enable_mapr_multi_user_mode = false
4289
4290# Blue Data DTap connector settings are similar to HDFS connector settings.
4291# Specify DTap Auth Type, allowed options are:
4292# noauth : No authentication needed
4293# principal : Authenticate with DTap with a principal user
4294# keytab : Authenticate with a Key tab (recommended). If running
4295# DAI as a service, then the Kerberos keytab needs to
4296# be owned by the DAI user.
4297# keytabimpersonation : Login with impersonation using a keytab
4298# NOTE: "hdfs_app_classpath" and "core_site_xml_path" are both required to be set for DTap connector
4299#dtap_auth_type = "noauth"
4300
4301# DTap (HDFS) config folder path; can contain multiple config files.
4302#dtap_config_path = ""
4303
4304# Path of the principal keytab file. dtap_key_tab_path is deprecated; please use dtap_keytab_path.
4305#dtap_key_tab_path = ""
4306
4307# Path of the principal key tab file
4308#dtap_keytab_path = ""
4309
4310# Kerberos app principal user (recommended)
4311#dtap_app_principal_user = ""
4312
4313# Specify the user id of the current user here as user@realm
4314#dtap_app_login_user = ""
4315
4316# JVM args for DTap distributions; provide args separated by spaces.
4317#dtap_app_jvm_args = ""
4318
4319# DTap (HDFS) class path. NOTE: set 'hdfs_app_classpath' also
4320#dtap_app_classpath = ""
4321
4322# Starting DTAP path displayed in UI DTAP browser
4323#dtap_init_path = "dtap://"
4324
4325# S3 Connector credentials
4326#aws_access_key_id = ""
4327
4328# S3 Connector credentials
4329#aws_secret_access_key =
4330
4331# S3 Connector credentials
4332#aws_role_arn = ""
4333
4334# Which region to use when none is specified in the S3 URL.
4335# Ignored when aws_s3_endpoint_url is set.
4336#
4337#aws_default_region = ""
4338
4339# Sets endpoint URL that will be used to access S3.
4340#aws_s3_endpoint_url = ""
4341
4342# If set to true, the S3 Connector will try to obtain credentials associated with
4343# the role attached to the EC2 instance.
4344#aws_use_ec2_role_credentials = false
4345
4346# Starting S3 path displayed in UI S3 browser
4347#s3_init_path = "s3://"
4348
4349# S3 Connector will skip cert verification if this is set to true (mostly used for S3-like connectors, e.g. Ceph).
4350#s3_skip_cert_verification = false
4351
4352# path/to/cert/bundle.pem - A filename of the CA cert bundle to use for the S3 connector
4353#s3_connector_cert_location = ""
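
# A minimal sketch (hypothetical credentials and endpoint) for an S3-compatible
# object store such as Ceph, using the settings above:
# aws_access_key_id = "AKIAEXAMPLE"
# aws_secret_access_key = "example-secret-key"
# aws_s3_endpoint_url = "https://s3.internal.example.com"
# s3_init_path = "s3://my-bucket/datasets/"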
4354
4355# GCS Connector credentials
4356# example (suggested) -- '/licenses/my_service_account_json.json'
4357#gcs_path_to_service_account_json = ""
4358
4359# GCS Connector service account credentials in JSON, this configuration takes precedence over gcs_path_to_service_account_json.
4360#gcs_service_account_json = "{}"
4361
4362# GCS Connector impersonated account
4363#gbq_access_impersonated_account = ""
4364
4365# Starting GCS path displayed in UI GCS browser
4366#gcs_init_path = "gs://"
4367
4368# Space-separated list of OAuth2 scopes for the access token used to authenticate in Google Cloud Storage.
4369#gcs_access_token_scopes = ""
4370
4371# When ``google_cloud_use_oauth`` is enabled, Google Cloud client cannot automatically infer the default project, thus it must be explicitly specified
4372#gcs_default_project_id = ""
4373
4374# Space-separated list of OAuth2 scopes for the access token used to authenticate in Google BigQuery.
4375#gbq_access_token_scopes = ""
4376
4377# By default, the Driverless AI Google Cloud Storage and BigQuery connectors use a service account file to retrieve authentication credentials. When enabled, the Storage and BigQuery connectors will instead use OAuth2 user access tokens to authenticate in Google Cloud.
4378#google_cloud_use_oauth = false
4379
4380# Minio Connector credentials
4381#minio_endpoint_url = ""
4382
4383# Minio Connector credentials
4384#minio_access_key_id = ""
4385
4386# Minio Connector credentials
4387#minio_secret_access_key =
4388
4389# Minio Connector will skip cert verification if this is set to true
4390#minio_skip_cert_verification = false
4391
4392# path/to/cert/bundle.pem - A filename of the CA cert bundle to use for the Minio connector
4393#minio_connector_cert_location = ""
4394
4395# Starting Minio path displayed in UI Minio browser
4396#minio_init_path = "/"
4397
4398# H2O Drive server endpoint URL
4399#h2o_drive_endpoint_url = ""
4400
4401# Space-separated list of OpenID scopes for the access token used by the H2O Drive connector.
4402#h2o_drive_access_token_scopes = ""
4403
4404# Maximum duration (in seconds) for a session with the H2O Drive
4405#h2o_drive_session_duration = 10800
4406
4407# Recommended: provide url, user, password.
4408# Optionally: provide account, user, password.
4409# Example URL: https://<snowflake_account>.<region>.snowflakecomputing.com
4410# Snowflake Connector credentials
4411#snowflake_url = ""
4412
4413# Snowflake Connector credentials
4414#snowflake_user = ""
4415
4416# Snowflake Connector credentials
4417#snowflake_password = ""
4418
4419# Snowflake Connector credentials
4420#snowflake_account = ""
4421
4422# Snowflake Connector authenticator, can be used when Snowflake is using native SSO with Okta.
4423# E.g.: snowflake_authenticator = "https://<okta_account_name>.okta.com"
4424#
4425#snowflake_authenticator = ""
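
# A minimal sketch (hypothetical account and Okta URL) of a Snowflake setup
# using native SSO with Okta, combining the settings above:
# snowflake_url = "https://myaccount.us-east-1.snowflakecomputing.com"
# snowflake_user = "dai_user"
# snowflake_password = "********"
# snowflake_authenticator = "https://mycompany.okta.com"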
4426
4427# Keycloak endpoint for retrieving external IdP tokens for Snowflake. (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
4428#snowflake_keycloak_broker_token_endpoint = ""
4429
4430# Token type that should be used from the response from Keycloak endpoint for retrieving external IdP tokens for Snowflake. See `snowflake_keycloak_broker_token_endpoint`.
4431#snowflake_keycloak_broker_token_type = "access_token"
4432
4433# ID of the OAuth client configured in H2O Secure Store for authentication with Snowflake.
4434#snowflake_h2o_secure_store_oauth_client_id = ""
4435
4436# Snowflake hostname to connect to when running Driverless AI in Snowpark Container Services.
4437#snowflake_host = ""
4438
4439# Snowflake port to connect to when running Driverless AI in Snowpark Container Services.
4440#snowflake_port = ""
4441
4442# Snowflake filepath that stores the token of the session, when running
4443# Driverless AI in Snowpark Container Services.
4444# E.g.: snowflake_session_token_filepath = "/snowflake/session/token"
4445#
4446#snowflake_session_token_filepath = ""
4447
4448# Setting to allow or disallow the Snowflake connector from using Snowflake stages during queries.
4449# True - permits the connector to use stages and generally improves performance. However,
4450# queries will end in errors if the Snowflake user does not have permission to create/use stages.
4451# False - prevents the connector from using stages, so Snowflake users without permission
4452# to create/use stages will have successful queries; however, this may significantly degrade
4453# query performance.
4454#
4455#snowflake_allow_stages = true
4456
4457# Sets the file format to be used when Snowflake stages are enabled for
4458# query execution.
4459#
4460#snowflake_stages_file_format = "CSV"
4461
4462# Sets the upper size limit (in bytes) of each file to be generated when
4463# Snowflake stages are enabled for query execution.
4464#
4465#snowflake_stages_max_file_size = 16777216
4466
4467# Optional schema name where temporary Snowflake stages should be created.
4468# If set, the Snowflake connector creates all temporary stages in this schema instead of the table’s schema.
4469# Requirements:
4470# - The Snowflake user/role must have permission to create and use stages
4471# in the specified schema.
4472# - If unset, the Snowflake connector creates stages in the table’s schema
4473# (default Snowflake behavior).
4474# Applies only when 'snowflake_allow_stages' is True
4475#
4476#snowflake_staging_schema = ""
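#
# For example (the schema name is hypothetical), to create temporary stages in a
# dedicated schema when stages are enabled:
# snowflake_allow_stages = true
# snowflake_staging_schema = "DAI_STAGING"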
4477
# Sets the number of rows to be fetched by the Snowflake cursor at one time. This is only used
# when `snowflake_allow_stages` is set to False, and may help with performance depending on the
# type and size of data being queried.
4481#
4482#snowflake_batch_size = 10000
4483
4484# KDB Connector credentials
4485#kdb_user = ""
4486
4487# KDB Connector credentials
4488#kdb_password = ""
4489
# KDB Connector hostname
4491#kdb_hostname = ""
4492
# KDB Connector port
4494#kdb_port = ""
4495
# KDB Connector application classpath
4497#kdb_app_classpath = ""
4498
# KDB Connector JVM arguments
4500#kdb_app_jvm_args = ""
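
# For example (hostname, port, and user are hypothetical):
# kdb_hostname = "kdb.example.com"
# kdb_port = "5001"
# kdb_user = "dai_user"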
4501
4502# Account name for Azure Blob Store Connector
4503#azure_blob_account_name = ""
4504
4505# Account key for Azure Blob Store Connector
4506#azure_blob_account_key =
4507
4508# Connection string for Azure Blob Store Connector
4509#azure_connection_string =
4510
4511# SAS token for Azure Blob Store Connector
4512#azure_sas_token =
4513
4514# Starting Azure blob store path displayed in UI Azure blob store browser
4515#azure_blob_init_path = "https://"
4516
4517# When enabled, Azure Blob Store Connector will use access token derived from the credentials received on login with OpenID Connect.
4518#azure_blob_use_access_token = false
4519
# Configures the scopes for the access token used by the Azure Blob Store Connector when azure_blob_use_access_token is enabled. (space-separated list)
4521#azure_blob_use_access_token_scopes = "https://storage.azure.com/.default"
4522
# Sets the source of the access token for accessing the Azure Blob Store.
4524# KEYCLOAK: Will exchange the session access token for the federated
4525# refresh token with Keycloak and use it to obtain the access token
4526# directly with the Azure AD.
4527# SESSION: Will use the access token derived from the credentials
4528# received on login with OpenID Connect.
4529#
4530#azure_blob_use_access_token_source = "SESSION"
4531
4532# Application (client) ID registered on Azure AD when the KEYCLOAK source is enabled.
4533#azure_blob_keycloak_aad_client_id = ""
4534
4535# Application (client) secret when the KEYCLOAK source is enabled.
4536#azure_blob_keycloak_aad_client_secret = ""
4537
4538# A URL that identifies a token authority. It should be of the format https://login.microsoftonline.com/your_tenant
4539#azure_blob_keycloak_aad_auth_uri = ""
4540
4541# Keycloak Endpoint for Retrieving External IDP Tokens (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
4542#azure_blob_keycloak_broker_token_endpoint = ""
4543
# (DEPRECATED, use azure_blob_use_access_token and
# azure_blob_use_access_token_source="KEYCLOAK" instead.
# When enabled, only the DEPRECATED options azure_ad_client_id,
# azure_ad_client_secret, azure_ad_auth_uri and
# azure_keycloak_idp_token_endpoint will be effective.
# This is equivalent to setting
# azure_blob_use_access_token_source = "KEYCLOAK"
# and setting the
# azure_blob_keycloak_aad_client_id,
# azure_blob_keycloak_aad_client_secret,
# azure_blob_keycloak_aad_auth_uri and
# azure_blob_keycloak_broker_token_endpoint
# options.)
# If true, enable the Azure Blob Storage Connector to use Azure AD tokens
# obtained from Keycloak for auth.
4559#
4560#azure_enable_token_auth_aad = false
4561
4562# (DEPRECATED, use azure_blob_keycloak_aad_client_id instead.) Application (client) ID registered on Azure AD
4563#azure_ad_client_id = ""
4564
4565# (DEPRECATED, use azure_blob_keycloak_aad_client_secret instead.) Application Client Secret
4566#azure_ad_client_secret = ""
4567
# (DEPRECATED, use azure_blob_keycloak_aad_auth_uri instead.) A URL that identifies a token authority. It should be of the format https://login.microsoftonline.com/your_tenant
4569#azure_ad_auth_uri = ""
4570
# (DEPRECATED, use azure_blob_use_access_token_scopes instead.) Scopes requested to access a protected API (a resource).
4572#azure_ad_scopes = "[]"
4573
# (DEPRECATED, use azure_blob_keycloak_broker_token_endpoint instead.) Keycloak Endpoint for Retrieving External IDP Tokens (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
4575#azure_keycloak_idp_token_endpoint = ""
4576
4577# ID of the application's Microsoft Entra tenant, also called its 'directory' ID.
4578# This is used for Azure Workload Identity.
4579#
4580#azure_workload_identity_tenant_id = ""
4581
4582# The client ID of a Microsoft Entra app registration.
4583# This is used for Azure Workload Identity.
4584#
4585#azure_workload_identity_client_id = ""
4586
4587# The path to a file containing a Kubernetes service account token that authenticates the identity.
4588# This is used for Azure Workload Identity.
4589#
4590#azure_workload_identity_token_file_path = ""
4591
# Desired scopes for the access token when the Databricks connector is using
# Azure Workload Identity authentication. At least one scope should be specified.
4594# For more information about scopes, see https://learn.microsoft.com/entra/identity-platform/scopes-oidc.
4595#
4596#databricks_azure_workload_identity_scopes = ""
4597
# Desired scopes for the access token when the Azure Blob connector is using
# Azure Workload Identity authentication. At least one scope should be specified.
4600# For more information about scopes, see https://learn.microsoft.com/entra/identity-platform/scopes-oidc.
4601#
4602#azure_blob_workload_identity_scopes = ""
4603
# Name of the Databricks workspace instance. Refer to
# https://learn.microsoft.com/en-us/azure/databricks/workspace/workspace-details
# for how to obtain the name of your Databricks workspace instance.
4607#
4608#databricks_workspace_instance_name = ""
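
# For example (the workspace ID below is hypothetical):
# databricks_workspace_instance_name = "adb-1234567890123456.7.azuredatabricks.net"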
4609
4610# Sets the number of rows to be fetched by the Databricks cursor at one time.
4611#databricks_batch_size = 100000
4612
4613# Configuration for JDBC Connector.
4614# JSON/Dictionary String with multiple keys.
4615# Format as a single line without using carriage returns (the following example is formatted for readability).
4616# Use triple quotations to ensure that the text is read as a single string.
4617# Example:
4618# '{
4619# "postgres": {
4620# "url": "jdbc:postgresql://ip address:port/postgres",
4621# "jarpath": "/path/to/postgres_driver.jar",
4622# "classpath": "org.postgresql.Driver"
4623# },
4624# "mysql": {
4625# "url":"mysql connection string",
4626# "jarpath": "/path/to/mysql_driver.jar",
4627# "classpath": "my.sql.classpath.Driver"
4628# }
4629# }'
4630#
4631#jdbc_app_configs = "{}"
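
# As a concrete single-line value (host and paths are hypothetical), using TOML
# triple quotes so the JSON is read as one string:
# jdbc_app_configs = """{"postgres": {"url": "jdbc:postgresql://192.0.2.10:5432/postgres", "jarpath": "/opt/jdbc/postgresql.jar", "classpath": "org.postgresql.Driver"}}"""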
4632
# Extra JVM args for the JDBC connector
4634#jdbc_app_jvm_args = "-Xmx4g"
4635
# Alternative classpath for the JDBC connector
4637#jdbc_app_classpath = ""
4638
4639# Configuration for Hive Connector.
4640# Note that inputs are similar to configuring HDFS connectivity.
4641# important keys:
# * hive_conf_path - path to hive configuration; may have multiple files, typically hive-site.xml, hdfs-site.xml, etc.
4643# * auth_type - one of `noauth`, `keytab`, `keytabimpersonation` for kerberos authentication
4644# * keytab_path - path to the kerberos keytab to use for authentication, can be "" if using `noauth` auth_type
4645# * principal_user - Kerberos app principal user. Required when using auth_type `keytab` or `keytabimpersonation`
4646# JSON/Dictionary String with multiple keys. Example:
4647# '{
4648# "hive_connection_1": {
4649# "hive_conf_path": "/path/to/hive/conf",
4650# "auth_type": "one of ['noauth', 'keytab', 'keytabimpersonation']",
4651# "keytab_path": "/path/to/<filename>.keytab",
4652# "principal_user": "hive/localhost@EXAMPLE.COM",
4653# },
4654# "hive_connection_2": {
4655# "hive_conf_path": "/path/to/hive/conf_2",
4656# "auth_type": "one of ['noauth', 'keytab', 'keytabimpersonation']",
4657# "keytab_path": "/path/to/<filename_2>.keytab",
4658# "principal_user": "my_user/localhost@EXAMPLE.COM",
4659# }
4660# }'
4661#
4662#hive_app_configs = "{}"
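
# As a concrete single-line value (paths are hypothetical), using TOML triple quotes:
# hive_app_configs = """{"hive_connection_1": {"hive_conf_path": "/etc/hive/conf", "auth_type": "keytab", "keytab_path": "/etc/security/dai.keytab", "principal_user": "hive/localhost@EXAMPLE.COM"}}"""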
4663
4664# Extra jvm args for hive connector
4665#hive_app_jvm_args = "-Xmx4g"
4666
4667# Alternative classpath for hive connector. Can be used to add additional jar files to classpath.
4668#hive_app_classpath = ""
4669
# Extra JVM args for the Delta Table connector.
4671#delta_table_app_jvm_args = "-Xmx4g"
4672
4673# Alternative Java classpath for the Delta Table connector
4674#delta_table_app_classpath = ""
4675
# Replace all downloads on the experiment page with exports, and allow users to push to the artifact store configured with artifacts_store
4677#enable_artifacts_upload = false
4678
4679# Artifacts store.
4680# file_system: stores artifacts on a file system directory denoted by artifacts_file_system_directory.
4681# s3: stores artifacts to S3 bucket.
4682# bitbucket: stores data into Bitbucket repository.
4683# azure: stores data into Azure Blob Store.
4684# hdfs: stores data into a Hadoop distributed file system location.
4685#
4686#artifacts_store = "file_system"
4687
4688# Decide whether to skip cert verification for Bitbucket when using a repo with HTTPS
4689#bitbucket_skip_cert_verification = false
4690
4691# Local temporary directory to clone artifacts to, relative to data_directory
4692#bitbucket_tmp_relative_dir = "local_git_tmp"
4693
4694# File system location where artifacts will be copied in case artifacts_store is set to file_system
4695#artifacts_file_system_directory = "tmp"
4696
4697# AWS S3 bucket used for experiment artifact export.
4698#artifacts_s3_bucket = ""
4699
4700# Azure Blob Store credentials used for experiment artifact export
4701#artifacts_azure_blob_account_name = ""
4702
4703# Azure Blob Store credentials used for experiment artifact export
4704#artifacts_azure_blob_account_key =
4705
4706# Azure Blob Store connection string used for experiment artifact export
4707#artifacts_azure_connection_string =
4708
4709# Azure Blob Store SAS token used for experiment artifact export
4710#artifacts_azure_sas_token =
4711
4712# Git auth user
4713#artifacts_git_user = "git"
4714
4715# Git auth password
4716#artifacts_git_password = ""
4717
# Git repo where artifacts will be pushed upon upload
4719#artifacts_git_repo = ""
4720
4721# Git branch on the remote repo where artifacts are pushed
4722#artifacts_git_branch = "dev"
4723
4724# File location for the ssh private key used for git authentication
4725#artifacts_git_ssh_private_key_file_location = ""
4726
4727# Feature Store server endpoint URL
4728#feature_store_endpoint_url = ""
4729
4730# Enable TLS communication between DAI and the Feature Store server
4731#feature_store_enable_tls = false
4732
4733# Path to the client certificate to authenticate with the Feature Store server. This is only effective when feature_store_enable_tls=True.
4734#feature_store_tls_cert_path = ""
4735
# A list of access token scopes used by the Feature Store connector to authenticate. (space-separated list)
4737#feature_store_access_token_scopes = ""
4738
4739# When defined, will be used as an alternative recipe implementation for the FeatureStore connector.
4740#feature_store_custom_recipe_location = ""
4741
# If enabled, GPT functionality such as summarization will be available. If the `openai_api_secret_key` config is provided, the OpenAI API will be used. Make sure this does not break your internal policy.
4743#enable_gpt = false
4744
4745# OpenAI API secret key. Beware that if this config is set and `enable_gpt` is `true`, we will send some metadata about datasets and experiments to OpenAI (during dataset and experiment summarization). Make sure that passing such data to OpenAI does not break your internal policy.
4746#openai_api_secret_key =
4747
4748# OpenAI model to use.
4749#openai_api_model = "gpt-4"
4750
# h2oGPT URL endpoint that will be used for GPT-related purposes (e.g. summarization). If both `h2ogpt_url` and `openai_api_secret_key` are provided, only the h2oGPT URL is used.
4752#h2ogpt_url = ""
4753
4754# The h2oGPT Key required for specific h2oGPT URLs, enabling authorized access for GPT-related tasks like summarization.
4755#h2ogpt_key =
4756
4757# Name of the h2oGPT model that should be used. If not specified the default model in the h2oGPT will be used.
4758#h2ogpt_model_name = ""
4759
4760# Default AWS credentials to be used for scorer deployments.
4761#deployment_aws_access_key_id = ""
4762
4763# Default AWS credentials to be used for scorer deployments.
4764#deployment_aws_secret_access_key = ""
4765
4766# AWS S3 bucket to be used for scorer deployments.
4767#deployment_aws_bucket_name = ""
4768
4769# Approximate upper limit of time for Triton to take to compute latency and throughput performance numbers when performing 'Benchmark' operations for a deployment. Higher values result in more accurate performance numbers.
4770#triton_benchmark_runtime = 5
4771
4772# Approximate upper limit of time for Triton to take to compute latency and throughput performance numbers after loading up the deployment, per model. Higher values result in more accurate performance numbers.
4773#triton_quick_test_runtime = 2
4774
4775# Number of Triton deployments to show per page of the Deploy Wizard
4776#deploy_wizard_num_per_page = 10
4777
4778# Whether to allow user to change non-server toml parameters per experiment in expert page.
4779#allow_config_overrides_in_expert_page = true
4780
4781# Maximum number of columns in each head and tail to log when ingesting data or running experiment on data.
4782#max_cols_log_headtail = 1000
4783
4784# Maximum number of columns in each head and tail to show in GUI, useful when head or tail has all necessary columns, but too many for UI or web server to handle.
4785# -1 means no limit.
4786# A reasonable value is 500, after which web server or browser can become overloaded and use too much memory.
4787# Some values of column counts in UI may not show up correctly, and some dataset details functions may not work.
4788# To select (from GUI or client) any columns as being target, weight column, fold column, time column, time column groups, or dropped columns, the dataset should have those columns within the selected head or tail set of columns.
4789#max_cols_gui_headtail = 1000
4790
4791# Supported file formats (file name endings must match for files to show up in file browser)
4792#supported_file_types = "['csv', 'tsv', 'txt', 'dat', 'tgz', 'gz', 'bz2', 'zip', 'xz', 'xls', 'xlsx', 'jay', 'feather', 'bin', 'arff', 'parquet', 'pkl', 'orc', 'avro']"
4793
4794# Supported file formats of data recipe files (file name endings must match for files to show up in file browser)
4795#recipe_supported_file_types = "['py', 'pyc', 'zip']"
4796
4797# By default, only supported file types (based on the file extensions listed above) will be listed for import into DAI
4798# Some data pipelines generate parquet files without any extensions. Enabling the below option will cause files
4799# without an extension to be listed in the file import dialog.
# DAI will import files without extensions as parquet files; if a file cannot be imported, an error is generated
4801#
4802#list_files_without_extensions = false
4803
4804# Allow using browser localstorage, to improve UX.
4805#allow_localstorage = true
4806
4807# Allow original dataset columns to be present in downloaded predictions CSV
4808#allow_orig_cols_in_predictions = true
4809
4810# Allow the browser to store e.g. login credentials in login form (set to false for higher security)
4811#allow_form_autocomplete = true
4812
4813# Enable Projects workspace (alpha version, for evaluation)
4814#enable_projects = true
4815
4816# Default application language - options are 'en', 'ja', 'cn', 'ko'
4817#app_language = "en"
4818
4819# If true, Logout button is not visible in the GUI.
4820#disablelogout = false
4821
4822# Local path to the location of the Driverless AI Python Client. If empty, will download from s3
4823#python_client_path = ""
4824
# If disabled, the server won't verify whether the WHL package specified in `python_client_path` is a valid DAI Python client. Default is True
4826#python_client_verify_integrity = true
4827
# When enabled, creating a new experiment requires specifying an experiment name
4829#gui_require_experiment_name = false
4830
4831# When disabled, Deploy option will be disabled on finished experiment page
4832#gui_enable_deploy_button = true
4833
4834# Display experiment tour
4835#enable_gui_product_tour = true
4836
4837# Whether user can download dataset as csv file
4838#enable_dataset_downloading = true
4839
4840# If enabled, user can export experiment as a Zip file
4841#enable_experiment_export = true
4842
# If enabled, user can import experiments exported as Zip files from Driverless AI
4844#enable_experiment_import = true
4845
4846# (EXPERIMENTAL) If enabled, user can launch experiment via new `Predict Wizard` options, which navigates to the new Nitro wizard.
4847#enable_experiment_wizard = true
4848
4849# (EXPERIMENTAL) If enabled, user can do joins via new `Join Wizard` options, which navigates to the new Nitro wizard.
4850#enable_join_wizard = true
4851
4852# URL address of the H2O AI link
4853#hac_link_url = "https://www.h2o.ai/freetrial/?utm_source=dai&ref=dai"
4854
4855#show_all_filesystems = false
4856
4857# Switches Driverless AI to use H2O.ai License Management Server to manage licenses/permission to use software
4858#enable_license_manager = false
4859
4860# Address at which to communicate with H2O.ai License Management Server.
4861# Requires above value, `enable_license_manager` set to True.
4862# Format: {http/https}://{ip address}:{port number}
4863#
4864#license_manager_address = "http://127.0.0.1:9999"
4865
4866# Name of license manager project that Driverless AI will attempt to retrieve leases from.
4867# NOTE: requires an active license within the License Manager Server to function properly
4868#
4869#license_manager_project_name = "default"
4870
4871# Number of milliseconds a lease for users will be expected to last,
4872# if using the H2O.ai License Manager server, before the lease REQUIRES renewal.
4873# Default: 3600000 (1 hour) = 1 hour * 60 min / hour * 60 sec / min * 1000 milliseconds / sec
4874#
4875#license_manager_lease_duration = 3600000
4876
4877# Number of milliseconds a lease for Driverless AI worker nodes will be expected to last,
4878# if using the H2O.ai License Manager server, before the lease REQUIRES renewal.
4879# Default: 21600000 (6 hour) = 6 hour * 60 min / hour * 60 sec / min * 1000 milliseconds / sec
4880#
4881#license_manager_worker_lease_duration = 21600000
4882
# To be used only if the License Manager server is started with HTTPS.
# Accepts a boolean: true/false, or a path to a file/directory. Denotes whether or not to attempt
# SSL Certificate verification when making a request to the License Manager server.
# True: attempt ssl certificate verification; will fail if certificates are self-signed
# False: skip ssl certificate verification.
4888# /path/to/cert/directory: load certificates <cert.pem> in directory and use those for certificate verification
4889# Behaves in the same manner as python requests package:
4890# https://requests.readthedocs.io/en/latest/user/advanced/#ssl-cert-verification
4891#
4892#license_manager_ssl_certs = "true"
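
# Examples (the certificate directory below is hypothetical):
# license_manager_ssl_certs = "true"           # verify against the system CA bundle
# license_manager_ssl_certs = "false"          # skip certificate verification
# license_manager_ssl_certs = "/etc/dai/certs" # use cert.pem files found in this directory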
4893
# Amount of time that Driverless AI workers will keep retrying startup and obtaining a lease from
# the license manager before timing out. Timing out will cause worker startup to fail.
4896#
4897#license_manager_worker_startup_timeout = 3600000
4898
# Emergency setting that will allow Driverless AI to run even if there are issues communicating with,
# or obtaining leases from, the License Manager server.
4901# This is an encoded string that can be obtained from either the license manager ui or the logs of the license
4902# manager server.
4903#
4904#license_manager_dry_run_token = ""
4905
4906# Choose LIME method to be used for creation of surrogate models.
4907#mli_lime_method = "k-LIME"
4908
4909# Choose whether surrogate models should be built for original or transformed features.
4910#mli_use_raw_features = true
4911
4912# Choose whether time series based surrogate models should be built for original features.
4913#mli_ts_use_raw_features = false
4914
4915# Choose whether to run all explainers on the sampled dataset.
4916#mli_sample = true
4917
4918# Set maximum number of features for which to build Surrogate Partial Dependence Plot. Use -1 to calculate Surrogate Partial Dependence Plot for all features.
4919#mli_vars_to_pdp = 10
4920
4921# Set the number of cross-validation folds for surrogate models.
4922#mli_nfolds = 3
4923
4924# Set the number of columns to bin in case of quantile binning.
4925#mli_qbin_count = 0
4926
4927# Number of threads for H2O instance for use by MLI.
4928#h2o_mli_nthreads = 8
4929
4930# Use this option to disable MOJO scoring pipeline. Scoring pipeline is chosen automatically (from MOJO and Python pipelines) by default. In case of certain models MOJO vs. Python choice can impact pipeline performance and robustness.
4931#mli_enable_mojo_scorer = true
4932
# When the number of rows is above this limit, sample the dataset for MLI scoring of UI data.
4934#mli_sample_above_for_scoring = 1000000
4935
# When the number of rows is above this limit, sample the dataset for MLI training of surrogate models.
4937#mli_sample_above_for_training = 100000
4938
4939# The sample size, number of rows, used for MLI surrogate models.
4940#mli_sample_size = 100000
4941
4942# Number of bins for quantile binning.
4943#mli_num_quantiles = 10
4944
4945# Number of trees for Random Forest surrogate model.
4946#mli_drf_num_trees = 100
4947
4948# Speed up predictions with a fast approximation (can reduce the number of trees or cross-validation folds).
4949#mli_fast_approx = true
4950
4951# Maximum number of interpreters status cache entries.
4952#mli_interpreter_status_cache_size = 1000
4953
4954# Max depth for Random Forest surrogate model.
4955#mli_drf_max_depth = 20
4956
# Not only sample training, but also sample scoring.
4958#mli_sample_training = true
4959
# Regularization strength for k-LIME GLMs.
4961#klime_lambda = "[1e-06, 1e-08]"
4962
# Regularization distribution between L1 and L2 for k-LIME GLMs.
4964#klime_alpha = 0.0
4965
4966# Max cardinality for numeric variables in surrogate models to be considered categorical.
4967#mli_max_numeric_enum_cardinality = 25
4968
4969# Maximum number of features allowed for k-LIME k-means clustering.
4970#mli_max_number_cluster_vars = 6
4971
4972# Use all columns for k-LIME k-means clustering (this will override `mli_max_number_cluster_vars` if set to `True`).
4973#use_all_columns_klime_kmeans = false
4974
4975# Strict version check for MLI
4976#mli_strict_version_check = true
4977
4978# MLI cloud name
4979#mli_cloud_name = ""
4980
# Compute original model ICE using each feature's bin predictions (true) or use the "one frame" strategy (false).
4982#mli_ice_per_bin_strategy = false
4983
4984# By default DIA will run for categorical columns with cardinality <= mli_dia_default_max_cardinality.
4985#mli_dia_default_max_cardinality = 10
4986
4987# By default DIA will run for categorical columns with cardinality >= mli_dia_default_min_cardinality.
4988#mli_dia_default_min_cardinality = 2
4989
# When the number of rows is above this limit, sample for the MLI transformed Shapley calculation.
4991#mli_shapley_sample_size = 100000
4992
4993# Enable MLI keeper which ensures efficient use of filesystem/memory/DB by MLI.
4994#enable_mli_keeper = true
4995
4996# Enable MLI Sensitivity Analysis
4997#enable_mli_sa = true
4998
4999# Enable priority queues based explainers execution. Priority queues restrict available system resources and prevent system over-utilization. Interpretation execution time might be (significantly) slower.
5000#enable_mli_priority_queues = true
5001
# Explainers are run sequentially by default. This option can be used to run all explainers in parallel, which can - depending on hardware strength and the number of explainers - decrease interpretation duration. Consider explainer dependencies, random explainer order, and hardware over-utilization.
5003#mli_sequential_task_execution = true
5004
# When the number of rows is above this limit, sample for Disparate Impact Analysis.
5006#mli_dia_sample_size = 100000
5007
# When the number of rows is above this limit, sample for the Partial Dependence Plot.
5009#mli_pd_sample_size = 25000
5010
# Use dynamic switching between Partial Dependence Plot numeric and categorical binning and UI chart selection for features that were used both as numeric and categorical by the experiment.
5012#mli_pd_numcat_num_chart = true
5013
# If 'mli_pd_numcat_num_chart' is enabled, use the numeric binning and chart if the feature's unique value count is greater than the threshold; otherwise use the categorical binning and chart.
5015#mli_pd_numcat_threshold = 11
5016
5017# In New Interpretation screen show only datasets which can be used to explain a selected model. This can slow down the server significantly.
5018#new_mli_list_only_explainable_datasets = false
5019
5020# Enable async/await-based non-blocking MLI API
5021#enable_mli_async_api = true
5022
5023# Enable main chart aggregator in Sensitivity Analysis
5024#enable_mli_sa_main_chart_aggregator = true
5025
5026# When to sample for Sensitivity Analysis (number of rows after sampling).
5027#mli_sa_sampling_limit = 500000
5028
5029# Run main chart aggregator in Sensitivity Analysis when the number of dataset instances is bigger than given limit.
5030#mli_sa_main_chart_aggregator_limit = 1000
5031
5032# Use predict_safe() (true) or predict_base() (false) in MLI (PD, ICE, SA, ...).
5033#mli_predict_safe = false
5034
# Maximum number of retries should the surrogate model fail to build.
5036#mli_max_surrogate_retries = 5
5037
5038# Allow use of symlinks (instead of file copy) by MLI explainer procedures.
5039#enable_mli_symlinks = true
5040
5041# Fraction of memory to allocate for h2o MLI jar
5042#h2o_mli_fraction_memory = 0.45
5043
5044# Add TOML string to Driverless AI server config.toml configuration file.
5045#mli_custom = ""
5046
5047# To exclude e.g. Sensitivity Analysis explainer use: excluded_mli_explainers=['h2oaicore.mli.byor.recipes.sa_explainer.SaExplainer'].
5048#excluded_mli_explainers = "[]"
5049
5050# Enable RPC API performance monitor.
5051#enable_ws_perfmon = false
5052
5053# Number of parallel workers when scoring using MOJO in Kernel Explainer.
5054#mli_kernel_explainer_workers = 4
5055
5056# Use Kernel Explainer to obtain Shapley values for original features.
5057#mli_run_kernel_explainer = false
5058
5059# Sample input dataset for Kernel Explainer.
5060#mli_kernel_explainer_sample = true
5061
5062# Sample size for input dataset passed to Kernel Explainer.
5063#mli_kernel_explainer_sample_size = 1000
5064
5065# 'auto' or int. Number of times to re-evaluate the model when explaining each prediction. More samples lead to lower variance estimates of the SHAP values. The 'auto' setting uses nsamples = 2 * X.shape[1] + 2048. This setting is disabled by default and DAI determines the right number internally.
5066#mli_kernel_explainer_nsamples = "auto"
5067
# 'num_features(int)', 'auto' (default for now, but deprecated), 'aic', 'bic', or float. The l1 regularization to use for feature selection (the estimation procedure is based on a debiased lasso). The 'auto' option currently uses aic when less than 20% of the possible sample space is enumerated, otherwise it uses no regularization. THE BEHAVIOR OF 'auto' WILL CHANGE in a future version to be based on 'num_features' instead of AIC. The aic and bic options use the AIC and BIC rules for regularization. Using 'num_features(int)' selects a fixed number of top features. Passing a float directly sets the alpha parameter of the sklearn.linear_model.Lasso model used for feature selection.
5069#mli_kernel_explainer_l1_reg = "aic"
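
# For example, to select a fixed number of top features (the count is illustrative):
# mli_kernel_explainer_l1_reg = "num_features(10)"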
5070
5071# Max runtime for Kernel Explainer in seconds. Default is 900, which equates to 15 minutes. Setting this parameter to -1 means to honor the Kernel Shapley sample size provided regardless of max runtime.
5072#mli_kernel_explainer_max_runtime = 900
5073
5074# Tokenizer used to extract tokens from text columns for MLI.
5075#mli_nlp_tokenizer = "tfidf"
5076
5077# Number of tokens used for MLI NLP explanations. -1 means all.
5078#mli_nlp_top_n = 20
5079
5080# Maximum number of records used by MLI NLP explainers.
5081#mli_nlp_sample_limit = 10000
5082
# Minimum number of documents in which a token has to appear. An integer means absolute count; a float means percentage.
5084#mli_nlp_min_df = 3
5085
# Maximum number of documents in which a token has to appear. An integer means absolute count; a float means percentage.
5087#mli_nlp_max_df = 0.9
5088
5089# The minimum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
5090#mli_nlp_min_ngram = 1
5091
5092# The maximum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
5093#mli_nlp_max_ngram = 1
5094
5095# Mode used to choose N tokens for MLI NLP.
5096# "top" chooses N top tokens.
5097# "bottom" chooses N bottom tokens.
5098# "top-bottom" chooses math.floor(N/2) top and math.ceil(N/2) bottom tokens.
5099# "linspace" chooses N evenly spaced out tokens.
5100#mli_nlp_min_token_mode = "top"
5101
5102# The number of top tokens to be used as features when building token based feature importance.
5103#mli_nlp_tokenizer_max_features = -1
5104
5105# The number of top tokens to be used as features when computing text LOCO.
5106#mli_nlp_loco_max_features = -1
5107
5108# The tokenizer method to use when tokenizing a dataset for surrogate models. Can either choose 'TF-IDF' or 'Linear Model + TF-IDF', which first runs TF-IDF to get tokens and then fits a linear model between the tokens and the target to get importances of tokens, which are based on coefficients of the linear model. Default is 'Linear Model + TF-IDF'. Only applies to NLP models.
5109#mli_nlp_surrogate_tokenizer = "Linear Model + TF-IDF"
5110
5111# The number of top tokens to be used as features when building surrogate models. Only applies to NLP models.
5112#mli_nlp_surrogate_tokens = 100
5113
5114# Ignore stop words for MLI NLP.
5115#mli_nlp_use_stop_words = true
5116
5117# List of words to filter out before generation of text tokens, which are passed to MLI NLP LOCO and surrogate models (if enabled). Default is 'english'. Pass in custom stop-words as a list, e.g., ['great', 'good'].
5118#mli_nlp_stop_words = "english"
5119
5120# Append passed in list of custom stop words to default 'english' stop words.
5121#mli_nlp_append_to_english_stop_words = false
5122
5123# Enable MLI for image experiments.
5124#mli_image_enable = true
5125
# The maximum number of rows allowed to get the local explanation result; increasing the value may jeopardize overall performance. Change the value only if necessary.
5127#mli_max_explain_rows = 500
5128
# The maximum number of rows allowed to get the NLP token importance result; increasing the value may consume too much memory and negatively impact performance. Change the value only if necessary.
5130#mli_nlp_max_tokens_rows = 50
5131
5132# The minimum number of rows to enable parallel execution for NLP local explanations calculation.
5133#mli_nlp_min_parallel_rows = 10
5134
5135# Run legacy defaults in addition to current default explainers in MLI.
5136#mli_run_legacy_defaults = false
5137
5138# Run explainers sequentially for one given MLI job.
5139#mli_run_explainers_sequentially = false
5140
# Set dask CUDA/RAPIDS cluster settings for single node workers.
# Additional environment variables can be set, see: https://dask-cuda.readthedocs.io/en/latest/ucx.html#dask-scheduler
# e.g. for ucx use: {} dict version of: dict(n_workers=None, threads_per_worker=1, processes=True, memory_limit='auto', device_memory_limit=None, CUDA_VISIBLE_DEVICES=None, data=None, local_directory=None, protocol='ucx', enable_tcp_over_ucx=True, enable_infiniband=False, enable_nvlink=False, enable_rdmacm=False, ucx_net_devices='auto', rmm_pool_size='1GB')
# WARNING: Do not add arguments like {'n_workers': 1, 'processes': True, 'threads_per_worker': 1}; this will lead to hangs. The CUDA cluster handles these itself.
#
#dask_cuda_cluster_kwargs = "{'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"

# Set dask cluster settings for single node workers.
#
#dask_cluster_kwargs = "{'n_workers': 1, 'processes': True, 'threads_per_worker': 1, 'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"

# Whether to start dask workers on this multinode worker.
#
#start_dask_worker = true

# Set dask scheduler env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_scheduler_env = "{}"

# Set dask cuda scheduler env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_scheduler_env = "{}"

# Set dask scheduler options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_scheduler_options = ""

# Set dask cuda scheduler options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_scheduler_options = ""

# Set dask worker env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_worker_env = "{'NCCL_P2P_DISABLE': '1', 'NCCL_DEBUG': 'WARN'}"

# Set dask worker options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_worker_options = "--memory-limit 0.95"

# Set dask cuda worker options.
# Similar options as dask_cuda_cluster_kwargs.
# See https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
# "--rmm-pool-size 1GB" can be set to give 1GB to RMM for more efficient rapids
#
#dask_cuda_worker_options = "--memory-limit 0.95"

# Set dask cuda worker env.
# See: https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
# https://ucx-py.readthedocs.io/en/latest/dask.html
#
#dask_cuda_worker_env = "{}"

# See https://docs.dask.org/en/latest/setup/cli.html
# e.g. ucx is optimal, while tcp is most reliable
#
#dask_protocol = "tcp"

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_server_port = 8786

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_dashboard_port = 8787

# See https://docs.dask.org/en/latest/setup/cli.html
# e.g. ucx is optimal, while tcp is most reliable
#
#dask_cuda_protocol = "tcp"

# See https://docs.dask.org/en/latest/setup/cli.html
# port + 1 is used for dask dashboard
#
#dask_cuda_server_port = 8790

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_dashboard_port = 8791

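
As a hedged sketch of the port/protocol settings above, the dask endpoints could be pinned so they can be opened in a firewall (values shown are the documented defaults, kept on the reliable tcp protocol):

```toml
# Hypothetical example: pin dask scheduler/dashboard ports explicitly.
dask_protocol = "tcp"
dask_server_port = 8786
dask_dashboard_port = 8787
dask_cuda_protocol = "tcp"
dask_cuda_server_port = 8790   # port + 1 is used for the dask dashboard
dask_cuda_dashboard_port = 8791
```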
# If empty string, auto-detect IP capable of reaching network.
# Required to be set if using worker_mode=multinode.
#
#dask_server_ip = ""

# Number of processes per dask (not cuda-GPU) worker.
# If -1, uses dask default of cpu count + 1 + nprocs.
# If -2, uses DAI default of total number of physical cores. Recommended for heavy feature engineering.
# If 1, assumes tasks are mostly multi-threaded and can use entire node per task. Recommended for heavy multinode model training.
# Only applicable to dask (not dask_cuda) workers
#
#dask_worker_nprocs = 1

# Number of threads per process for dask workers
#dask_worker_nthreads = 1

# Number of threads per process for dask_cuda workers
# If -2, uses DAI default of physical cores per GPU,
# since must have 1 worker/GPU only.
#
#dask_cuda_worker_nthreads = -2

# See https://github.com/dask/dask-lightgbm
#
#lightgbm_listen_port = 12400

# Whether to enable jupyter server
#enable_jupyter_server = false

# Port for jupyter server
#jupyter_server_port = 8889

# Whether to enable jupyter server browser
#enable_jupyter_server_browser = false

# Whether to allow root access to the jupyter server browser
#enable_jupyter_server_browser_root = false

# Hostname (or IP address) of remote Triton inference service (outside of DAI), to be used when auto_deploy_triton_scoring_pipeline
# and make_triton_scoring_pipeline are not disabled. If set, check triton_model_repository_dir_remote and triton_server_params_remote as well.
#
#triton_host_remote = ""

# Path to model repository directory for remote Triton inference server outside of Driverless AI. All Triton deployments for all users are stored in this directory. Requires write access to this directory from Driverless AI (shared file system). This setting is optional. If not provided, each model deployment will be uploaded over the gRPC protocol.
#triton_model_repository_dir_remote = ""

# Parameters to connect to remote Triton server, only used if triton_host_remote and
# triton_model_repository_dir_remote are set.
# Note: 'model-control-mode' needs to be set to 'explicit' in order to allow DAI to upload models to the remote
# Triton server.
#
#triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"

#triton_log_level = 0

#triton_model_reload_on_startup_count = 0

#triton_clean_up_temp_python_env_on_startup = true

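
Putting the three remote-Triton settings above together, a hedged illustration (host and path are placeholders) could be:

```toml
# Hypothetical example: point DAI at a remote Triton server with a shared
# model repository; 'model-control-mode' must stay 'explicit'.
triton_host_remote = "triton.example.com"
triton_model_repository_dir_remote = "/mnt/shared/triton-models"
triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"
```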
# When set to true, CPU executors will strictly run just CPU tasks.
#multinode_enable_strict_queue_policy = false

# Controls whether CPU tasks can run on GPU machines.
#multinode_enable_cpu_tasks_on_gpu_machines = true

# Storage medium to be used to exchange data between main server and remote worker nodes.
#multinode_storage_medium = "minio"

# How the long running tasks are scheduled.
# multiprocessing: forks the current process immediately.
# singlenode: shares the task through redis and needs a worker running.
# multinode: same as singlenode and also shares the data through minio
# and allows worker to run on a different machine.
#
#worker_mode = "singlenode"

# Redis settings
#redis_ip = "127.0.0.1"

# Redis settings
#redis_port = 6379

# Redis database. Each DAI instance running on the redis server should have a unique integer.
#redis_db = 0

# Redis password. Will be randomly generated at main server startup, and by default it will show up in the config file uncommented. If you are running more than one Driverless AI instance per system, make sure each and every instance is connected to its own redis queue.
#main_server_redis_password = "PlWUjvEJSiWu9j0aopOyL5KwqnrKtyWVoZHunqxr"

# If set to true, the config will get encrypted before it gets saved into the Redis database.
#redis_encrypt_config = false

# The port that Minio will listen on; this only takes effect if the current system is a multinode main server.
#local_minio_port = 9001

# Location of main server's minio server.
#main_server_minio_address = "127.0.0.1:9001"

# Access key of main server's minio server.
#main_server_minio_access_key_id = "GMCSE2K2T3RV6YEHJUYW"

# Secret access key of main server's minio server.
#main_server_minio_secret_access_key = "JFxmXvE/W1AaqwgyPxAUFsJZRnDWUaeQciZJUe9H"

# Name of minio bucket used for file synchronization.
#main_server_minio_bucket = "h2oai"

# S3 global access key.
#main_server_s3_access_key_id = "access_key"

# S3 global secret access key.
#main_server_s3_secret_access_key = "secret_access_key"

# S3 bucket.
#main_server_s3_bucket = "h2oai-multinode-tests"

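
The redis settings above can be combined as in this hedged sketch of a second DAI instance sharing one redis server (the address is a placeholder):

```toml
# Hypothetical example: each DAI instance on a shared redis host must use
# its own redis database number.
redis_ip = "10.0.0.5"        # assumed address of the shared redis host
redis_port = 6379
redis_db = 1                 # unique per DAI instance on this redis server
redis_encrypt_config = true
```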
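
For the minio exchange settings above, a hedged multinode illustration (address and keys are placeholders and must match the main server's actual values) might be:

```toml
# Hypothetical example: remote workers reaching the main server's minio.
main_server_minio_address = "10.0.0.5:9001"
main_server_minio_access_key_id = "EXAMPLEACCESSKEY"
main_server_minio_secret_access_key = "EXAMPLESECRETKEY"
main_server_minio_bucket = "h2oai"
```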
# Maximum number of local tasks processed at once, limited to no more than total number of physical (not virtual) cores divided by two (minimum of 1).
#worker_local_processors = 32

# A concurrency limit for the 3 priority queues, only enabled when worker_remote_processors is greater than 0.
#worker_priority_queues_processors = 4

# A timeout before which a scheduled task is bumped up in priority
#worker_priority_queues_time_check = 30

# Maximum number of remote tasks processed at once. If the value is set to -1, the system will automatically pick a reasonable limit depending on the number of available virtual CPU cores.
#worker_remote_processors = -1

# If worker_remote_processors >= 3, factor by which each task reduces threads, used by various packages like datatable, lightgbm, xgboost, etc.
#worker_remote_processors_max_threads_reduction_factor = 0.7

# Temporary file system location for multinode data transfer. This has to be an absolute path with equivalent configuration on both the main server and remote workers.
#multinode_tmpfs = ""

# When set to true, will use the 'multinode_tmpfs' as datasets store.
#multinode_store_datasets_in_tmpfs = false

# How often the server should extract results from the redis queue, in milliseconds.
#redis_result_queue_polling_interval = 100

# Sleep time for worker loop.
#worker_sleep = 0.1

# How many seconds the worker should wait for the main server minio bucket before it fails.
#main_server_minio_bucket_ping_timeout = 180

# A JSON list of up to two objects, where each object defines a worker node profile with name, num_cpus, num_gpus, memory_gb, gpu_is_mig. Currently, the profiles must be named CPU and GPU. The GPU profile must have num_gpus greater than 0. An example worker_spec: [{"name": "CPU", "num_cpus": 8, "num_gpus": 2, "memory_gb": 32, "gpu_is_mig": true}].
#worker_node_spec = ""

# How long the worker should wait on redis db initialization, in seconds.
#worker_start_timeout = 30

#worker_no_main_server_wait_time = 1800

#worker_no_main_server_wait_time_with_hard_assert = 30

# How many seconds the worker may fail to respond before being marked unhealthy.
#worker_healthy_response_period = 300

# Whether to enable priority queue for worker nodes to schedule experiments.
#
#enable_experiments_priority_queue = false

# Exposes the Driverless AI base version when enabled.
#expose_server_version = true

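
Following the JSON shape described above, a hedged two-profile spec (all counts are illustrative) could be written with escaped quotes inside the TOML string:

```toml
# Hypothetical example: one CPU profile and one MIG-backed GPU profile.
worker_node_spec = "[{\"name\": \"CPU\", \"num_cpus\": 16, \"num_gpus\": 0, \"memory_gb\": 64, \"gpu_is_mig\": false}, {\"name\": \"GPU\", \"num_cpus\": 8, \"num_gpus\": 2, \"memory_gb\": 32, \"gpu_is_mig\": true}]"
```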
# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#enable_https = false

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#ssl_key_file = "/etc/dai/private_key.pem"

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#ssl_crt_file = "/etc/dai/cert.pem"

# https settings
# Passphrase for the ssl_key_file;
# either use this setting or ssl_key_passphrase_file,
# or neither if no passphrase is used.
#ssl_key_passphrase = ""

# https settings
# Passphrase file for the ssl_key_file;
# either use this setting or ssl_key_passphrase,
# or neither if no passphrase is used.
#ssl_key_passphrase_file = ""

# SSL TLS
#ssl_no_sslv2 = true

# SSL TLS
#ssl_no_sslv3 = true

# SSL TLS
#ssl_no_tlsv1 = true

# SSL TLS
#ssl_no_tlsv1_1 = true

# SSL TLS
#ssl_no_tlsv1_2 = false

# SSL TLS
#ssl_no_tlsv1_3 = false

# https settings
# Sets the client verification mode.
# CERT_NONE: The client does not need to provide a certificate, and if it does, any
# verification errors are ignored.
# CERT_OPTIONAL: The client does not need to provide a certificate, and if it does,
# the certificate is verified against the set-up CA chains.
# CERT_REQUIRED: The client needs to provide a certificate, and the certificate is
# verified.
# You'll need to set 'ssl_client_key_file' and 'ssl_client_crt_file'
# when this mode is selected, for Driverless AI to be able to verify
# its own callback requests.
#
#ssl_client_verify_mode = "CERT_NONE"

# https settings
# Path to the Certification Authority certificate file. This certificate will be
# used to verify the client certificate when client authentication is turned on.
# If this is not set, clients are verified using default system certificates.
#
#ssl_ca_file = ""

# https settings
# Path to the private key that Driverless AI will use to authenticate itself when
# CERT_REQUIRED mode is set.
#
#ssl_client_key_file = ""

# https settings
# Path to the client certificate that Driverless AI will use to authenticate itself
# when CERT_REQUIRED mode is set.
#
#ssl_client_crt_file = ""

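
Once a certificate pair exists (for instance the self-signed one produced by the openssl commands above), enabling HTTPS is a matter of uncommenting three settings; a hedged sketch:

```toml
# Hypothetical example: serve DAI over HTTPS with the key/cert in /etc/dai.
enable_https = true
ssl_key_file = "/etc/dai/private_key.pem"
ssl_crt_file = "/etc/dai/cert.pem"
```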
# If enabled, the webserver will serve xsrf cookies and verify their validity upon every POST request.
#enable_xsrf_protection = true

# Sets the `SameSite` attribute for the `_xsrf` cookie; options are "Lax", "Strict", or "".
#xsrf_cookie_samesite = ""

#enable_secure_cookies = false

# When enabled, each authenticated access will be verified by comparing the IP address of the initiator of the session with the current request's IP.
#verify_session_ip = false

# Enables automatic detection of forbidden/dangerous constructs in custom recipes.
#custom_recipe_security_analysis_enabled = false

# List of modules that can be imported in custom recipes. The default empty list means all modules are allowed except for banlisted ones.
#custom_recipe_import_allowlist = "[]"

# List of modules that cannot be imported in custom recipes.
#custom_recipe_import_banlist = "['shlex', 'plumbum', 'pexpect', 'envoy', 'commands', 'fabric', 'subprocess', 'os.system', 'system']"

# Regex pattern list of calls which are allowed in custom recipes.
# Empty list means everything (except for banlist) is allowed.
# E.g. if only `os.path.*` is in the allowlist, a custom recipe can only call methods
# from the `os.path` module and the built-in ones.
#
#custom_recipe_method_call_allowlist = "[]"

# Regex pattern list of calls which need to be rejected in custom recipes.
# E.g. if `os.system` is in the banlist, a custom recipe cannot call `os.system()`.
# If `socket.*` is in the banlist, a recipe cannot call any method of the socket module such as
# `socket.socket()` or any `socket.a.b.c()`.
#
#custom_recipe_method_call_banlist = "['os\\.system', 'socket\\..*', 'subprocess.*', 'os.spawn.*']"

# List of regex patterns representing dangerous sequences/constructs
# which could be harmful to the whole system and should be banned from code.
#
#custom_recipe_dangerous_patterns = "['rm -rf', 'rm -fr']"

# If enabled, a user can log in from 2 browsers (scripts) at the same time.
#allow_concurrent_sessions = true

# Extra HTTP headers.
#extra_http_headers = "{}"

# If enabled, the webserver will add a Content-Security-Policy header to all responses. This header helps to prevent cross-site scripting (XSS) attacks by specifying which sources of content are allowed to be loaded by the browser.
#add_csp_header = true

# By default Driverless AI issues cookies with the HTTPOnly and Secure attributes (morsels) enabled. In addition to that, the SameSite attribute is set to 'Lax', as it's the default in modern browsers. This config overrides the default key/value pairs (morsels).
#http_cookie_attributes = "{'samesite': 'Lax'}"

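
A hedged illustration of tightening the recipe security settings above (the extra banlist entries and patterns are illustrative additions, not defaults):

```toml
# Hypothetical example: turn on recipe analysis and extend the banlists.
custom_recipe_security_analysis_enabled = true
custom_recipe_import_banlist = "['shlex', 'subprocess', 'os.system', 'socket']"
custom_recipe_method_call_banlist = "['os\\.system', 'socket\\..*', 'subprocess.*', 'os.spawn.*']"
custom_recipe_dangerous_patterns = "['rm -rf', 'rm -fr', 'mkfs']"
```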
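
The webserver hardening options above can be combined; this hedged sketch adds an assumed HSTS header and stricter cookies:

```toml
# Hypothetical example: extra response header plus stricter cookie morsels.
extra_http_headers = "{'Strict-Transport-Security': 'max-age=63072000'}"
enable_secure_cookies = true
http_cookie_attributes = "{'samesite': 'Strict'}"
```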
# Enable column imputation
#enable_imputation = false

# Adds advanced settings panel to experiment setup, which allows creating
# custom features and more.
#
#enable_advanced_features_experiment = false

# Specifies whether Driverless AI uses H2O Storage or H2O Entity Server for
# a shared entities backend.
# h2o-storage: Uses legacy H2O Storage.
# entity-server: Uses the new HAIC Entity Server.
#
#h2o_storage_mode = "h2o-storage"

# Address of the H2O Storage endpoint. Keep empty to use the local storage only.
#h2o_storage_address = ""

# Whether to enable multi-project support in H2O Storage.
#enable_multi_projects = false

# Whether to use remote projects stored in H2O Storage instead of local projects.
#h2o_storage_projects_enabled = false

# Whether the channel to the storage should be encrypted.
#h2o_storage_tls_enabled = true

# Path to the certification authority certificate that H2O Storage server identity will be checked against.
#h2o_storage_tls_ca_path = ""

# Path to the client certificate to authenticate with the H2O Storage server.
#h2o_storage_tls_cert_path = ""

# Path to the client key to authenticate with the H2O Storage server.
#h2o_storage_tls_key_path = ""

# UUID of a Storage project to use instead of the remote HOME folder.
#h2o_storage_internal_default_project_id = ""

# Deadline for RPC calls with H2O Storage, in seconds. Sets the maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it.
#h2o_storage_rpc_deadline_seconds = 60

# Deadline for RPC bytestream calls with H2O Storage, in seconds. Sets the maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it. This value is used for uploading and downloading artifacts.
#h2o_storage_rpc_bytestream_deadline_seconds = 7200

# The Storage client manages its own access tokens derived from the refresh token received on user login. When this option is set, an access token with the scopes defined here is requested. (space separated list)
#h2o_storage_oauth2_scopes = ""

# Maximum message size of an RPC request, in bytes. Requests larger than this limit will fail.
#h2o_storage_message_size_limit = 1048576000

# Maximum message size of an RPC request, in bytes. Requests larger than this limit will fail.
#h2o_authz_message_size_limit = 1048576000

# If the `h2o_mlops_ui_url` is provided alongside `enable_storage`, DAI is able to redirect the user to the MLOps app upon clicking the Deploy button.
#h2o_mlops_ui_url = ""

# If the `feature_store_ui_url` is provided alongside `enable_file_systems`, DAI is able to redirect the user to the Feature Store app upon clicking the Feature Store button.
#feature_store_ui_url = ""

# H2O Secure Store server endpoint URL
#h2o_secure_store_endpoint_url = ""

# Enable TLS communication between DAI and the H2O Secure Store server
#h2o_secure_store_enable_tls = true

# Path to the client certificate to authenticate with the H2O Secure Store server. This is only effective when h2o_secure_store_enable_tls=True.
#h2o_secure_store_tls_cert_path = ""

# Whether to enable or disable linking datasets into projects.
#h2o_storage_dataset_linking_enabled = true

# Whether to enable or disable linking experiments into projects.
#h2o_storage_experiment_linking_enabled = true

# Keystore file that contains secure config.toml items like passwords, secret keys etc. The keystore is managed by the h2oai.keystore tool.
#keystore_file = ""

# Verbosity of logging
# 0: quiet (CRITICAL, ERROR, WARNING)
# 1: default (CRITICAL, ERROR, WARNING, INFO, DATA)
# 2: verbose (CRITICAL, ERROR, WARNING, INFO, DATA, DEBUG)
# Affects server and all experiments
#log_level = 1

# Whether to collect relevant server logs (h2oai_server.log, dai.log from systemctl or docker, and h2o log)
# Useful for when sending logs to H2O.ai
#collect_server_logs_in_experiment_logs = false

# When set, will migrate all user entities to the defined user upon startup; this is mostly useful during
# instance migration via H2O's AIEM/Steam.
#migrate_all_entities_to_user = ""

# Whether to have all user content isolated into a directory for each user.
# If set to False, all users' content is common to a single directory,
# recipes are shared, and the brain folder for restart/refit is shared.
# If set to True, each user has a separate folder for all user tasks,
# recipes are isolated to each user, and the brain folder for restart/refit is
# only for the specific user.
# Migration from False to True or back to False is allowed for
# all experiment content accessible by GUI or python client,
# all recipes, and starting an experiment with the same settings, restart, or refit.
# However, if switched to per-user mode, the common brain folder is no longer used.
#
#per_user_directories = true

# List of file names to ignore during dataset import. Any files with names listed above will be skipped when
# DAI creates a dataset. Example: a directory contains 3 files: [data_1.csv, data_2.csv, _SUCCESS].
# DAI will only attempt to create a dataset using files data_1.csv and data_2.csv, and the _SUCCESS file will be ignored.
# Default is to ignore _SUCCESS files, which are commonly created when exporting data from Hadoop.
#
#data_import_ignore_file_names = "['_SUCCESS']"

# For data import from a directory (multiple files), allow column types to differ and perform upcast during import.
#data_import_upcast_multi_file = false

# If set to true, will explode columns with list data type when importing parquet files.
#data_import_explode_list_type_columns_in_parquet = false

# List of file types that Driverless AI should attempt to import data as IF no file extension exists in the file name.
# If no file extension is provided, Driverless AI will attempt to import the data starting with the first type
# in the defined list. Default ["parquet", "orc"]
# Example: 'test.csv' (file extension exists) vs 'test' (file extension DOES NOT exist)
# NOTE: see the supported_file_types configuration option for more details on supported file types
#
#files_without_extensions_expected_types = "['parquet', 'orc']"

# do_not_log_list : add configurations that you do not wish to be recorded in logs here. They will still be stored in experiment information so child experiments can behave consistently.
#do_not_log_list = "['cols_to_drop', 'cols_to_drop_sanitized', 'cols_to_group_by', 'cols_to_group_by_sanitized', 'cols_to_force_in', 'cols_to_force_in_sanitized', 'do_not_log_list', 'do_not_store_list', 'pytorch_nlp_pretrained_s3_access_key_id', 'pytorch_nlp_pretrained_s3_secret_access_key', 'auth_openid_end_session_endpoint_url']"

# do_not_store_list : add configurations that you do not wish to be stored at all here. They will not be remembered across experiments, so this is not applicable to data science related items that could be controlled by a user. These items are automatically not logged.
#do_not_store_list = "['h2o_authz_action_prefix', 'h2o_authz_user_prefix', 'h2o_authz_result_cache_ttl_sec', 'pip_install_options', 'local_default_project_key']"

# Memory limit in bytes for datatable to use during parsing of CSV files. -1 for unlimited. 0 for automatic. >0 for constraint.
#datatable_parse_max_memory_bytes = -1

# Delimiter/Separator to use when parsing tabular text files like CSV. Automatic if empty. Must be provided at system start.
#datatable_separator = ""

# Whether to enable ping of system status during DAI data ingestion.
#ping_load_data_file = false

# Period between checking DAI status. Should be small enough to avoid slowing the parent that stops the ping process.
#ping_sleep_period = 0.5

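
A hedged illustration of the import options above (the extra marker file names are illustrative, not defaults):

```toml
# Hypothetical example: skip additional marker files and try csv first for
# extensionless files.
data_import_ignore_file_names = "['_SUCCESS', '_started', '_committed']"
files_without_extensions_expected_types = "['csv', 'parquet', 'orc']"
```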
# Precision of how data is stored
# 'datatable' keeps original datatable storage types (i.e. bool, int, float32, float64) (experimental)
# 'float32' best for speed, 'float64' best for accuracy or very large input values, 'datatable' best for memory
# 'float32' allows numbers up to about +-3E38 with relative error of about 1E-7
# 'float64' allows numbers up to about +-1E308 with relative error of about 1E-16
# Some calculations, like the GLM standardization, can only handle up to sqrt() of these maximums for data values,
# so GLM with 32-bit precision can only handle up to about a value of 1E19 before standardization generates inf values.
# If you see "Best individual has invalid score" you may require higher precision.
#data_precision = "float32"

# Precision of most data transformers (same options and notes as data_precision).
# Useful for higher precision in transformers with numerous operations that can accumulate error.
# Also useful if you want faster performance for transformers but otherwise want data stored in high precision.
#transformer_precision = "float32"

# Whether to change ulimit soft limits up to hard limits (for the DAI server app, which is not a generic user app).
# Prevents resource limit problems in some cases.
# Restricted to no more than limit_nofile and limit_nproc for those resources.
#ulimit_up_to_hard_limit = true

#disable_core_files = false

# Open file limit.
# Below should be consistent with start-dai.sh
#limit_nofile = 131071

# Number of threads limit.
# Below should be consistent with start-dai.sh
#limit_nproc = 16384

# Whether to compute training, validation, and test correlation matrix (table and heatmap pdf) and save to disk
# alpha: WARNING: currently single threaded and quadratically slow for many columns
#compute_correlation = false

# Whether to dump to disk a correlation heatmap
#produce_correlation_heatmap = false

# Value to report high correlation between original features
#high_correlation_value_to_report = 0.95

# If True, experiments aborted by server restart will automatically restart and continue upon user login
#restart_experiments_after_shutdown = false

# When an environment variable is set to a toml value, consider that an override of any toml value. Experiments remember toml values for scoring, and this treats any environment setting as equivalent to putting OVERRIDE_ in front of the environment key.
#any_env_overrides = false

# Include byte order mark (BOM) when writing CSV files. Required to support UTF-8 encoding in Excel.
#datatable_bom_csv = false

# Whether to enable debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files.
#debug_print = false

# Level (0-4) for debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files. 1-2 is normal; 4 would lead to highly excessive debug output and is not recommended in production.
#debug_print_level = 0

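
Given the precision notes above, one hedged combination keeps data compact while reducing accumulated rounding error in transformers:

```toml
# Hypothetical example: float32 storage for speed/memory, float64 transformer
# arithmetic to limit error accumulation across many operations.
data_precision = "float32"
transformer_precision = "float64"
```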
#return_quickly_autodl_testing = false

#return_quickly_autodl_testing2 = false

#return_before_final_model = false

# Whether to check if config.toml keys are valid and fail if not valid
#check_invalid_config_toml_keys = true

#predict_safe_trials = 2

#fit_safe_trials = 2

#allow_no_pid_host = true

#enable_autodl_system_insights = true

#enable_deleting_autodl_system_insights_finished_experiments = true

#main_logger_with_experiment_ids = true

# Reduce memory usage during final ensemble feature engineering (1 uses most memory, larger values use less memory)
#final_munging_memory_reduction_factor = 2

# How much more memory a typical transformer needs than the input data.
# Can be increased if, e.g., final model munging uses too much memory due to parallel operations.
#munging_memory_overhead_factor = 5

#per_transformer_segfault_protection_ga = false

#per_transformer_segfault_protection_final = false

# How often to check resources (disk, memory, cpu) to see if submission needs to stall.
#submit_resource_wait_period = 10

# Stall submission of subprocesses if system CPU usage is higher than this threshold in percent (set to 100 to disable). A reasonable number is 90.0 if activated.
#stall_subprocess_submission_cpu_threshold_pct = 100

# Restrict/Stall submission of subprocesses if DAI fork count (across all experiments) per unit ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated.
#stall_subprocess_submission_dai_fork_threshold_pct = -1.0

# Restrict/Stall submission of subprocesses if experiment fork count (across all experiments) per unit ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated. For small data this leads to an overhead of about 0.1s per task submitted due to checks, so for scoring it can slow things down for tests.
#stall_subprocess_submission_experiment_fork_threshold_pct = -1.0

# Whether to restrict pool workers even if not used, by reducing the number of pool workers available. Good if there is a really huge number of experiments; otherwise, it is best to have all pool workers ready and only stall submission of tasks so the system can be dynamic in a multi-experiment environment.
#restrict_initpool_by_memory = true

# Whether to terminate experiments if the system memory available falls below memory_limit_gb_terminate
#terminate_experiment_if_memory_low = false

# Memory in GB below which the experiment will terminate if terminate_experiment_if_memory_low=true.
#memory_limit_gb_terminate = 5

# A fraction with valid values between 0.1 and 1.0 that determines the disk usage quota for a user; this quota will be checked during dataset imports or experiment runs.
#users_disk_usage_quota = 1.0

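
A hedged sketch combining the resource-pressure guards above (the thresholds follow the "reasonable number" hints in the comments; 8 GB is an illustrative limit):

```toml
# Hypothetical example: stall new subprocesses under CPU pressure and kill
# experiments when available system memory drops below 8 GB.
stall_subprocess_submission_cpu_threshold_pct = 90.0
terminate_experiment_if_memory_low = true
memory_limit_gb_terminate = 8
```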
5786# Path to use for scoring directory path relative to run path
5787#scoring_data_directory = "tmp"
5788
5789#num_models_for_resume_graph = 1000
5790
5791# Internal helper to allow memory of if changed exclusive mode
5792#last_exclusive_mode = ""
5793
5794#mojo_acceptance_test_errors_fatal = true
5795
5796#mojo_acceptance_test_errors_shap_fatal = true
5797
5798#mojo_acceptance_test_orig_shap = true
5799
5800# Which MOJO runtimes should be tested as part of the mini acceptance tests
5801#mojo_acceptance_test_mojo_types = "['C++', 'Java']"
5802
5803# Create MOJO for feature engineering pipeline only (no predictions)
5804#make_mojo_scoring_pipeline_for_features_only = false
5805
5806# Replaces target encoding features by their input columns. Instead of CVTE_Age:Income:Zip, this will create Age:Income:Zip. Only when make_mojo_scoring_pipeline_for_features_only is enabled.
5807#mojo_replace_target_encoding_with_grouped_input_cols = false
5808
5809# Use pipeline to generate transformed features, when making predictions, bypassing the model that usually converts transformed features into predictions.
5810#predictions_as_transform_only = false
5811
5812# If set to true, will make sure only current instance can access its database
5813#enable_single_instance_db_access = true
5814
# DCGM daemon address. DCGM has to be in standalone mode on the remote/local host.
#dcgm_daemon_address = "127.0.0.1"

# Deprecated - maps to enable_pytorch_nlp_transformer and enable_pytorch_nlp_model in 1.10.2+
#enable_pytorch_nlp = "auto"

# How long to wait per GPU for tensorflow/torch to run during system checks.
#check_timeout_per_gpu = 20

# Whether to fail start-up if GPU checks cannot be run successfully
#gpu_exit_if_fails = true

#how_started = ""

#wizard_state = ""

# Whether to enable pushing telemetry events to a configured telemetry receiver in 'telemetry_plugins_dir'.
#enable_telemetry = false

# Directory to scan for telemetry recipes.
#telemetry_plugins_dir = "./telemetry_plugins"

# Whether to enable TLS to communicate to H2O.ai Telemetry Service.
#h2o_telemetry_tls_enabled = false

# Timeout value when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_rpc_deadline_seconds = 60

# H2O.ai Telemetry Service address in H2O.ai Cloud.
#h2o_telemetry_address = ""

# H2O.ai Telemetry Service access token file location.
#h2o_telemetry_service_token_location = ""

# TLS CA path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_ca_path = ""

# TLS certificate path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_cert_path = ""

# TLS key path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_key_path = ""

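# Example (hypothetical address and paths): enable telemetry with TLS towards a
# telemetry receiver; the values must match your own deployment:
#
# enable_telemetry = true
# h2o_telemetry_address = "telemetry.example.com:443"
# h2o_telemetry_service_token_location = "/etc/dai/telemetry_token"
# h2o_telemetry_tls_enabled = true
# h2o_telemetry_tls_ca_path = "/etc/dai/certs/ca.pem"
# h2o_telemetry_tls_cert_path = "/etc/dai/certs/client.pem"
# h2o_telemetry_tls_key_path = "/etc/dai/certs/client.key"
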
# Whether to enable pushing audit events to a configured Audit Trail receiver in 'audit_trail_plugins_dir'.
#enable_audit_trail = false

# Whether to return full stack trace error logs to the audit trail API
#enable_debug_error_audit_trail = false

# Timeout value when communicating to H2O.ai Audit Trail Service.
#h2o_audit_trail_rpc_deadline_seconds = 60

# H2O.ai Audit Trail Service address in H2O.ai Cloud.
#h2o_audit_trail_address = ""

# Path to the Kubernetes service account token for Audit Trail and AuthZ.
#h2o_k8s_service_token_location = "/var/run/secrets/kubernetes.io/serviceaccount/token"

# Enable H2O.ai AuthZ.
#enable_h2o_authz = false

# The endpoint (host:port) of the H2O.ai AuthZ Policy Server in H2O.ai Cloud.
#h2o_authz_policy_server_endpoint = ""

# The endpoint (host:port) of the H2O.ai Workspace server in H2O.ai Cloud.
#h2o_workspace_server_endpoint = ""

# H2O.ai HAIC engine name for the driverless instance that contains the
# workspace ID. Example:
# //engine-manager/workspaces/<workspace name>/daiEngines/<engine name>
#
#haic_engine_name = ""

# Whether to disable downloading logs via both API and UI. Note: this setting does not apply to the admin user.
#disable_download_logs = false

# Enable time series lag-based recipe with lag transformers. If disabled, the same train-test gap and periods are used, but no lag transformers are enabled. With lag transformers disabled, the set of feature transformations is quite limited, so consider setting enable_time_unaware_transformers to true in order to treat the problem more like an IID-type problem.
#time_series_recipe = true

# Whether causal splits are used when time_series_recipe is false, or whether to use the same train-gap-test splits when lag transformers are disabled (default behavior). For the train-test gap, period, etc. to be used when the lag-based recipe is disabled, this must be false.
#time_series_causal_split_recipe = false

# Whether to use lag transformers when using causal-split for validation
# (as occurs when not using time-based lag recipe).
# If there are no time groups columns, lag transformers will still use the time column as the sole time group column.
#
#use_lags_if_causal_recipe = false

# 'diverse': explore a diverse set of models built using various expert settings. Note that it's possible to rerun another such diverse leaderboard on top of the best-performing model(s), which will effectively help you compose these expert settings.
# 'sliding_window': If the forecast horizon is N periods, create a separate model for each of the (gap, horizon) pairs of (0,n), (n,n), (2*n,n), ..., (2*N-1, n) in units of time periods.
# The number of periods to predict per model n is controlled by the expert setting 'time_series_leaderboard_periods_per_model', which defaults to 1.
#time_series_leaderboard_mode = "diverse"

# Fine-control to limit the number of models built in the 'sliding_window' mode. Larger values lead to fewer models.
#time_series_leaderboard_periods_per_model = 1

# Whether to create larger validation splits that are not bound to the length of the forecast horizon.
#time_series_merge_splits = true

# Maximum ratio of training data samples used for validation across splits when larger validation splits are created.
#merge_splits_max_valid_ratio = -1.0

# Whether to keep a fixed-size train timespan across time-based splits.
# That leads to roughly the same amount of train samples in every split.
#
#fixed_size_train_timespan = false

# Provide date or datetime timestamps (in same format as the time column) for custom training and validation splits like this: "tr_start1, tr_end1, va_start1, va_end1, ..., tr_startN, tr_endN, va_startN, va_endN"
#time_series_validation_fold_split_datetime_boundaries = ""

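# Example (hypothetical dates; the format must match the time column): two
# custom train/validation splits, each given as tr_start, tr_end, va_start, va_end:
#
# time_series_validation_fold_split_datetime_boundaries = "2019-01-01, 2019-06-30, 2019-07-01, 2019-09-30, 2019-04-01, 2019-09-30, 2019-10-01, 2019-12-31"
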
# Set fixed number of time-based splits for internal model validation (the actual number of splits allowed can be less and is determined at experiment run-time).
#time_series_validation_splits = -1

# Maximum overlap between two time-based splits. Higher values increase the number of possible splits.
#time_series_splits_max_overlap = 0.5

# Earliest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 or 201004022312 can be converted to a valid date/datetime, but 1000 or 100004 or 10000402 or 10004022313 can not, and neither can 201000 or 20100500 etc.
#min_ymd_timestamp = 19000101

# Latest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 can be converted to a valid date/datetime, but 3000 or 300004 or 30000402 or 30004022313 can not, and neither can 201000 or 20100500 etc.
#max_ymd_timestamp = 21000101

# Maximum number of data samples (randomly selected rows) for date/datetime format detection
#max_rows_datetime_format_detection = 100000

# Manually disables certain datetime formats during data ingest and experiments.
# For example, ['%y'] will avoid parsing columns that contain '00', '01', '02' string values as a date column.
#
#disallowed_datetime_formats = "['%y']"

# Whether to use datetime cache
#use_datetime_cache = true

# Minimum amount of rows required to utilize datetime cache
#datetime_cache_min_rows = 10000

# Automatically generate is-holiday features from date columns
#holiday_features = true

#holiday_country = ""

# List of countries for which to look up the holiday calendar and generate is-holiday features
#holiday_countries = "['UnitedStates', 'UnitedKingdom', 'EuropeanCentralBank', 'Germany', 'Mexico', 'Japan']"

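# Example: generate is-holiday features for two calendars only (names must
# match entries from the supported calendar list, as in the default above):
#
# holiday_countries = "['UnitedStates', 'Germany']"
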
# Max. sample size for automatic determination of time series train/valid split properties, only if time column is selected
#max_time_series_properties_sample_size = 250000

# Maximum number of lag sizes to use for lag-based time-series experiments. These are sampled from if sample_lag_sizes==true, else all are taken (-1 == automatic)
#max_lag_sizes = 30

# Minimum required autocorrelation threshold for a lag to be considered for feature engineering
#min_lag_autocorrelation = 0.1

# How many samples of lag sizes to use for a single time group (single time series signal)
#max_signal_lag_sizes = 100

# If enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size, especially when many columns are unavailable for prediction.
#sample_lag_sizes = false

# If sample_lag_sizes is enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size. Defaults to -1 (auto), in which case it's the same as the feature interaction depth controlled by max_feature_interaction_depth.
#max_sampled_lag_sizes = -1

# Override lags to be used
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 # produce from 1 to 21 in steps of 3
# e.g. 5-21 # produce from 5 to 21
# e.g. 5-21:3 # produce from 5 to 21 in steps of 3
#
#override_lag_sizes = "[]"

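# Example (illustrative): weekly lags of 7, 14 and 21 periods, written either
# as an explicit list or as a range with a step:
#
# override_lag_sizes = "[7, 14, 21]"
# override_lag_sizes = "7-21:7"
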
# Override lags to be used for features that are not known ahead of time
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 # produce from 1 to 21 in steps of 3
# e.g. 5-21 # produce from 5 to 21
# e.g. 5-21:3 # produce from 5 to 21 in steps of 3
#
#override_ufapt_lag_sizes = "[]"

# Override lags to be used for features that are known ahead of time
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 # produce from 1 to 21 in steps of 3
# e.g. 5-21 # produce from 5 to 21
# e.g. 5-21:3 # produce from 5 to 21 in steps of 3
#
#override_non_ufapt_lag_sizes = "[]"

# Smallest considered lag size
#min_lag_size = -1

# Whether to enable feature engineering based on selected time column, e.g. Date~weekday.
#allow_time_column_as_feature = true

# Whether to enable an integer time column to be used as a numeric feature.
# If using the time series recipe, using the time column (numeric time stamps) as input features can lead to a model that
# memorizes the actual time stamps instead of features that generalize to the future.
#
#allow_time_column_as_numeric_feature = false

# Allowed date or date-time transformations.
# Date transformers include: year, quarter, month, week, weekday, day, dayofyear, num.
# Time transformers include: hour, minute, second.
# Features in DAI will show up as get_ + transformation name.
# E.g. num is a direct numeric value representing the floating point value of time,
# which can lead to over-fitting if used on IID problems. So this is turned off by default.
#datetime_funcs = "['year', 'quarter', 'month', 'week', 'weekday', 'day', 'dayofyear', 'hour', 'minute', 'second']"

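# Example (illustrative): restrict engineered date parts to coarse, future-safe
# components only:
#
# datetime_funcs = "['year', 'quarter', 'month']"
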
# Whether to filter out date and date-time transformations that lead to unseen values in the future.
#
#filter_datetime_funcs = true

# Whether to consider time groups columns (tgc) as standalone features.
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that tgc_allow_target_encoding independently controls if time column groups are target encoded.
# Use allowed_coltypes_for_tgc_as_features for control per feature type.
#
#allow_tgc_as_features = true

# Which time groups columns (tgc) feature types to consider as standalone features,
# if the corresponding flag "Consider time groups columns as standalone features" is set to true.
# E.g. all column types would be ["numeric", "categorical", "ohe_categorical", "datetime", "date", "text"]
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that if lag-based time series recipe is disabled, then all tgc are allowed features.
#
#allowed_coltypes_for_tgc_as_features = "['numeric', 'categorical', 'ohe_categorical', 'datetime', 'date', 'text']"

# Whether various transformers (clustering, truncated SVD) are enabled,
# that otherwise would be disabled for time series due to
# potential to overfit by leaking across time within the fit of each fold.
#
#enable_time_unaware_transformers = "auto"

# Whether to group by all time groups columns for creating lag features, instead of sampling from them
#tgc_only_use_all_groups = true

# Whether to allow target encoding of time groups. This can be useful if there are many groups.
# Note that allow_tgc_as_features independently controls if tgc are treated as normal features.
# 'auto': Choose CV by default.
# 'CV': Enable out-of-fold and CV-in-CV (if enabled) encoding
# 'simple': Simple memorized targets per group.
# 'off': Disable.
# Only relevant for time series experiments that have at least one time column group apart from the time column.
#tgc_allow_target_encoding = "auto"

# If allow_tgc_as_features is true or tgc_allow_target_encoding is true, whether to try both possibilities to see which does better during tuning. Safer than forcing one way or the other.
#tgc_allow_features_and_target_encoding_auto_tune = true

# Enable creation of holdout predictions on training data
# using moving windows (useful for MLI, but can be slow)
#time_series_holdout_preds = true

# Max number of splits used for creating final time-series model's holdout/backtesting predictions. With the default value '-1' the same amount of splits as during model validation will be used. Use 'time_series_validation_splits' to control amount of time-based splits used for model validation.
#time_series_max_holdout_splits = -1

#single_model_vs_cv_score_reldiff = 0.05

#single_model_vs_cv_score_reldiff2 = 0.0

# Whether to blend ensembles in link space, so that the inverse link function can be applied to get predictions after blending.
# This allows Shapley values to sum up to the final predictions after applying the inverse link function:
# preds = inverse_link(blend(base learner predictions in link space)) = inverse_link(sum(blend(base learner Shapley values in link space))) = inverse_link(sum(ensemble Shapley values in link space)).
# For binary classification, this is only supported if inverse_link = logistic = 1/(1+exp(-x)).
# For multiclass classification, this is only supported if inverse_link = softmax = exp(x)/sum(exp(x)).
# For regression, this behavior happens naturally if all base learners use the identity link function; otherwise it is not possible.
#blend_in_link_space = true

# Whether to speed up time-series holdout predictions for back-testing on training data (used for MLI and metrics calculation). Can be slightly less accurate.
#mli_ts_fast_approx = false

# Whether to speed up Shapley values for time-series holdout predictions for back-testing on training data (used for MLI). Can be slightly less accurate.
#mli_ts_fast_approx_contribs = true

# Enable creation of Shapley values for holdout predictions on training data
# using moving windows (useful for MLI, but can be slow), at the time of the experiment. If disabled, MLI will
# generate Shapley values on demand.
#mli_ts_holdout_contribs = true

# Values of 5 or more can improve generalization by more aggressive dropping of least important features. Set to 1 to disable.
#time_series_min_interpretability = 5

# Dropout mode for lag features in order to achieve an equal n.a.-ratio between train and validation/test. The independent mode performs a simple feature-wise dropout, whereas the dependent one takes lag-size dependencies per sample/row into account.
#lags_dropout = "dependent"

# Normalized probability of choosing to lag non-targets relative to targets (-1.0 = auto)
#prob_lag_non_targets = -1.0

# Method to create rolling test set predictions, if the forecast horizon is shorter than the time span of the test set. One can choose between test time augmentation (TTA) and a successive refitting of the final pipeline.
#rolling_test_method = "tta"

#rolling_test_method_max_splits = 1000

# Apply TTA in one pass instead of using rolling windows for internal validation split predictions. Note: Setting this to 'False' leads to significantly longer runtimes.
#fast_tta_internal = true

# Apply TTA in one pass instead of using rolling windows for test set predictions. This only applies if the forecast horizon is shorter than the time span of the test set. Note: Setting this to 'False' leads to significantly longer runtimes.
#fast_tta_test = true

# Probability for new Lags/EWMA gene to use default lags (determined by frequency/gap/horizon, independent of data) (-1.0 = auto)
#prob_default_lags = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on interactions (-1.0 = auto)
#prob_lagsinteraction = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on aggregations (-1.0 = auto)
#prob_lagsaggregates = -1.0

# Time series centering or detrending transformation. The free parameter(s) of the trend model are fitted and the trend is removed from the target signal, and the pipeline is fitted on the residuals. Predictions are made by adding back the trend. Note: Can be cascaded with 'Time series lag-based target transformation', but is mutually exclusive with regular target transformations. The robust centering or linear detrending variants use RANSAC to achieve a higher tolerance w.r.t. outliers. The Epidemic target transformer uses the SEIR model: https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model
#ts_target_trafo = "none"

# Dictionary to control Epidemic SEIRD model for de-trending of target per time series group.
# Note: The target column must correspond to I(t), the infected cases as a function of time.
# For each training split and time series group, the SEIRD model is fitted to the target signal (by optimizing
# the free parameters shown below for each time series group).
# Then, the SEIRD model's value is subtracted from the training response, and the residuals are passed to
# the feature engineering and modeling pipeline. For predictions, the SEIRD model's value is added to the residual
# predictions from the pipeline, for each time series group.
# Note: Careful selection of the bounds for the free parameters N, beta, gamma, delta, alpha, rho, lockdown,
# beta_decay, beta_decay_rate is extremely important for good results.
# - S(t) : susceptible/healthy/not immune
# - E(t) : exposed/not yet infectious
# - I(t) : infectious/active <= target column
# - R(t) : recovered/immune
# - D(t) : deceased
# ### Free parameters:
# - N : total population, N=S+E+I+R+D
# - beta : rate of exposure (S -> E)
# - gamma : rate of recovering (I -> R)
# - delta : incubation period
# - alpha : fatality rate
# - rho : rate at which people die
# - lockdown : day of lockdown (-1 => no lockdown)
# - beta_decay : beta decay due to lockdown
# - beta_decay_rate : speed of beta decay
# ### Dynamics:
# if lockdown >= 0:
#   beta_min = beta * (1 - beta_decay)
#   beta = (beta - beta_min) / (1 + np.exp(-beta_decay_rate * (-t + lockdown))) + beta_min
# dSdt = -beta * S * I / N
# dEdt = beta * S * I / N - delta * E
# dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
# dRdt = (1 - alpha) * gamma * I
# dDdt = alpha * rho * I
# Provide lower/upper bounds for each parameter you want to control the bounds for. Valid parameters are:
# N_min, N_max, beta_min, beta_max, gamma_min, gamma_max, delta_min, delta_max, alpha_min, alpha_max,
# rho_min, rho_max, lockdown_min, lockdown_max, beta_decay_min, beta_decay_max,
# beta_decay_rate_min, beta_decay_rate_max. You can change any subset of parameters, e.g.,
# ts_target_trafo_epidemic_params_dict="{'N_min': 1000, 'beta_max': 0.2}"
# To get SEIR model (in cases where death rates are very low, can speed up calculations significantly):
# set alpha_min=alpha_max=rho_min=rho_max=beta_decay_rate_min=beta_decay_rate_max=0, lockdown_min=lockdown_max=-1.
#
#ts_target_trafo_epidemic_params_dict = "{}"

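# Example (taken from the description above): bound the total population and
# the exposure rate, leaving all other SEIRD parameters at their defaults:
#
# ts_target_trafo_epidemic_params_dict = "{'N_min': 1000, 'beta_max': 0.2}"
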
#ts_target_trafo_epidemic_target = "I"

# Time series lag-based target transformation. One can choose between difference and ratio of the current and a lagged target. The corresponding lag size can be set via 'Target transformation lag size'. Note: Can be cascaded with 'Time series target transformation', but is mutually exclusive with regular target transformations.
#ts_lag_target_trafo = "none"

# Lag size used for time series target transformation. See setting 'Time series lag-based target transformation'. -1 => smallest valid value = prediction periods + gap (automatically adjusted by DAI if too small).
#ts_target_trafo_lag_size = -1

# Maximum number of columns sent from the UI to the backend in order to auto-detect TGC
#tgc_via_ui_max_ncols = 10

# Maximum frequency of duplicated timestamps for TGC detection
#tgc_dup_tolerance = 0.01

# Timeout in seconds for time-series properties detection in UI.
#timeseries_split_suggestion_timeout = 30.0

# Weight time-series model scores by the split number raised to this power.
# E.g., use 1.0 to weight the split closest to the horizon by a factor equal to
# the number of splits more than the oldest split.
# Applies to tuning models and final back-testing models.
# If 0.0 (default) is used, the median function is used, else the mean is used.
#
#timeseries_recency_weight_power = 0.0

# Whether to force date column format conversion during prediction. The date format
# is inferred during training, and prediction data is assumed to have the same format.
# Enabling this setting forces DAI to do the format conversion silently.
# For instance, if the expected format is '%m/%d/%Y' but a prediction comes with '2000-01-01', then
# the conversion is done by converting the date representation into 'yyyy-mm-dd' in an ad hoc fashion.
# Note: Even with forced conversion, this normally won't affect the embedding information of the date column.
#
#force_on_convert_incorrect_date_format = false

# Every *.toml file in this directory is read and processed the same way as the main config file.
#user_config_directory = ""

# IP address for the procsy process.
#procsy_ip = "127.0.0.1"

# Port for the procsy process.
#procsy_port = 12347

# Request timeout (in seconds) for the procsy process.
#procsy_timeout = 3600

# IP address for use by MLI.
#h2o_ip = "127.0.0.1"

# Port of H2O instance for use by MLI. Each H2O node has an internal port (web port+1, so by default port 12349) for internal node-to-node communication
#h2o_port = 12348

# IP address for the Driverless AI HTTP server.
#ip = "127.0.0.1"

# Port for the Driverless AI HTTP server.
#port = 12345

# A list of two integers indicating the port range to search over, and dynamically find an open port to bind to (e.g., [11111,20000]).
#port_range = "[]"

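# Example (illustrative): bind the HTTP server to all interfaces and let DAI
# search a range for an open port:
#
# ip = "0.0.0.0"
# port_range = "[12345, 12400]"
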
# Strict version check for DAI
#strict_version_check = true

# File upload limit (default 100GB)
#max_file_upload_size = 104857600000

# Data directory. All application data and files related to datasets and
# experiments are stored in this directory.
#data_directory = "./tmp"

# Sets a custom path for the master.db. Use this to store the database outside the data directory,
# which can improve performance if the data directory is on a slow drive.
#db_path = ""

# Datasets directory. If set, all datasets will be read from and written to this
# location; typically this is configured on an external file system to allow for
# more granular control over just the datasets volume.
# If empty, defaults to data_directory.
#datasets_directory = ""

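# Example (hypothetical paths): keep application data locally, datasets on an
# external mount, and the database on a fast drive (assuming db_path takes the
# full file path):
#
# data_directory = "/opt/dai/tmp"
# datasets_directory = "/mnt/datasets"
# db_path = "/fastdisk/dai/master.db"
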
# Path to the directory where the logs of HDFS, Hive, JDBC, and KDB+ data connectors will be saved.
#data_connectors_logs_directory = "./tmp"

# Subdirectory within data_directory to store server logs.
#server_logs_sub_directory = "server_logs"

# Subdirectory within data_directory to store pid files for controlling kill/stop of DAI servers.
#pid_sub_directory = "pids"

# Path to the directory which will be used to save MapR tickets when MapR multi-user mode is enabled.
# This is applicable only when enable_mapr_multi_user_mode is set to true.
#
#mapr_tickets_directory = "./tmp/mapr-tickets"

# MapR ticket duration in minutes. If set to -1, the default value is used
# (not specified in the maprlogin command); otherwise the specified configuration
# value is used, but no less than one day.
#
#mapr_tickets_duration_minutes = -1

# Whether to delete, at server start, all temporary uploaded files left over from failed uploads.
#
#remove_uploads_temp_files_server_start = true

# Whether to run through the entire data directory and remove all temporary files.
# Can lead to slow start-up time if there is a large number (much greater than 100) of experiments.
#
#remove_temp_files_server_start = false

# Whether to delete temporary files after an experiment is aborted/cancelled.
#
#remove_temp_files_aborted_experiments = true

# Whether to opt in to usage statistics and bug reporting
#usage_stats_opt_in = true

# Configurations for an HDFS data source
# Path of the HDFS core-site.xml
# core_site_xml_path is deprecated, please use hdfs_config_path
#core_site_xml_path = ""

# (Required) HDFS config folder path. Can contain multiple config files.
#hdfs_config_path = ""

# Path of the principal keytab file. Required when hdfs_auth_type='principal'.
# key_tab_path is deprecated, please use hdfs_keytab_path
#
#key_tab_path = ""

# Path of the principal keytab file. Required when hdfs_auth_type='principal'.
#
#hdfs_keytab_path = ""

# Whether to delete preview cache on server exit
#preview_cache_upon_server_exit = true

# When this setting is enabled, any user can see all tasks running in the system, including their owner and an identification key. If this setting is turned off, users can see only their own tasks.
#all_tasks_visible_to_users = true

# When enabled, the server exposes the Health API at /apis/health/v1, which provides a system overview and utilization statistics
#enable_health_api = true

#notification_url = "https://s3.amazonaws.com/ai.h2o.notifications/dai_notifications_prod.json"

# When enabled, the notification scripts will inherit
# the parent process's (Driverless AI) environment variables.
#
#listeners_inherit_env_variables = false

# Notification scripts
# - each variable points to the location of a script that is executed at the given event in the experiment lifecycle
# - the script should have the executable flag enabled
# - use of an absolute path is suggested
# The on experiment start notification script location
#listeners_experiment_start = ""

# The on experiment finished notification script location
#listeners_experiment_done = ""

# The on experiment import notification script location
#listeners_experiment_import_done = ""

# Notification script triggered when building of the MOJO pipeline for an experiment is
# finished. The value should be an absolute path to an executable script.
#
#listeners_mojo_done = ""

# Notification script triggered when rendering of AutoDoc for an experiment is
# finished. The value should be an absolute path to an executable script.
#
#listeners_autodoc_done = ""

# Notification script triggered when building of the Python scoring pipeline
# for an experiment is finished.
# The value should be an absolute path to an executable script.
#
#listeners_scoring_pipeline_done = ""

# Notification script triggered when the experiment and all its artifacts selected
# at the beginning of the experiment are finished building.
# The value should be an absolute path to an executable script.
#
#listeners_experiment_artifacts_done = ""

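# Example (hypothetical script locations; scripts must be executable and given
# as absolute paths):
#
# listeners_experiment_start = "/opt/dai/scripts/on_experiment_start.sh"
# listeners_experiment_done = "/opt/dai/scripts/on_experiment_done.sh"
# listeners_mojo_done = "/opt/dai/scripts/on_mojo_done.sh"
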
# Whether to run quick performance benchmark at start of application
#enable_quick_benchmark = true

# Whether to run extended performance benchmark at start of application
#enable_extended_benchmark = false

# Scaling factor for number of rows for extended performance benchmark. For rigorous performance benchmarking,
# values of 1 or larger are recommended.
#extended_benchmark_scale_num_rows = 0.1

# Number of columns for extended performance benchmark.
#extended_benchmark_num_cols = 20

# Seconds to allow for testing memory bandwidth by generating numpy frames
#benchmark_memory_timeout = 2

# Maximum portion of total virtual memory to use for the numpy memory benchmark
#benchmark_memory_vm_fraction = 0.25

# Maximum number of columns to use for numpy memory benchmark
#benchmark_memory_max_cols = 1500

# Whether to run quick startup checks at start of application
#enable_startup_checks = true

# Application ID override, which should uniquely identify the instance
#application_id = ""

# After how many seconds to abort MLI recipe execution plan or recipe compatibility checks.
# This blocks the main server from all activities, so a long timeout is not desired, especially in case of hanging processes,
# while a short timeout can too often lead to abortions on a busy system.
#
#main_server_fork_timeout = 10.0

# After how many days the audit log records are removed.
# Set to 0 to disable removal of old records.
#
#audit_log_retention_period = 5

# Time to wait (in minutes) after performing a cleanup of temporary files for in-browser dataset uploads.
#
#dataset_tmp_upload_file_retention_time_min = 5
