Running an experiment
This page describes in detail how to configure and run a Katib experiment. The experiment can perform hyperparameter tuning or a neural architecture search (NAS) (Alpha), depending on the configuration settings.
For an overview of the concepts involved, read the introduction to Katib.
Packaging your training code in a container image
Katib and Kubeflow are Kubernetes-based systems. To use Katib, you must package your training code in a Docker container image and make the image available in a registry. See the Docker documentation and the Kubernetes documentation.
Configuring the experiment
To create a hyperparameter tuning or NAS experiment in Katib, you define the experiment in a YAML configuration file. The YAML file defines the range of potential values (the search space) for the parameters that you want to optimize, the objective metric to use when determining optimal values, the search algorithm to use during optimization, and other configurations.
See the YAML file for the random algorithm example.
The list below describes the fields in the YAML file for an experiment. The Katib UI offers the corresponding fields. You can choose to configure and run the experiment from the UI or from the command line.
Configuration spec
These are the fields in the experiment configuration spec:
- `parameters`: The range of the hyperparameters or other parameters that you want to tune for your ML model. The parameters define the search space, also known as the feasible set or the solution space. In this section of the spec, you define the name and the distribution (discrete or continuous) of every hyperparameter that you need to search. For example, you may provide a minimum and maximum value or a list of allowed values for each hyperparameter. Katib generates hyperparameter combinations in the range based on the hyperparameter tuning algorithm that you specify. See the `ParameterSpec` type.
- `objective`: The metric that you want to optimize. The objective metric is also called the target variable. A common metric is the model’s accuracy in the validation pass of the training job (`validation-accuracy`). You also specify whether you want Katib to maximize or minimize the metric. Katib uses the `objectiveMetricName` and `additionalMetricNames` to monitor how the hyperparameters work with the model. Katib records the value of the best `objectiveMetricName` metric (maximized or minimized based on `type`) and the corresponding hyperparameter set in `Experiment.status`. If the `objectiveMetricName` metric for a set of hyperparameters reaches the `goal`, Katib stops trying more hyperparameter combinations. See the `ObjectiveSpec` type.
- `algorithm`: The search algorithm that you want Katib to use to find the best hyperparameters or neural architecture configuration. Examples include random search, grid search, Bayesian optimization, and more. See the search algorithm details below.
- `trialTemplate`: The template that defines the trial. You must package your ML training code into a Docker image, as described above. You must configure the model’s hyperparameters either as command-line arguments or as environment variables, so that Katib can automatically set the values in each trial.
  You can use one of the following job types to train your model:
  - Kubernetes Job (does not support distributed execution).
  - Kubeflow TFJob (supports distributed execution).
  - Kubeflow PyTorchJob (supports distributed execution).
  See the `TrialTemplate` type. The template uses the Go template format. You can define the job in raw string format or you can use a ConfigMap.
- `parallelTrialCount`: The maximum number of hyperparameter sets that Katib should train in parallel.
- `maxTrialCount`: The maximum number of trials to run. This is equivalent to the number of hyperparameter sets that Katib should generate to test the model.
- `maxFailedTrialCount`: The maximum number of failed trials before Katib should stop the experiment. This is equivalent to the number of failed hyperparameter sets that Katib should test. If the number of failed trials exceeds `maxFailedTrialCount`, Katib stops the experiment with a status of `Failed`.
- `metricsCollectorSpec`: A specification of how to collect the metrics from each trial, such as the accuracy and loss metrics. See the details of the metrics collector below.
- `nasConfig`: The configuration for a neural architecture search (NAS). Note: NAS is currently in Alpha with limited support. You can specify the configurations of the neural network design that you want to optimize, including the number of layers in the network, the types of operations, and more. See the `NasConfig` type. As an example, see the YAML file for the nasjob-example-RL. The example aims to show all the possible operations. Due to the large search space, the example is not likely to generate a good result.
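To make the structure concrete, here is an abbreviated sketch of an experiment spec, loosely modeled on the random algorithm example linked above. The image name, script path, and parameter values are placeholders; check the field layout against the example YAML file for your Katib version.

```yaml
apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  namespace: kubeflow
  name: random-example
spec:
  objective:
    type: maximize                      # maximize or minimize the objective metric
    goal: 0.99                          # stop the experiment when this value is reached
    objectiveMetricName: Validation-accuracy
    additionalMetricNames:
      - accuracy
  algorithm:
    algorithmName: random               # search algorithm; see the algorithm details below
  parallelTrialCount: 3                 # trials trained in parallel
  maxTrialCount: 12                     # total hyperparameter sets to generate
  maxFailedTrialCount: 3                # failed trials allowed before the experiment fails
  parameters:                           # the search space
    - name: --lr
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.03"
    - name: --num-layers
      parameterType: int
      feasibleSpace:
        min: "2"
        max: "5"
  trialTemplate:                        # Go template for the job that runs each trial
    goTemplate:
      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{.Trial}}
          namespace: {{.NameSpace}}
        spec:
          template:
            spec:
              containers:
              - name: {{.Trial}}
                image: <registry>/<your-training-image>   # placeholder: your packaged training code
                command:
                - "python"
                - "/opt/train.py"                         # placeholder script path
                {{- with .HyperParameters}}
                {{- range .}}
                - "{{.Name}}={{.Value}}"
                {{- end}}
                {{- end}}
              restartPolicy: Never
```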
Background information about Katib’s `Experiment` type: in Kubernetes terminology, Katib’s `Experiment` type is a custom resource (CR). The YAML file that you create for your experiment is the CR specification.
Search algorithms in detail
Katib currently supports several search algorithms. See the AlgorithmSpec type.
Here’s a list of the search algorithms available in Katib. The links lead to descriptions on this page:
- Grid search
- Random search
- Bayesian optimization
- HYPERBAND
- Hyperopt TPE
- NAS based on reinforcement learning
More algorithms are under development. You can add an algorithm to Katib yourself. See the guide to adding a new algorithm and the developer guide.
Grid search
The algorithm name in Katib is `grid`.
Grid sampling is useful when all variables are discrete (as opposed to continuous) and the number of possibilities is low. A grid search performs an exhaustive combinatorial search over all possibilities, making the search process extremely long even for medium-sized problems.
Katib uses the Chocolate optimization framework for its grid search.
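As a brief illustration of a grid-friendly search space, the sketch below keeps every parameter discrete; the parameter names and values are made up for this example and are not taken from a shipped Katib example.

```yaml
algorithm:
  algorithmName: grid
parameters:
  - name: --optimizer            # hypothetical parameter: a small categorical set
    parameterType: categorical
    feasibleSpace:
      list:
        - sgd
        - adam
  - name: --num-layers           # hypothetical parameter: a small integer range
    parameterType: int
    feasibleSpace:
      min: "2"
      max: "4"
```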
Random search
The algorithm name in Katib is `random`.
Random sampling is an alternative to grid search and is useful when the number of discrete variables to optimize is large and the time required for each evaluation is long. When all parameters are discrete, random search performs sampling without replacement. Random search is therefore the best algorithm to use when combinatorial exploration is not possible. If the number of continuous variables is high, you should use quasi random sampling instead.
Katib uses the hyperopt optimization framework for its random search.
Katib supports the following algorithm settings:
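As an illustration of how settings are passed, the snippet below seeds the random search. Settings are name/value pairs in the `algorithmSettings` list; the `random_state` name is an assumption based on the hyperopt backend rather than a value taken from this page, so check it against the `AlgorithmSpec` documentation.

```yaml
algorithm:
  algorithmName: random
  algorithmSettings:
    - name: random_state     # assumed setting name: seed for reproducible sampling
      value: "10"
```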
Bayesian optimization
The algorithm name in Katib is `skopt-bayesian-optimization`.
The Bayesian optimization method uses Gaussian process regression to model the search space. This technique calculates an estimate of the loss function and the uncertainty of that estimate at every point in the search space. The method is suitable when the number of dimensions in the search space is low. Since the method models both the expected loss and the uncertainty, the search algorithm converges in a few steps, making it a good choice when the time to complete the evaluation of a parameter configuration is long.
Katib uses the Scikit-Optimize library for its Bayesian search. Scikit-Optimize is also known as `skopt`.
Katib supports the following algorithm settings:
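As a similar illustration, the snippet below names the surrogate model and a seed for the Bayesian search. The `base_estimator` and `random_state` setting names are assumptions based on the Scikit-Optimize backend, so check them against the `AlgorithmSpec` documentation before use.

```yaml
algorithm:
  algorithmName: skopt-bayesian-optimization
  algorithmSettings:
    - name: base_estimator   # assumed setting name: surrogate model, e.g. "GP" for Gaussian process
      value: "GP"
    - name: random_state     # assumed setting name: seed for reproducibility
      value: "10"
```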
HYPERBAND
The algorithm name in Katib is `hyperband`.
Katib supports the HYPERBAND optimization framework. Instead of using Bayesian optimization to select configurations, HYPERBAND focuses on early stopping as a strategy for optimizing resource allocation and thus for maximizing the number of configurations that it can evaluate. HYPERBAND also focuses on the speed of the search.
Hyperopt TPE
The algorithm name in Katib is `tpe`.
Katib uses the Tree of Parzen Estimators (TPE) algorithm in hyperopt. This method provides a forward and reverse gradient-based search.
NAS using reinforcement learning
Alpha version
The algorithm name in Katib is `nasrl`.
For more information, see:
- Information in the Katib repository on NAS with reinforcement learning.
- The description of the `nasConfig` field in the configuration file earlier on this page.
Metrics collector
In the `metricsCollectorSpec` section of the YAML configuration file, you can define how Katib should collect the metrics from each trial, such as the accuracy and loss metrics.
Your training code can record the metrics into `stdout` or into arbitrary output
or into arbitrary output
files. Katib collects the metrics using a sidecar container. A sidecar is
a utility container that supports the main container in the Kubernetes Pod.
To define the metrics collector for your experiment:
1. Specify the collector type in the `collector` field. Katib’s metrics collector supports the following collector types:

   - `StdOut`: Katib collects the metrics from the operating system’s default output location (standard output).
   - `File`: Katib collects the metrics from an arbitrary file, which you specify in the `source` field.
   - `TensorFlowEvent`: Katib collects the metrics from a directory path containing a tf.Event. You should specify the path in the `source` field.
   - `Custom`: Specify this value if you need to use a custom way to collect metrics. You must define your custom metrics collector container in the `collector.customCollector` field.
   - `None`: Specify this value if you don’t need to use Katib’s metrics collector. For example, your training code may handle the persistent storage of its own metrics.

2. Specify the metrics output location in the `source` field. See the `MetricsCollectorSpec` type for default values.

3. Write code in your training container to print metrics in the format specified in the `metricsCollectorSpec.source.filter.metricsFormat` field. The default format is `([\w|-]+)\s*=\s*((-?\d+)(\.\d+)?)`. Each element is a regular expression with two subexpressions. The first matched expression is taken as the metric name. The second matched expression is taken as the metric value.

   For example, using the default metrics format, if the name of your objective metric is `loss` and the metrics are `recall` and `precision`, your training code should print the following output:

       epoch 1: loss=0.3 recall=0.5 precision=0.4
       epoch 2: loss=0.2 recall=0.55 precision=0.5
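Putting the collector and source settings together, here is a hedged sketch of a `metricsCollectorSpec` that uses the `File` collector. The log path is illustrative, and the exact field names should be confirmed against the `MetricsCollectorSpec` type for your Katib version.

```yaml
metricsCollectorSpec:
  collector:
    kind: File                                # one of StdOut, File, TensorFlowEvent, Custom, None
  source:
    fileSystemPath:
      path: "/var/log/katib/metrics.log"      # illustrative path written by the training code
      kind: File
    filter:
      metricsFormat:
        - '([\w|-]+)\s*=\s*((-?\d+)(\.\d+)?)' # the default name=value pattern
```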
Running the experiment
You can run a Katib experiment from the command line or from the Katib UI.
Running the experiment from the command line
You can use kubectl to launch an experiment from the command line:
kubectl apply -f <your-path/your-experiment-config.yaml>
For example, run the following command to launch an experiment using the random algorithm example:
kubectl apply -f https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1alpha3/random-example.yaml
Check the experiment status:
kubectl -n kubeflow describe experiment <your-experiment-name>
For example, to check the status of the random algorithm example:
kubectl -n kubeflow describe experiment random-example
Running the experiment from the Katib UI
Instead of using the command line, you can submit an experiment from the Katib UI. The following steps assume you want to run a hyperparameter tuning experiment. If you want to run a neural architecture search, access the NAS section of the UI (instead of the HP section) and then follow a similar sequence of steps.
To run a hyperparameter tuning experiment from the Katib UI:
1. Follow the getting-started guide to access the Katib UI.

2. Click Hyperparameter Tuning on the Katib home page.

3. Open the Katib menu panel on the left, then open the HP section and click Submit:

4. Click on the right-hand panel to close the menu panel. You should see tabs offering you the following options:

   - YAML file: Choose this option to supply an entire YAML file containing the configuration for the experiment.

     <img src="/docs/images/katib-deploy-yaml.png" alt="UI tab to paste a YAML configuration file" class="mt-3 mb-3 border border-info rounded">

   - Parameters: Choose this option to enter the configuration values into a form.

     <img src="/docs/images/katib-deploy-form.png" alt="UI form to deploy a Katib experiment" class="mt-3 mb-3 border border-info rounded">
View the results of the experiment in the Katib UI:

1. Open the Katib menu panel on the left, then open the HP section and click Monitor:

2. Click on the right-hand panel to close the menu panel. You should see the list of experiments:

3. Click the name of your experiment. For example, click random-example.

4. You should see a graph showing the level of accuracy for various combinations of the hyperparameter values. For example, the graph below shows learning rate, number of layers, and optimizer:

5. Below the graph is a list of trials that ran within the experiment. Click a trial name to see the trial data.
Next steps
- See how to run the random algorithm and other Katib examples in the getting-started guide.
- For an overview of the concepts involved in hyperparameter tuning and neural architecture search, read the introduction to Katib.