Version: v1.1.1

Batch scoring

This page describes how to use the H2O MLOps Python client for batch scoring.

For more information about batch scoring and the supported source and sink types, see Batch scoring.

Prerequisites

Before you begin:

  1. Connect to H2O MLOps. For instructions, see Connect to H2O MLOps.
  2. Create a workspace. For instructions, see Create a workspace.
  3. Register a model. For instructions, see Manage models.
  4. (Optional) If using Feature Store as a source or sink, ensure Feature Store is configured in your H2O MLOps deployment. For details, see Batch scoring.
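The code examples on this page also read AWS credentials from a `credentials` mapping. The mapping below is a hypothetical placeholder (the name `credentials` and the bracketed values are illustrative, not part of the H2O MLOps API); its key names match the shape of AWS temporary credentials, for example those returned by AWS STS:

```python
# Hypothetical placeholder: replace the values with real AWS credentials,
# for example temporary credentials obtained from AWS STS.
credentials = {
    "AccessKeyId": "<aws-access-key-id>",
    "SecretAccessKey": "<aws-secret-access-key>",
    "SessionToken": "<aws-session-token>",
}
```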

Configure the input source

To list available source connectors, run:

mlops.batch_connectors.source_specs.list()

Use the following code to configure the input source:

source = h2o_mlops.options.BatchSourceOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": credentials['AccessKeyId'],
        "secretAccessKey": credentials['SecretAccessKey'],
        "sessionToken": credentials['SessionToken'],
    },
    mime_type=h2o_mlops.types.MimeType.CSV,
    location="s3://<bucket-name>/<path-to-input-file>.csv",
)
Note: Public S3 buckets are also supported as an input source. To read from a public S3 bucket, leave the access key and secret key fields empty. Only the input source supports public S3 buckets.
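As a sketch of the public-bucket case, the same `BatchSourceOptions` call can be made with empty credential fields (the bucket path is a placeholder; the option and field names are taken from the example above):

```python
# Sketch: reading from a public S3 bucket, so no credentials are supplied.
public_source = h2o_mlops.options.BatchSourceOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": "",
        "secretAccessKey": "",
        "sessionToken": "",
    },
    mime_type=h2o_mlops.types.MimeType.CSV,
    location="s3://<public-bucket-name>/<path-to-input-file>.csv",
)
```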

Configure the output location

To list available sink connectors, run:

mlops.batch_connectors.sink_specs.list()

This command returns schema details, supported paths, and MIME types.

Set up the output location where the batch scoring results will be stored:

from datetime import datetime

output_location = "s3://<bucket-name>/<path-to-output-directory>/" + datetime.now().strftime("%Y%m%d-%H%M%S")
sink = h2o_mlops.options.BatchSinkOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": credentials['AccessKeyId'],
        "secretAccessKey": credentials['SecretAccessKey'],
        "sessionToken": credentials['SessionToken'],
    },
    mime_type=h2o_mlops.types.MimeType.JSONL,
    location=output_location,
)
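The timestamped suffix above gives each run its own output path, so repeated jobs don't overwrite earlier results. A minimal, standalone illustration of the same pattern (the helper name and bucket path are illustrative):

```python
from datetime import datetime

def timestamped_location(prefix: str) -> str:
    # Append a second-resolution timestamp so each run writes to a fresh path.
    return prefix + datetime.now().strftime("%Y%m%d-%H%M%S")

loc = timestamped_location("s3://my-bucket/batch-output/")
print(loc)  # e.g. s3://my-bucket/batch-output/20240101-120000
```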

Create a batch scoring job

First, retrieve the scoring runtime for the model:

scoring_runtime = model.experiment().scoring_runtimes[0]

Create the batch scoring job using the source and sink variables defined in the previous sections:

job = workspace.batch_scoring_jobs.create(
    source=source,
    sink=sink,
    model=model,
    scoring_runtime=scoring_runtime,
    kubernetes_options=h2o_mlops.options.BatchKubernetesOptions(
        replicas=2,
        min_replicas=1,
    ),
    mini_batch_size=100,  # Number of rows sent per request during batch processing
    name="DEMO JOB",
)
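`mini_batch_size` controls how many rows go into each scoring request, so a run issues roughly ceil(total_rows / mini_batch_size) requests. A quick back-of-the-envelope helper (the function name is illustrative, not part of the client API):

```python
import math

def request_count(total_rows: int, mini_batch_size: int) -> int:
    # Each request carries up to `mini_batch_size` rows; the last may be smaller.
    return math.ceil(total_rows / mini_batch_size)

print(request_count(1050, 100))  # 11: ten full batches plus one of 50 rows
```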

Retrieve the job ID:

job.uid

Include input features or an ID field in the output

When you create a batch scoring job, you can include input data in the output for easier comparison.

You can use only one of these options:

  1. Set model_request_parameters.output_fields_type=h2o_mlops.types.OutputFieldsType.INCLUDE_ALL_INPUT_FEATURES to include all input features.

Example:

job = workspace.batch_scoring_jobs.create(
    source=source,
    sink=sink,
    model=model,
    scoring_runtime=scoring_runtime,
    kubernetes_options=h2o_mlops.options.BatchKubernetesOptions(
        replicas=2,
        min_replicas=1,
    ),
    mini_batch_size=100,  # Number of rows sent per request during batch processing
    model_request_parameters=h2o_mlops.options.ModelRequestParameters(
        output_fields_type=h2o_mlops.types.OutputFieldsType.INCLUDE_ALL_INPUT_FEATURES,
    ),
    name="DEMO JOB",
)
  2. Set model_request_parameters.id_field and model_request_parameters.output_fields_type=h2o_mlops.types.OutputFieldsType.INCLUDE_ID to include only one identifier column.

Example:

job = workspace.batch_scoring_jobs.create(
    source=source,
    sink=sink,
    model=model,
    scoring_runtime=scoring_runtime,
    kubernetes_options=h2o_mlops.options.BatchKubernetesOptions(
        replicas=2,
        min_replicas=1,
    ),
    mini_batch_size=100,  # Number of rows sent per request during batch processing
    model_request_parameters=h2o_mlops.options.ModelRequestParameters(
        output_fields_type=h2o_mlops.types.OutputFieldsType.INCLUDE_ID,
        id_field="id_field_name",  # Field name in the model schema, for example, "age"
    ),
    name="DEMO JOB",
)

Wait for job completion

While the following command runs, it streams log output from both the scorer and the batch scoring job.

job.wait()

By default, this command will print logs while waiting. If you want to wait for job completion without printing any logs, use:

job.wait(logs=False)

List all jobs

workspace.batch_scoring_jobs.list()

Retrieve a job by ID

workspace.batch_scoring_jobs.get(uid=...)

Cancel a job

job.cancel()

By default, this command blocks until the job is fully canceled. If you want to cancel without waiting for completion, use:

job.cancel(wait=False)

Delete a job

job.delete()
