Kumo supports running multiple training and prediction jobs in parallel. To enable job concurrency, two configuration steps are required: one on the Kumo application side and one within your Snowflake account.

1. Kumo Application Configuration

Kumo must enable parallel job execution for your environment.
This step is performed by the Kumo team. No action is required from your side other than expressing the intent to your Kumo point of contact.

2. Snowflake Configuration

Your Snowflake administrator must ensure that the compute environment used by the Kumo connector can scale beyond a single cluster or node.

If using Kumo-managed compute pools (default installation):

The Snowflake warehouse associated with the Kumo connector must have MAX_CLUSTER_COUNT greater than 1. Set MAX_CLUSTER_COUNT to the number of parallel jobs you intend to run; this allows Snowflake to scale out automatically when Kumo submits multiple concurrent jobs.
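For example, a Snowflake administrator could run the following, assuming KUMO_WH is the warehouse used by your Kumo connector (a placeholder; substitute your actual warehouse name):

```sql
-- Allow the warehouse to scale out to up to 4 clusters
-- (set this to the number of parallel jobs you intend to run).
ALTER WAREHOUSE KUMO_WH
  SET MAX_CLUSTER_COUNT = 4;
```

Note that multi-cluster warehouses require Snowflake Enterprise Edition or higher.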

If using Self-managed compute pools (pre-created by your team):

If your organization supplied the compute pools instead of allowing the Kumo app to create them, ensure that each pool can scale to the desired level of parallelism. For example:
ALTER COMPUTE POOL <pool_name>
  SET MAX_NODES = <desired_parallel_jobs>;

3. Verifying Configuration

A Snowflake administrator can quickly check the current setup using:
SHOW WAREHOUSES;
DESCRIBE WAREHOUSE <warehouse_name>;
SHOW COMPUTE POOLS;
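For example, to confirm the specific settings described above (KUMO_WH and KUMO_POOL are placeholders for the warehouse and compute pool used by your Kumo connector):

```sql
-- The max_cluster_count column in the output should be > 1
SHOW WAREHOUSES LIKE 'KUMO_WH';

-- The max_nodes property should match your desired parallelism
DESCRIBE COMPUTE POOL KUMO_POOL;
```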
Once your Snowflake resources are configured correctly and Kumo has enabled concurrency on your environment, training and prediction jobs will be able to execute in parallel.