Kumo supports running multiple training and prediction jobs in parallel. To enable job concurrency, two configuration steps must be in place: one on the Kumo application side and one within your Snowflake account.
Kumo must enable parallel job execution for your environment.
This step is performed by the Kumo team; no action is required on your side other than expressing the intent to your Kumo point of contact.
If using Kumo-managed compute pools (default installation):
The Snowflake warehouse associated with the Kumo connector must have:
MAX_CLUSTER_COUNT > 1
Set MAX_CLUSTER_COUNT to the number of parallel jobs you intend to run. This allows Snowflake to scale the warehouse out automatically when Kumo submits multiple concurrent jobs.
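As a sketch, assuming the connector's warehouse is named KUMO_WH (a placeholder) and you want up to four concurrent jobs, the corresponding statement would look like:

```sql
-- Allow the warehouse to scale out to 4 clusters, one per concurrent Kumo job.
-- Multi-cluster warehouses require Snowflake Enterprise edition or higher.
ALTER WAREHOUSE KUMO_WH SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4;
```

With MIN_CLUSTER_COUNT left at 1, Snowflake runs in auto-scale mode and only starts additional clusters when concurrent load arrives, so idle parallel capacity does not accrue cost.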
If using Self-managed compute pools (pre-created by your team):
If your organization supplied the compute pools instead of allowing the Kumo app to create them, ensure that each pool can scale to the desired level of parallelism. For example:
ALTER COMPUTE POOL <pool_name> SET MAX_NODES = <desired_parallel_jobs>;
A Snowflake administrator can quickly check the current setup using:
SHOW WAREHOUSES;
DESCRIBE WAREHOUSE <warehouse_name>;
SHOW COMPUTE POOLS;
Once your Snowflake resources are configured correctly and Kumo has enabled concurrency for your environment, training and prediction jobs can execute in parallel.
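On the client side, parallel execution simply means submitting several jobs without waiting for each to finish. A minimal sketch of that pattern, using a hypothetical `run_job` stand-in for the actual Kumo SDK or API call (not a real Kumo function):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_job(job_name: str) -> str:
    # Placeholder for submitting a Kumo training or prediction job and
    # waiting for its result; in practice this would call the Kumo SDK/API.
    return f"{job_name}: done"

job_names = ["train_churn_model", "predict_churn", "train_ltv_model"]

# Submit all jobs concurrently. With concurrency enabled and
# MAX_CLUSTER_COUNT sized appropriately, Snowflake serves them in parallel.
with ThreadPoolExecutor(max_workers=len(job_names)) as pool:
    futures = {pool.submit(run_job, name): name for name in job_names}
    for fut in as_completed(futures):
        print(fut.result())
```

The number of in-flight submissions should not exceed the parallelism your warehouse or compute pool is sized for, or the extra jobs will queue.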