Connect multiple clusters to Pipekit, and run workflows on different Kubernetes clusters while managing access and viewing the workflow results in one central dashboard.
Choose an appropriate name and description for your cluster. It is recommended to use a name that is all lowercase with no whitespace.
Choose the namespace where you intend to install the Pipekit Agent. A common choice is `pipekit`, although you can use any namespace you wish, including the namespace where Argo Workflows resides (by default, `argo`). Click "Submit" when you are ready to proceed.
You will be presented with a unique API key for your cluster. You will need this API key to install the Pipekit Agent on your cluster. It will only be shown to you once, so make sure to copy it and store it somewhere safe. You can always generate a new API key if you lose it, but this will immediately invalidate the old key.
Your browser will have automatically downloaded the YAML manifest for you.
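Installing the agent is a standard `kubectl apply` of the downloaded manifest. A minimal sketch, assuming the file was saved as `pipekit-agent.yaml` and you chose the `pipekit` namespace (both names are assumptions; use your own):

```shell
# Create the namespace chosen during setup (assumed here to be "pipekit").
kubectl create namespace pipekit

# Apply the downloaded Pipekit Agent manifest (the filename is an assumption).
kubectl apply -n pipekit -f pipekit-agent.yaml

# Watch the agent pod come up.
kubectl get pods -n pipekit -w
```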
Once you have installed the Pipekit Agent, the "Waiting for cluster to come online" indicator will turn green and you can proceed.
You can see all your clusters in the Clusters tab in Pipekit. Clusters with a red indicator are not currently able to communicate with the Pipekit API. A green indicator denotes a successfully connected agent. If you have installed the Pipekit Agent, but do not see the green indicator, check the logs of the Pipekit Agent pod for errors.
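If the indicator stays red, inspecting the agent's logs is a reasonable first step. A sketch, assuming the agent runs as a Deployment named `pipekit-agent` in the `pipekit` namespace (both names are assumptions; adjust them to match your install):

```shell
# List pods in the namespace where the agent was installed.
kubectl get pods -n pipekit

# Tail the agent logs and follow new output; the Deployment name
# "pipekit-agent" is an assumption.
kubectl logs -n pipekit deploy/pipekit-agent --tail=100 -f
```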
If you click a given cluster, you can view all runs across all Pipes for that cluster.
The version of the installed Pipekit Agent is shown to the left of the red/green connection indicator. Keep your Pipekit Agent up to date to ensure you have the latest features and bug fixes. The Pipekit Agent is always released alongside the Pipekit CLI and Helm chart with the same version number. You can check the latest version by looking at the available Pipekit CLI versions, the available Helm chart versions, or the Pipekit Agent image tags on Docker Hub.
You can modify a cluster by clicking on the cluster name. This will take you to the cluster details page where you can navigate to the Settings tab to change the cluster name, description and API key. You can also make the cluster inactive or delete it from here.
If required, you can generate a new API key for your cluster by clicking "Generate New API Key" on the cluster settings tab. This will immediately invalidate the old API key.
By default, Pipekit operates a first-in-first-out (FIFO) queue, treating all submitted workflows as having the same priority. This means that if you submit two workflows, the first is submitted to your cluster before the second; a third workflow is submitted after the second.
At scale this can be problematic: short-running jobs may get backed up behind longer-running ones, which may not meet your business needs. You can define priority groups for workflows at the cluster level by clicking the cluster name and navigating to the Queuing tab.
By default, all workflows are given a priority of 3. If you want some workflows to have a higher priority, enter an appropriate name in the Workflow Group ID field (e.g. `highest-priority`) and select "1 (Highest)" in the dropdown before saving.
Any workflows running in this cluster with the label `workflows.pipekit.io/workflow_group_id: "highest-priority"` will be given a higher priority than other workflows.
Similarly, if you wish to give some workflows a lower priority, enter an appropriate name in the Workflow Group ID field (e.g. `lowest-priority`) and select "5 (Lowest)" in the dropdown before saving. Again, any workflows running in this cluster with the label `workflows.pipekit.io/workflow_group_id: "lowest-priority"` will be given a lower priority than other workflows.
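For example, a Workflow carrying the higher-priority group label might look like the following minimal sketch; only the `metadata.labels` entry is significant here, and the names and image are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: important-job-   # example name
  labels:
    # Matches the Workflow Group ID configured on the Queuing tab.
    workflows.pipekit.io/workflow_group_id: "highest-priority"
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [echo, "high-priority work"]
```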
The priority applies to the submission of the workflow, not to the pods within it. For the avoidance of doubt, setting the queue priority does not increase the Pod Priority of the pods within the workflow; that should be configured in the workflow spec and the Kubernetes cluster itself.
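Pod Priority, by contrast, is a standard Kubernetes scheduling concept configured in the cluster itself. A minimal sketch of a `PriorityClass` (the name, value, and description are illustrative):

```yaml
# A Kubernetes PriorityClass. Pods that reference it via
# spec.priorityClassName are scheduled and preempted according to its value.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: workflow-high   # example name
value: 100000
globalDefault: false
description: "Higher scheduling priority for selected workflow pods"
```

Workflow pods would then opt in by setting `priorityClassName: workflow-high` in their pod spec; this is independent of Pipekit's queue priority.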