CLI

The Pipekit CLI allows you to interact with Pipekit services without leaving your terminal.

Installation

To install the Pipekit CLI, fetch the precompiled binaries or packages from our releases page, or use the following install instructions, depending on your OS:

MacOS & Linux (Homebrew)

brew install pipekit/tap/cli

Windows (Scoop)

scoop bucket add pipekit https://github.com/pipekit/scoop.git
scoop install pipekit/cli

Release Notes

Release notes are published on the Pipekit Releases site.

Docker Container

We produce a Wolfi-based Docker image that contains the Pipekit CLI. To use it, run a docker run command with the Pipekit arguments you require appended. You can log in to Pipekit non-interactively by providing the required environment variables:
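For example, a sketch that lists runs from the container. The image name and the credential variable names here are assumptions; check Docker Hub and your account settings for the real values:

```shell
# Assumed image name and env var names; verify against Docker Hub and the login docs.
docker run --rm \
  -e PIPEKIT_USERNAME \
  -e PIPEKIT_PASSWORD \
  pipekit/cli:latest list runs --cluster-name my-cluster
```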

The latest tag is always pinned to the latest released version of the Pipekit CLI. (To find a specific version, check the tags on Docker Hub.)

Containers are available in linux/amd64 and linux/arm64 variants. Both use Wolfi Linux as the base image.

Alternatively, you can copy the binary into a container that you control. See "Used within another container" for an example.

Used within a Workflow

It is more common to use the container within a workflow. For example:
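A sketch of an Argo Workflow that runs the CLI container as a step. The image name, the arguments and the credentials secret are assumptions to adapt to your setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pipekit-cli-
spec:
  entrypoint: list-runs
  templates:
    - name: list-runs
      container:
        image: pipekit/cli:latest        # assumed image name; check Docker Hub
        args: ["list", "runs", "--cluster-name", "my-cluster"]
        envFrom:
          - secretRef:
              name: pipekit-credentials  # hypothetical secret holding the login env vars
```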

Used within another container

You can also use the Pipekit CLI within another container. This example Dockerfile shows how you could install the Pipekit CLI into a Jupyter notebook container:
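One possible sketch. The Jupyter base image, the CLI image name and the binary path inside it are assumptions; adjust them to your registry and version:

```dockerfile
# Assumed image names and binary path; a multi-stage copy keeps the final image small.
FROM pipekit/cli:latest AS pipekit

FROM jupyter/base-notebook:latest
COPY --from=pipekit /usr/local/bin/pipekit /usr/local/bin/pipekit
```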

Pipekit CLI container Software Bill of Materials (SBOM)

An SBOM for the CLI container is embedded within the container image in SPDX format. Further information is available on the SBOM page.

Authentication

You need to authenticate before using the Pipekit CLI to submit workflows. Depending on how your account was created, this is done with your username and password or with single sign-on (SSO). When you run the login command, you are prompted to choose one of the two methods:
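Running the login command starts the interactive flow:

```shell
pipekit login
```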

After you log in successfully, whether by entering your username and password or by completing the SSO flow (selecting your identity provider and following the redirects in your browser), you should see the following message in your terminal:

The Pipekit CLI stores your access token in your home directory, under ~/.pipekit/.

Non-Interactive Login

You can also log in non-interactively, either by providing your username and password as arguments to the login command or by supplying your credentials as environment variables:
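A sketch of both options. The --username and --password flags and the PIPEKIT_USERNAME and PIPEKIT_PASSWORD variable names are assumptions, not confirmed names; check pipekit login --help for the exact options your version supports:

```shell
# Assumed flag names; verify with `pipekit login --help`.
pipekit login --username "user@example.com" --password "$PIPEKIT_PASSWORD"

# Assumed variable names for the environment-variable route.
export PIPEKIT_USERNAME="user@example.com"
export PIPEKIT_PASSWORD="..."
pipekit login
```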

Logging

The Pipekit CLI supports logging through the --log-level and --log-format flags. The default log level is info and the default log format is text.

Log level can be one of: debug, info, warn, error, fatal. Log format can be one of: text, json.
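For example, assuming these are global flags that can precede any subcommand:

```shell
pipekit --log-level debug --log-format json list runs --cluster-name my-cluster
```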

Submitting a workflow

To submit a workflow YAML for execution, use the submit command. Upon submission, Pipekit runs the workflow on the specified cluster, creating a new Pipe if no existing Pipe has the required name. Pipes created automatically in this way are named after the value of the Argo Workflow's generateName property, and all future submissions of the workflow with the same generateName create new runs grouped under the same Pipe.

To submit a workflow, provide the path to the workflow YAML as an argument, along with any flags you need; only --cluster-name is required. For example:
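A sketch, assuming workflow.yaml is the file to submit; the cluster name and parameter values are placeholders:

```shell
pipekit submit workflow.yaml \
  --cluster-name my-cluster \
  --namespace argo \
  --parameter message=hello \
  --wait
```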

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --namespace | -n | string | name of the namespace to submit the workflow to |
| --cluster-name | -c | string | name of the cluster to submit the workflow to |
| --open-ui | | bool | open UI for the submitted workflow |
| --parameter | -p | stringArray (key=value) | pass input parameter(s) |
| --wait | -w | bool | wait for the submitted workflow to complete |
| --pipe-name | -d | string | name of the pipe to submit the workflow to |
| --json | | bool | output the run details in JSON format |

Getting information about a run (executed workflow)

To get information about a run, use the get command. For example:
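A sketch with a placeholder UUID:

```shell
pipekit get --run-uuid <run-uuid>
```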

You can also get information about a run by providing the cluster name, namespace and workflow name. If there are multiple runs for the given workflow name, the latest one is returned. For example:
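A sketch with placeholder cluster, namespace and workflow names:

```shell
pipekit get --cluster-name my-cluster --namespace argo --workflow-name my-workflow
```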

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --run-uuid | -r | string | run UUID of the workflow you want to get |
| --cluster-name | -c | string | name of the cluster where the workflow is located |
| --namespace | -n | string | k8s namespace where the workflow is/was running |
| --workflow-name | -w | string | name of the workflow |

Listing clusters, pipes and runs

The Pipekit CLI offers a list command that fetches and lists Pipekit entities such as clusters, Pipes and runs.

Listing clusters
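A sketch, assuming the entity name is given as a subcommand of list:

```shell
pipekit list clusters
```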

Listing pipes

To list all Pipes that have at least one run on a given cluster:
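A sketch, assuming pipes is a subcommand of list and using a placeholder cluster name:

```shell
pipekit list pipes --cluster-name my-cluster
```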

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --all | -A | bool | get all Pipes, both enabled and disabled |
| --cluster-name | -c | string | the name of the cluster to list from (required) |
| --enabled | | bool | enabled/disabled switch (default true) |

Listing runs

To list all runs on a given cluster:
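A sketch, assuming runs is a subcommand of list; status names are placeholders:

```shell
# --statuses is a stringArray, so it can be repeated.
pipekit list runs --cluster-name my-cluster --statuses running --statuses failed
```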

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --cluster-name | -c | string | name of the cluster to list from |
| --all | -A | bool | get all runs, regardless of status |
| --statuses | -s | stringArray | get runs with a given status (defaults to running) |

Run actions

Certain actions can be applied to runs after the workflow is submitted: stop, terminate and restart.

Stop and Terminate

To stop a running workflow:
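A sketch with a placeholder UUID:

```shell
pipekit stop --run-uuid <run-uuid>
```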

To terminate a running workflow (stop immediately without running the exit handlers):
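A sketch with a placeholder UUID:

```shell
pipekit terminate --run-uuid <run-uuid>
```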

Both of these commands have the same flags:

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --run-uuid | -r | string | run UUID of the running workflow you want to stop/terminate (required) |

Restart

To restart a run (resubmitting the workflow for execution):
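A sketch with a placeholder UUID:

```shell
pipekit restart --run-uuid <run-uuid> --wait
```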

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --run-uuid | -r | string | run UUID of the run you want to restart (required) |
| --open-ui | | bool | open the UI of the resubmitted workflow |
| --wait | -w | bool | wait for the workflow to complete |

Import from Workflow Archive

An admin can import archived workflows and their metadata from an Argo Workflow Archive into Pipekit.

Use the import workflows command in the Pipekit CLI:
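A sketch using placeholder values; the flags are those documented in the table below:

```shell
pipekit import workflows \
  --cluster-name my-cluster \
  --argoServerURI localhost:2746 \
  --batchSize 20
```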

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --cluster-name | -c | string | name of the cluster to assign workflows to (required) |
| --argoServerURI | | string | Argo Server URI (defaults to localhost:2746) |
| --argoAuthToken | | string | Argo Server authentication token |
| --batchSize | | integer | number of workflows to fetch from the Workflow Archive per API call (defaults to 10) |
| --argoServerSubPath | | string | sub path used when connecting to Argo Server |
| --insecure | -k | bool | skip TLS verification (defaults to false) |
| --useSSL | | bool | use SSL when connecting to Argo Server (defaults to true) |

Hera

Users can run the pipekit hera command to get a token to run their Hera workflows (and CronWorkflows) through Pipekit using the Pipekit Python SDK.

Use the hera command in the Pipekit CLI:
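A minimal sketch; the command may take additional flags not shown here:

```shell
pipekit hera
```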

See the Pipekit Python SDK documentation for more details and examples.

Logs

To observe pod logs directly in the Pipekit CLI, use the logs command:
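A sketch with a placeholder UUID:

```shell
# --follow streams the logs until the run finishes.
pipekit logs --run-uuid <run-uuid> --follow
```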

Logs of a given pipe run can be filtered by the pod name and container name.

| Flag | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --run-uuid | -r | string | UUID of the run (required) |
| --container | -c | string | name of the container to filter logs by |
| --pod | -p | string | name of the pod to filter logs by |
| --node-id | -n | string | id of the workflow node to filter logs by |
| --follow | -f | bool | follow the log stream until the run finishes |
