Installation
The deployment of the Orchestrator involves multiple independent components, each with its own installation process. On an OpenShift cluster, the Red Hat Catalog provides an operator that handles the installation for you. The installation is modular: the CRD exposes flags that let you control which components to install. For vanilla Kubernetes, a Helm chart installs the Orchestrator components.
The Orchestrator deployment encompasses the installation of the engine for serving serverless workflows and Backstage, integrated with orchestrator plugins for workflow invocation, monitoring, and control.
In addition to the Orchestrator deployment, we offer several workflows (linked below) that can be deployed using their respective installation methods.
1 - Orchestrator on OpenShift
The Orchestrator is installed through an operator available in the Red Hat Catalog as an OLM package. This operator is responsible for installing all of the Orchestrator components.
The Orchestrator is based on SonataFlow and the Serverless Workflow technologies for designing and managing workflows.
The Orchestrator plugins are deployed on a Red Hat Developer Hub instance, which serves as the frontend.
When installing a Red Hat Developer Hub (RHDH) instance using the Orchestrator operator, the RHDH configuration is managed through the Orchestrator resource.
To utilize Backstage capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.
Orchestrator Documentation
For comprehensive documentation on the Orchestrator, please visit https://www.parodos.dev.
Installing the Orchestrator Helm Operator
Deploy the Orchestrator solution suite in an OCP cluster using the Orchestrator operator.
The operator installs the following components onto the target OpenShift cluster:
- RHDH (Red Hat Developer Hub) Backstage
- OpenShift Serverless Logic Operator (with Data-Index and Job Service)
- OpenShift Serverless Operator
- Knative Eventing
- Knative Serving
- (Optional) An ArgoCD project named orchestrator. Requires a pre-installed ArgoCD/OpenShift GitOps instance in the cluster. Disabled by default.
- (Optional) Tekton tasks and build pipeline. Requires a pre-installed Tekton/OpenShift Pipelines instance in the cluster. Disabled by default.
Important Note for ARM64 Architecture Users
Note that as of November 6, 2023, the OpenShift Serverless Operator is based on RHEL 8 images, which are not supported on the ARM64 architecture. Consequently, deploying this operator on an OpenShift Local cluster on MacBook laptops with M1/M2 chips is not supported.
Prerequisites
- Logged in to a Red Hat OpenShift Container Platform (version 4.13+) cluster as a cluster administrator.
- OpenShift CLI (oc) is installed.
- Operator Lifecycle Manager (OLM) has been installed in your cluster.
- Your cluster has a default storage class provisioned.
- A GitHub API Token - to import items into the catalog, ensure you have a GITHUB_TOKEN with the necessary permissions as detailed here.
  - For a classic token, include the following permissions:
    - repo (all)
    - admin:org (read:org)
    - user (read:user, user:email)
    - workflow (all) - required for using the software templates for creating workflows in GitHub
  - For a fine-grained token:
    - Repository permissions: Read access to metadata; Read and Write access to actions, actions variables, administration, code, codespaces, commit statuses, environments, issues, pull requests, repository hooks, secrets, security events, and workflows.
    - Organization permissions: Read access to members; Read and Write access to organization administration, organization hooks, organization projects, and organization secrets.
⚠️Warning: Skipping these steps will prevent the Orchestrator from functioning properly.
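A quick way to sanity-check several of these prerequisites before proceeding (standard oc commands; adjust to your environment):

oc whoami                                 # shows the logged-in user
oc auth can-i '*' '*' --all-namespaces    # returns "yes" for a cluster administrator
oc version                                # the server version should be 4.13 or later
oc get storageclass                       # at least one class should be marked as the default
oc get csv -n openshift-operators         # only succeeds if the OLM APIs are installed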
Deployment with GitOps
If you plan to deploy in a GitOps environment, make sure you have installed the ArgoCD/Red Hat OpenShift GitOps and Tekton/Red Hat OpenShift Pipelines operators, following these instructions.
The Orchestrator installs RHDH and imports software templates designed for bootstrapping workflow development. These templates are crafted to ease the development lifecycle, including a Tekton pipeline to build workflow images and generate workflow K8s custom resources. Furthermore, ArgoCD is utilized to monitor any changes made to the workflow repository and to automatically trigger the Tekton pipelines as needed.
ArgoCD/OpenShift GitOps operator
- Ensure at least one instance of ArgoCD exists in the designated namespace (referenced by the ARGOCD_NAMESPACE environment variable). Example here.
- Validated API is argoproj.io/v1alpha1/AppProject.
Tekton/OpenShift Pipelines operator
- Validated APIs are tekton.dev/v1beta1/Task and tekton.dev/v1/Pipeline.
- Requires ArgoCD installed, since the manifests are deployed in the same namespace as the ArgoCD instance.
Remember to enable argocd and tekton in your CR instance.
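For reference, a minimal sketch of the relevant part of the Orchestrator CR spec, assuming the argocd.enabled and tekton.enabled flags used by the sample CR referenced later in this guide (verify the exact field names against your operator version):

spec:
  argocd:
    enabled: true
  tekton:
    enabled: true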
Detailed Installation Guide
From OperatorHub
- Deploying PostgreSQL reference implementation
  - If you do not have a PostgreSQL instance in your cluster, you can deploy the PostgreSQL reference implementation by following the steps here.
  - If you already have PostgreSQL running in your cluster, ensure that the default settings in the PostgreSQL values file match those provided in the Orchestrator values file.
- Install Orchestrator operator
- Go to OperatorHub in your OpenShift Console.
- Search for and install the Orchestrator Operator.
- Create an Orchestrator instance
- Once the Orchestrator Operator is installed, navigate to Installed Operators.
- Select Orchestrator Operator.
- Click on Create Instance to deploy an Orchestrator instance.
- Verify resources and wait until they are running
From the console, run the following command to get the necessary wait commands:
oc describe orchestrator orchestrator-sample -n openshift-operators | grep -A 10 "Run the following commands to wait until the services are ready:"
The command returns output similar to the example below, listing several oc wait commands; the exact list depends on your specific cluster.
oc wait -n openshift-serverless deploy/knative-openshift --for=condition=Available --timeout=5m
oc wait -n knative-eventing knativeeventing/knative-eventing --for=condition=Ready --timeout=5m
oc wait -n knative-serving knativeserving/knative-serving --for=condition=Ready --timeout=5m
oc wait -n openshift-serverless-logic deploy/logic-operator-rhel8-controller-manager --for=condition=Available --timeout=5m
oc wait -n sonataflow-infra sonataflowplatform/sonataflow-platform --for=condition=Succeed --timeout=5m
oc wait -n sonataflow-infra deploy/sonataflow-platform-data-index-service --for=condition=Available --timeout=5m
oc wait -n sonataflow-infra deploy/sonataflow-platform-jobs-service --for=condition=Available --timeout=5m
oc get networkpolicy -n sonataflow-infra
Copy and execute each command from the output in your terminal. These commands ensure that all necessary services and resources in your OpenShift environment are available and running correctly.
If any service does not become available, verify the logs for that service or consult troubleshooting steps.
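For example, if one of the SonataFlow services fails to become available, commands along these lines can help narrow down the cause (namespaces taken from the wait commands above):

oc get pods -n sonataflow-infra
oc logs deploy/sonataflow-platform-data-index-service -n sonataflow-infra
oc get events -n sonataflow-infra --sort-by=.lastTimestamp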
With Helm (deprecated)
Deploy the PostgreSQL reference implementation for persistence support in SonataFlow following these instructions
Create a namespace for the Orchestrator solution:
oc new-project orchestrator
Create a namespace for the Red Hat Developer Hub Operator (RHDH Operator):
oc new-project rhdh-operator
Download the setup script from the github repository and run it to create the RHDH secret and label the GitOps namespaces:
wget https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-operator/main/hack/setup.sh -O /tmp/setup.sh && chmod u+x /tmp/setup.sh
Run the script:
/tmp/setup.sh --use-default
NOTE: If you don’t want to use the default values, omit the --use-default flag and the script will prompt you for input.
The contents will vary depending on the configuration in the cluster. The following list details all the keys that can appear in the secret:
- BACKEND_SECRET: Value is randomly generated at script execution. This is the only mandatory key required to be in the secret for the RHDH Operator to start.
- K8S_CLUSTER_URL: The URL of the Kubernetes cluster, obtained dynamically using oc whoami --show-server.
- K8S_CLUSTER_TOKEN: The value is obtained dynamically based on the provided namespace and service account.
- GITHUB_TOKEN: This value is prompted from the user during script execution and is not predefined.
- GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET: The values of both fields are used to authenticate against GitHub. For more information, open this link.
- ARGOCD_URL: This value is dynamically obtained based on the first ArgoCD instance available.
- ARGOCD_USERNAME: Default value is set to admin.
- ARGOCD_PASSWORD: This value is dynamically obtained based on the first ArgoCD instance available.
Keys are not added to the secret if they have no associated value. For instance, when deploying in a cluster without the GitOps operators, the ARGOCD_URL, ARGOCD_USERNAME and ARGOCD_PASSWORD keys are omitted from the secret.
Sample of a secret created in a GitOps environment:
$> oc get secret -n rhdh-operator -o yaml backstage-backend-auth-secret
apiVersion: v1
data:
  ARGOCD_PASSWORD: ...
  ARGOCD_URL: ...
  ARGOCD_USERNAME: ...
  BACKEND_SECRET: ...
  GITHUB_TOKEN: ...
  K8S_CLUSTER_TOKEN: ...
  K8S_CLUSTER_URL: ...
kind: Secret
metadata:
  creationTimestamp: "2024-05-07T22:22:59Z"
  name: backstage-backend-auth-secret
  namespace: rhdh-operator
  resourceVersion: "4402773"
  uid: 2042e741-346e-4f0e-9d15-1b5492bb9916
type: Opaque
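The values are base64-encoded, as in any Opaque secret. To inspect an individual key (ARGOCD_URL is used here only as an example):

oc get secret backstage-backend-auth-secret -n rhdh-operator -o jsonpath='{.data.ARGOCD_URL}' | base64 -d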
Use the following manifest to install the operator in an OCP cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: orchestrator-operator
  namespace: openshift-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: orchestrator-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
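Save the manifest to a file (the name below is arbitrary), apply it, and optionally confirm that the subscription exists:

oc apply -f orchestrator-operator-subscription.yaml
oc get subscriptions.operators.coreos.com orchestrator-operator -n openshift-operators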
Run the following commands to determine when the installation is completed:
wget https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-operator/main/hack/wait_for_operator_installed.sh -O /tmp/wait_for_operator_installed.sh && chmod u+x /tmp/wait_for_operator_installed.sh && /tmp/wait_for_operator_installed.sh
During the installation process, Kubernetes cronjobs are created by the operator to monitor the lifecycle of the CRs it manages: the RHDH operator, the OpenShift Serverless operator and the OpenShift Serverless Logic operator. When one of these CRs is deleted, a job is triggered that ensures the CR is fully removed before the operator is.
In case of any failure at this stage, these jobs remain active, helping administrators retrieve detailed diagnostic information to identify and address the cause of the failure.
Note: every minute, a job is triggered to reconcile the CRs with the Orchestrator resource values. These cronjobs are deleted when their respective features (e.g. rhdhOperator.enabled=false) are disabled or when the Orchestrator resource is removed. This is required because the CRs are not managed by Helm, since the CRDs must be available before the CRs can be deployed.
Apply the Orchestrator custom resource (CR) to the cluster to create an instance of RHDH and the resources of the OpenShift Serverless Operator and the OpenShift Serverless Logic Operator.
Make any changes to the CR before applying it, or test with the default Orchestrator CR:
oc apply -n orchestrator -f https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-operator/refs/heads/main/config/samples/_v1alpha1_orchestrator.yaml
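Progress can then be followed on the CR itself; the same oc describe hint shown earlier for the OperatorHub flow also works here, with the namespace adjusted:

oc get orchestrator -n orchestrator
oc describe orchestrator -n orchestrator | grep -A 10 "Run the following commands to wait until the services are ready:"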
Additional Workflow Namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g., sonataflow-infra), several essential steps must be followed:
Label the Workflow Namespace:
To allow Sonataflow services to accept traffic from workflows, apply the following label to the desired workflow namespace:
oc label ns $ADDITIONAL_NAMESPACE rhdh.redhat.com/workflow-namespace=""
Identify the RHDH Namespace:
Retrieve the namespace where RHDH is running and store the value in RHDH_NAMESPACE; it is used in the Network Policy manifest below.
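If you are unsure where RHDH runs, one way to locate it is to list the Backstage custom resources (this assumes the RHDH operator manages the instance and that only one instance exists):

oc get backstage -A
RHDH_NAMESPACE=$(oc get backstage -A -o jsonpath='{.items[0].metadata.namespace}')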
Identify the Sonataflow Services Namespace:
Check the namespace where Sonataflow services are deployed:
oc get sonataflowclusterplatform -A
If there is no cluster platform, check for a namespace-specific platform:
oc get sonataflowplatform -A
Store the namespace value in SONATAFLOW_PLATFORM_NAMESPACE.
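Assuming a single platform instance, the value can be captured directly, for example:

SONATAFLOW_PLATFORM_NAMESPACE=$(oc get sonataflowplatform -A -o jsonpath='{.items[0].metadata.namespace}')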
Set Up a Network Policy:
Configure a network policy that allows traffic only between RHDH, the Sonataflow services, and the workflows. The policy can be derived from here.
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rhdh-to-sonataflow-and-workflows
  # Sonataflow and workflows are using the same namespace.
  namespace: $ADDITIONAL_NAMESPACE
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # Allow the RHDH namespace to communicate with workflows.
          kubernetes.io/metadata.name: $RHDH_NAMESPACE
    - namespaceSelector:
        matchLabels:
          # Allow Sonataflow services to communicate with workflows.
          kubernetes.io/metadata.name: $SONATAFLOW_PLATFORM_NAMESPACE
EOF
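To confirm the policy was created with the expected selectors:

oc describe networkpolicy allow-rhdh-to-sonataflow-and-workflows -n $ADDITIONAL_NAMESPACE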
Ensure Persistence for the Workflow:
If persistence is required, follow these steps:
By following these steps, the workflow will have the necessary credentials to access PostgreSQL and will correctly reference the service in a different namespace.
GitOps environment
See the dedicated document
Deploying PostgreSQL reference implementation
See here
ArgoCD and workflow namespace
If you manually created the workflow namespaces (e.g., $WORKFLOW_NAMESPACE), run this command to add the required label that allows ArgoCD to deploy instances there:
oc label ns $WORKFLOW_NAMESPACE argocd.argoproj.io/managed-by=$ARGOCD_NAMESPACE
Workflow installation
Follow Workflows Installation
Cleanup
/!\ Before removing the Orchestrator, make sure you have first removed any installed workflows. Otherwise the deletion may hang in a Terminating state.
To remove the operator from the cluster, delete the subscription:
oc delete subscriptions.operators.coreos.com orchestrator-operator -n openshift-operators
Note that the CRDs created during the installation process will remain in the cluster.
To clean the rest of the resources, run:
oc get crd -o name | grep -e sonataflow -e rhdh | xargs oc delete
oc delete namespace orchestrator sonataflow-infra rhdh-operator
If you want to remove knative related resources, you may also run:
oc get crd -o name | grep -e knative | xargs oc delete
2 - Orchestrator on Kubernetes
The following guide is for installing on a Kubernetes cluster. It is well tested and works in CI with a kind installation.
Here’s a kind configuration that is easy to work with (the apiserver port is static, so the kubeconfig is always the same):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 16443
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  - |
    kind: KubeletConfiguration
    localStorageCapacityIsolation: true
  extraPortMappings:
  - containerPort: 80
    hostPort: 9090
    protocol: TCP
  - containerPort: 443
    hostPort: 9443
    protocol: TCP
- role: worker
Save this file as kind-config.yaml, and now run:
kind create cluster --config kind-config.yaml
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'
The cluster should now be up and running with the Contour ingress controller installed, so localhost:9090 will direct traffic to Backstage via the ingress created by the Helm chart on port 80.
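Before installing the chart, it can help to wait until the ingress controller pods are ready; the selector below assumes the labels used by the Contour quickstart manifest:

kubectl get pods -n projectcontour
kubectl wait -n projectcontour pod --selector=app=envoy --for=condition=Ready --timeout=180s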
Orchestrator-k8s helm chart
This chart will install the Orchestrator and all its dependencies on kubernetes.
THIS CHART IS NOT SUITED FOR PRODUCTION PURPOSES; use it only for development or testing purposes.
The chart deploys:
Usage
helm repo add orchestrator https://parodos-dev.github.io/orchestrator-helm-chart
helm install orchestrator orchestrator/orchestrator-k8s
Configuration
All of the backstage app-config is derived from the values.yaml.
Secrets as env vars:
To use secrets as env vars, like the one used for notifications, see charts/Orchestrator-k8s/templates/secret.yaml
Every key in that secret will be available in the app-config for resolution.
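For illustration, a key named NOTIFICATIONS_BEARER_TOKEN (a hypothetical name) added to that secret could then be referenced from the app-config through Backstage's standard environment-variable substitution, for example in a proxy endpoint:

proxy:
  endpoints:
    /notifications:
      target: https://notifications.example.com   # illustrative target
      headers:
        Authorization: Bearer ${NOTIFICATIONS_BEARER_TOKEN}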
Development
git clone https://github.com/parodos-dev/orchestrator-helm-chart
cd orchestrator-helm-chart/charts/orchestrator-k8s
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add backstage https://backstage.github.io/charts
helm repo add postgresql-persistent https://sclorg.github.io/helm-charts
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart
helm repo add workflows https://parodos.dev/serverless-workflows-config
helm dependencies build
helm install orchestrator .
The output should look like this:
$ helm install orchestrator .
Release "orchestrator" has been upgraded. Happy Helming!
NAME: orchestrator
LAST DEPLOYED: Tue Sep 19 18:19:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
This chart will install RHDH-backstage(RHDH upstream) + Serverless Workflows.
To get RHDH's route location:
$ oc get route orchestrator-white-backstage -o jsonpath='https://{ .spec.host }{"\n"}'
To get the serverless workflow operator status:
$ oc get deploy -n sonataflow-operator-system
To get the serverless workflows status:
$ oc get sf
The chart notes will provide more information on:
- the route location of Backstage
- the SonataFlow operator status
- the status of the deployed SonataFlow workflows
3 - Orchestrator on existing RHDH instance
When RHDH is already installed and in use, reinstalling it is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:
- Utilize the Orchestrator operator to install the requisite components, such as the OpenShift Serverless Logic Operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled (see the CR fragment after this list).
- Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
- Import the Orchestrator software templates into the Backstage catalog.
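A minimal sketch of the CR fragment for the first step, assuming the rhdhOperator.enabled flag mentioned in the Helm-based guide above (check the sample CR for the exact schema of your operator version):

spec:
  rhdhOperator:
    enabled: false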
Prerequisites
- RHDH is already deployed with a running Backstage instance.
- Software templates for workflows require a GitHub provider to be configured.
- Ensure that a PostgreSQL database is available and that you have credentials to manage the tablespace (optional).
- For your convenience, a reference implementation is provided.
- If you already have a PostgreSQL database installed, please refer to this note regarding default settings.
In this approach, since the RHDH instance is not managed by the Orchestrator operator, its configuration is handled through the Backstage CR along with the associated resources, such as ConfigMaps and Secrets.
The installation steps are detailed here.
4 - Workflows
In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed through a Helm chart.
4.1 - Deploy From Helm Repository
Orchestrator Workflows Helm Repository
This repository serves as a Helm chart repository for deploying serverless workflows with the Sonataflow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.
The repository includes a variety of serverless workflows, such as:
- Greeting: A basic example workflow to demonstrate functionality.
- Migration Toolkit for Applications (MTA): This workflow evaluates applications to determine potential risks and the associated costs of containerizing them.
- Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.
- …
Usage
Prerequisites
To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Based Operator Repository
Installation
helm repo add orchestrator-workflows https://parodos.dev/serverless-workflows-config
View available workflows on the Helm repository:
helm search repo orchestrator-workflows
The expected result should look like this (versions may differ):
NAME CHART VERSION APP VERSION DESCRIPTION
orchestrator-workflows/greeting 0.4.2 1.16.0 A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube 0.2.16 1.16.0 A Helm chart to deploy the move2kube workflow.
orchestrator-workflows/mta 0.2.16 1.16.0 A Helm chart for MTA serverless workflow
orchestrator-workflows/workflows 0.2.24 1.16.0 A Helm chart for serverless workflows
...
You can install the workflows by following their respective README files.
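For example, to install the greeting workflow (the target namespace depends on your environment; each workflow's README states where it should run):

helm install greeting orchestrator-workflows/greeting -n sonataflow-infra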
Installing workflows in additional namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.
Version Compatibility
The workflows rely on components included in the Orchestrator Operator. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it.
The list below outlines the compatibility between the workflows and Orchestrator versions:
| Workflows | Chart Version | Orchestrator Operator Version |
|---|---|---|
| move2kube | 1.3.x | 1.3.x |
| create-ocp-project | 1.3.x | 1.3.x |
| request-vm-cnv | 1.3.x | 1.3.x |
| modify-vm-resources | 1.3.x | 1.3.x |
| mta-v7 | 1.3.x | 1.3.x |
| mtv-migration | 1.3.x | 1.3.x |
| mtv-plan | 1.3.x | 1.3.x |
| mta-analysis | 0.3.x | 1.2.x |
| move2kube | 0.3.x | 1.2.x |
| create-ocp-project | 0.1.x | 1.2.x |
| request-vm-cnv | 0.1.x | 1.2.x |
| modify-vm-resources | 0.1.x | 1.2.x |
| mta-v6 | 0.2.x | 1.2.x |
| mta-v7 | 0.2.37 | 1.2.x |
| mtv-migration | 0.0.x | 1.2.x |
| mtv-plan | 0.0.13 | 1.2.x |
Helm index
https://www.parodos.dev/serverless-workflows-config/index.yaml