Installation

The deployment of the Orchestrator involves multiple independent components, each with its own installation process. The currently supported method is a dedicated Helm chart that deploys the Orchestrator on either OpenShift or Kubernetes. The installation process is modular, allowing components to be skipped if they are already present.

The Orchestrator deployment encompasses the installation of the engine for serving serverless workflows, and of Backstage integrated with the orchestrator plugins for workflow invocation, monitoring, and control.

In addition to the Orchestrator deployment, we offer several workflows (linked below) that can be deployed using their respective installation methods.

1 - Orchestrator on Kubernetes

The following guide is for installing on a Kubernetes cluster. It is well tested and works in CI with a kind installation.

Here’s a kind configuration that is easy to work with (the apiserver port is static, so the kubeconfig is always the same):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 16443
nodes:
  - role: control-plane
    kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"      
    - |
      kind: KubeletConfiguration
      localStorageCapacityIsolation: true      
    extraPortMappings:
      - containerPort: 80
        hostPort: 9090
        protocol: TCP
      - containerPort: 443
        hostPort: 9443
        protocol: TCP
  - role: worker

Save this file as kind-config.yaml, then run:

kind create cluster --config kind-config.yaml
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'

The cluster should now be up and running with the Contour ingress controller installed. Traffic to localhost:9090 is directed to Backstage through the ingress that the Helm chart creates on port 80.
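
To sanity-check the routing, you can probe the mapped host port (a minimal sketch; the exact response depends on the chart’s ingress configuration):

# Expect an HTTP response from Backstage via the Contour ingress
curl -I http://localhost:9090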

Orchestrator-k8s helm chart

This chart will install the Orchestrator and all its dependencies on Kubernetes.

THIS CHART IS NOT SUITED FOR PRODUCTION PURPOSES; you should only use it for development or test purposes.

The chart deploys:

  • Backstage (RHDH upstream)
  • A PostgreSQL database
  • The serverless workflows infrastructure

Usage

helm repo add orchestrator https://parodos-dev.github.io/orchestrator-helm-chart

helm install orchestrator orchestrator/orchestrator-k8s

Configuration

All of the Backstage app-config is derived from the values.yaml.

Secrets as env vars:

To use secrets as environment variables, such as the one used for notifications, see charts/orchestrator-k8s/templates/secret.yaml. Every key in that secret will be available in the app-config for resolution.
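
As an illustrative sketch (the secret name and key below are hypothetical; the actual secret is defined by the chart template above), adding a key could look like:

# Hypothetical secret/key names; each key becomes resolvable in the app-config
kubectl create secret generic orchestrator-secrets \
  --from-literal=NOTIFICATIONS_TOKEN=<token> \
  --dry-run=client -o yaml | kubectl apply -f -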

Development

git clone https://github.com/parodos-dev/orchestrator-helm-chart
cd orchestrator-helm-chart/charts/orchestrator-k8s


helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add backstage https://backstage.github.io/charts
helm repo add postgresql-persistent https://sclorg.github.io/helm-charts
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart
helm repo add workflows https://parodos.dev/serverless-workflows-config

helm dependencies build
helm install orchestrator .

The output should look like this:

$ helm install orchestrator .
Release "orchestrator" has been upgraded. Happy Helming!
NAME: orchestrator
LAST DEPLOYED: Tue Sep 19 18:19:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
This chart will install RHDH-backstage(RHDH upstream) + Serverless Workflows.

To get RHDH's route location:
    $ oc get route orchestrator-white-backstage -o jsonpath='https://{ .spec.host }{"\n"}'

To get the serverless workflow operator status:
    $ oc get deploy -n sonataflow-operator-system 

To get the serverless workflows status:
    $ oc get sf

The chart notes will provide more information on:

  • the route location of Backstage
  • the SonataFlow operator status
  • the status of the deployed SonataFlow workflows

2 - Orchestrator on OpenShift

Installing the Orchestrator is facilitated through a Helm chart that installs all of the Orchestrator components. The Orchestrator is based on the SonataFlow and Serverless Workflow technologies to design and manage the workflows. The Orchestrator plugins are deployed on a Red Hat Developer Hub instance, which serves as the frontend. To utilize Backstage capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.

Orchestrator Helm Chart

Deploy the Orchestrator solution suite using this Helm chart.
The chart installs the following components onto the target OpenShift cluster:

  • RHDH (Red Hat Developer Hub) Backstage
  • OpenShift Serverless Logic Operator (with Data-Index and Job Service)
  • OpenShift Serverless Operator
    • Knative Eventing
    • Knative Serving
  • ArgoCD orchestrator project (optional: disabled by default)
  • Tekton tasks and build pipeline (optional: disabled by default)

Important Note for ARM64 Architecture Users

Note that as of November 6, 2023, the OpenShift Serverless Operator is based on RHEL 8 images, which are not supported on the ARM64 architecture. Consequently, deployment of this Helm chart on an OpenShift Local cluster on MacBook laptops with M1/M2 chips is not supported.

Prerequisites

  • Logged in to a Red Hat OpenShift Container Platform (version 4.13+) cluster as a cluster administrator.
  • OpenShift CLI (oc) is installed.
  • Operator Lifecycle Manager (OLM) has been installed in your cluster.
  • Your cluster has a default storage class provisioned.
  • Helm v3.9+ is installed.
  • PostgreSQL database is available with credentials to manage the tablespace (optional).
  • A GitHub API Token - to import items into the catalog, ensure you have a GITHUB_TOKEN with the necessary permissions as detailed here.
    • For classic token, include the following permissions:
      • repo (all)
      • admin:org (read:org)
      • user (read:user, user:email)
      • workflow (all) - required for using the software templates for creating workflows in GitHub
    • For Fine grained token:
      • Repository permissions: Read access to metadata, Read and Write access to actions, actions variables, administration, code, codespaces, commit statuses, environments, issues, pull requests, repository hooks, secrets, security events, and workflows.
      • Organization permissions: Read access to members, Read and Write access to organization administration, organization hooks, organization projects, and organization secrets.

Installation

  1. Get the Helm chart from one of the following options:

    • Pre-Packaged Helm Chart
      If you choose to install the Helm chart from the Helm repository, you’ll be leveraging the pre-packaged version provided by the chart maintainer. This method is convenient and ensures that you’re using a stable, tested version of the chart. Add the repository:

      helm repo add orchestrator https://parodos-dev.github.io/orchestrator-helm-chart
      

      Expected result:

      "orchestrator" has been added to your repositories
      

      Verify the repository is shown:

      helm repo list
      

      Expected result:

      NAME        	URL
      orchestrator	https://parodos-dev.github.io/orchestrator-helm-chart
      
    • Manual Chart Deployment
      By cloning the repository and navigating to the charts directory, you’ll have access to the raw chart files and can customize them to fit your specific requirements.

      git clone git@github.com:parodos-dev/orchestrator-helm-chart.git
      cd orchestrator-helm-chart/charts
      
  2. Create a namespace for the Orchestrator solution:

    oc new-project orchestrator
    
  3. Set GITHUB_TOKEN environment variable

    export GITHUB_TOKEN=<github token>
    
  4. Deploy the PostgreSQL reference implementation by following these instructions.

… without GitOps

  1. Install the orchestrator Helm chart:

    helm upgrade -i orchestrator orchestrator/orchestrator --set rhdhOperator.github.token=$GITHUB_TOKEN
    
  2. Run the commands prompted at the end of the previous step to wait until the services are ready.
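
    The exact commands are printed in the chart NOTES; as a sketch, they typically resemble the following (names and namespaces may differ in your installation):

    oc wait -n sonataflow-infra deploy/sonataflow-platform-data-index-service --for=condition=Available --timeout=5m
    oc wait -n rhdh-operator deploy/backstage-backstage --for=condition=Available --timeout=5m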

… with GitOps

  1. Install Red Hat OpenShift Pipelines and Red Hat OpenShift GitOps operators following these instructions. The Orchestrator installs RHDH and imports software templates designed for bootstrapping workflow development. These templates are crafted to ease the development lifecycle, including a Tekton pipeline to build workflow images and generate workflow K8s custom resources. Furthermore, ArgoCD is utilized to monitor any changes made to the workflow repository and to automatically trigger the Tekton pipelines as needed. This installation process ensures that all necessary Tekton and ArgoCD resources are provisioned within the same cluster.

  2. Download the setup script from the GitHub repository and run it to set up environment variables:

    wget https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-chart/stable-1.x/hack/setenv.sh -O /tmp/setenv.sh && chmod u+x /tmp/setenv.sh
    

    This script generates a .env file that contains all the calculated environment variables. Run source .env to utilize these variables.
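
    For example, a typical invocation using the defaults might look like this (a sketch based on the note below):

    /tmp/setenv.sh --use-default
    source .env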

    NOTE: If you don’t want to use the default values, omit the --use-default flag and the script will prompt you for input.

    Default values are calculated as follows:

    • WORKFLOW_NAMESPACE: Default value is set to sonataflow-infra.
    • K8S_CLUSTER_URL: The URL of the Kubernetes cluster is obtained dynamically using oc whoami --show-server.
    • K8S_CLUSTER_TOKEN: The value is obtained dynamically based on the provided namespace and service account.
    • GITHUB_TOKEN: This value is prompted from the user during script execution and is not predefined.
    • ARGOCD_NAMESPACE: Default value is set to orchestrator-gitops.
    • ARGOCD_URL: This value is dynamically obtained based on the first ArgoCD instance available.
    • ARGOCD_USERNAME: Default value is set to admin.
    • ARGOCD_PASSWORD: This value is dynamically obtained based on the first ArgoCD instance available.
  3. Install the orchestrator Helm chart:

    helm upgrade -i orchestrator orchestrator/orchestrator --set rhdhOperator.github.token=$GITHUB_TOKEN \
    --set rhdhOperator.k8s.clusterToken=$K8S_CLUSTER_TOKEN --set rhdhOperator.k8s.clusterUrl=$K8S_CLUSTER_URL \
    --set argocd.namespace=$ARGOCD_NAMESPACE --set argocd.url=$ARGOCD_URL --set argocd.username=$ARGOCD_USERNAME \
    --set argocd.password=$ARGOCD_PASSWORD --set argocd.enabled=true --set tekton.enabled=true
    
  4. Run the commands prompted at the end of the previous step to wait until the services are ready.

    Sample output:

    NAME: orchestrator
    LAST DEPLOYED: Fri Mar 29 12:34:59 2024
    NAMESPACE: sonataflow-infra
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Helm Release orchestrator installed in namespace sonataflow-infra.
    
    Components                   Installed   Namespace
    ====================================================================
    Backstage                    YES        rhdh-operator
    Postgres DB - Backstage      NO         rhdh-operator
    Red Hat Serverless Operator  YES        openshift-serverless
    KnativeServing               YES        knative-serving
    KnativeEventing              YES        knative-eventing
    SonataFlow Operator          YES        openshift-serverless-logic
    SonataFlowPlatform           YES        sonataflow-infra
    Data Index Service           YES        sonataflow-infra
    Job Service                  YES        sonataflow-infra
    Tekton pipeline              YES        orchestrator-gitops
    Tekton task                  YES        orchestrator-gitops
    ArgoCD project               YES        orchestrator-gitops
    
    ====================================================================
    Prerequisites check:
    The required CRD tekton.dev/v1beta1/Task is already installed.
    The required CRD tekton.dev/v1/Pipeline is already installed.
    The required CRD argoproj.io/v1alpha1/AppProject is already installed.
    ====================================================================
    
    
    Run the following commands to wait until the services are ready:
    ```console
      oc wait -n openshift-serverless deploy/knative-openshift --for=condition=Available --timeout=5m
      oc wait -n knative-eventing knativeeventing/knative-eventing --for=condition=Ready --timeout=5m
      oc wait -n knative-serving knativeserving/knative-serving --for=condition=Ready --timeout=5m
      oc wait -n openshift-serverless-logic deploy/logic-operator-rhel8-controller-manager --for=condition=Available --timeout=5m
      oc wait -n sonataflow-infra sonataflowplatform/sonataflow-platform --for=condition=Succeed --timeout=5m
      oc wait -n sonataflow-infra deploy/sonataflow-platform-data-index-service --for=condition=Available --timeout=5m
      oc wait -n sonataflow-infra deploy/sonataflow-platform-jobs-service --for=condition=Available --timeout=5m
      oc wait -n rhdh-operator backstage backstage --for=condition=Deployed=True
      oc wait -n rhdh-operator deploy/backstage-backstage --for=condition=Available --timeout=5m
    ```

During the installation process, Kubernetes jobs are created by the chart to monitor the installation progress and wait for the CRDs to be fully deployed by the operators. In case of any failure at this stage, these jobs remain active, facilitating administrators in retrieving detailed diagnostic information to identify and address the cause of the failure.

Note that these jobs are automatically deleted after the deployment of the chart is completed.
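
If the installation stalls, you can inspect those jobs directly (a sketch; job names vary by chart version):

# List the chart's monitoring jobs and read the logs of a failing one
oc get jobs -n orchestrator
oc logs job/<job-name> -n orchestrator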

Installing from the OpenShift Developer perspective

Create the HelmChartRepository from CLI (or from OpenShift UI):

cat << EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: orchestrator
spec:
  connectionConfig:
    url: 'https://parodos-dev.github.io/orchestrator-helm-chart'
EOF
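
To verify that the repository resource was created (optional):

oc get helmchartrepositories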

Then follow the Helm chart installation instructions here.

Additional information

GitOps environment

See the dedicated document

Prerequisites

In addition to the prerequisites mentioned earlier, it is possible to manually install the following operators:

  • ArgoCD/OpenShift GitOps operator
    • Ensure at least one instance of ArgoCD exists in the designated namespace (referenced by ARGOCD_NAMESPACE environment variable).
    • Validated API is argoproj.io/v1alpha1/AppProject
  • Tekton/OpenShift Pipelines operator
    • Validated APIs are tekton.dev/v1beta1/Task and tekton.dev/v1/Pipeline
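
To confirm these APIs are available before installing, you can check the CRDs directly (a quick sketch):

oc get crd appprojects.argoproj.io tasks.tekton.dev pipelines.tekton.dev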

Deploying PostgreSQL reference implementation

See here

ArgoCD and workflow namespace

If you manually created the workflow namespaces (e.g., $WORKFLOW_NAMESPACE), run this command to add the required label that allows ArgoCD to deploy instances there:

oc label ns $WORKFLOW_NAMESPACE argocd.argoproj.io/managed-by=$ARGOCD_NAMESPACE
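
To confirm the label was applied:

oc get namespace $WORKFLOW_NAMESPACE --show-labels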

Workflow installation

Follow Workflows Installation

Cleanup

To remove the installation from the cluster, run:

helm delete orchestrator
release "orchestrator" uninstalled

Note that the CRDs created during the installation process will remain in the cluster. To clean up the remaining resources, run:

oc get crd -o name | grep -e sonataflow -e rhdh | xargs oc delete
oc delete namespace orchestrator

Troubleshooting

Timeout or errors during oc wait commands

If you encounter errors or timeouts while executing oc wait commands, follow these steps to troubleshoot and resolve the issue:

  1. Check Deployment Status: Review the output of the oc wait commands to identify which deployments met the condition and which encountered errors or timeouts. For example, if you see error: timed out waiting for the condition on deployments/sonataflow-platform-data-index-service, investigate further with:

    oc describe deployment sonataflow-platform-data-index-service -n sonataflow-infra
    oc logs deployment/sonataflow-platform-data-index-service -n sonataflow-infra

3 - Workflows

In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed either through a Helm chart or by utilizing Kustomize.

3.1 - Deploy From Helm Repository

Orchestrator Workflows Helm Repository

This repository serves as a Helm chart repository for deploying serverless workflows with the SonataFlow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.

The repository includes a variety of serverless workflows, such as:

  • Greeting: A basic example workflow to demonstrate functionality.
  • Migration Toolkit for Applications (MTA): This workflow evaluates applications to determine potential risks and the associated costs of containerizing them.
  • Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.

Usage

Prerequisites

To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Repository.

Note: with the existing version of the Orchestrator Helm chart, all workflows should be created under the sonataflow-infra namespace.

Installation

helm repo add orchestrator-workflows https://parodos.dev/serverless-workflows-config

View available workflows on the Helm repository:

helm search repo orchestrator-workflows

The expected result should look like this (versions may differ):

NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                      
orchestrator-workflows/greeting 	0.4.2        	1.16.0     	A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube	0.2.16       	1.16.0     	A Helm chart to deploy the move2kube workflow.   
orchestrator-workflows/mta      	0.2.16       	1.16.0     	A Helm chart for MTA serverless workflow         
orchestrator-workflows/workflows	0.2.24       	1.16.0     	A Helm chart for serverless workflows

You can install each workflow separately; for detailed information, please visit each workflow’s page. A minimal installation example follows.
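
For example, installing the greeting workflow into the expected namespace might look like this (a sketch; consult each workflow’s page for chart-specific values):

helm install greeting orchestrator-workflows/greeting -n sonataflow-infra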

Installing workflows in additional namespaces

When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.

Version Compatibility

The workflows rely on components included in the Orchestrator chart. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it. The list below outlines the compatibility between the workflows and Orchestrator versions:

Workflows              Chart Version    Orchestrator Chart Version
mta-analysis           0.2.x            1.0.x
mta-analysis           0.3.x            1.2.x
move2kube              0.2.x            1.0.x
move2kube              0.3.x            1.2.x
create-ocp-project     0.1.x            1.2.x
request-vm-cnv         0.1.x            1.2.x
modify-vm-resources    0.1.x            1.2.x
mta-v6                 0.2.x            1.2.x
mta-v7                 0.2.x            1.2.x
mtv-migration          0.0.x            1.2.x
mtv-plan               0.0.x            1.2.x

Helm index

https://www.parodos.dev/serverless-workflows-config/index.yaml

3.2 - Deploy From Kustomize

Deploy workflows by Kustomize

The workflows can also be deployed with Kustomize.

Under each workflow folder, there is a README file with detailed instructions on how to install the workflow separately.
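
As a sketch, a typical Kustomize-based installation applies the workflow’s directory directly (the path below is illustrative):

oc apply -k <workflow-folder>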