GKE on Google Cloud Platform Deployment

OpenMetadata supports installing and running the application on Google Kubernetes Engine through Helm charts. However, some additional configuration is required as a prerequisite.


Google Kubernetes Engine (GKE) Autopilot mode is not compatible with one of the OpenMetadata dependencies, ElasticSearch. The ElasticSearch pods require elevated permissions to run initContainers that change system configuration, which the GKE Autopilot PodSecurityPolicy does not allow.


All the code snippets in this section assume the default Kubernetes namespace.

The OpenMetadata Helm chart depends on Airflow, and Airflow expects a persistent disk that supports ReadWriteMany (the volume can be mounted as read-write by many nodes).

The workaround is to create an NFS server disk on Google Kubernetes Engine, use it for the persistent volume claims, and deploy OpenMetadata by following the steps below in order.

Run the command below to create a gcloud compute zonal disk. For more information on Google Cloud disk options, please visit here.

gcloud compute disks create --size=100GB --zone=<zone_id> nfs-disk
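The disk created above then needs to be served over NFS from within the cluster. A minimal sketch of an NFS server Deployment and Service is shown below; it assumes the common volume-nfs example image and standard NFS ports, and the manifest file name (nfs-server.yml) is illustrative. The Service name nfs-server matches the command used later to look up the cluster IP.

```yaml
# nfs-server.yml (illustrative sketch; image and ports are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /exports
            name: nfs-pvc
      volumes:
        - name: nfs-pvc
          # Attach the zonal disk created with gcloud above
          gcePersistentDisk:
            pdName: nfs-disk
            fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
```

Apply it with kubectl create -f nfs-server.yml before continuing.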

Update <NFS_SERVER_CLUSTER_IP> with the NFS Service cluster IP address in the code snippets below. You can get the cluster IP using the following command:

kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}'
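The NFS volumes are then exposed to Airflow as a PersistentVolume and PersistentVolumeClaim pair for DAGs and another for logs. A sketch of the DAGs pair is below; the claim name openmetadata-dependencies-dags matches the name used later in this guide, while the PV name, export paths, and 10Gi size are illustrative assumptions. The logs pair is analogous, with path /airflow-logs and claim name openmetadata-dependencies-logs.

```yaml
# dags-pv-pvc.yml (illustrative sketch; sizes and paths are assumptions)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openmetadata-dependencies-dags-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Replace with the cluster IP from the command above
    server: <NFS_SERVER_CLUSTER_IP>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openmetadata-dependencies-dags
spec:
  # Empty storageClassName so the claim binds to the static PV above
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: openmetadata-dependencies-dags-pv
```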

Since the Airflow pods run as non-root users, they do not have write access to the NFS server volumes. To fix the permissions, spin up a pod with the persistent volumes attached and run it once.

# permissions_pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-permission-pod
  name: my-permission-pod
spec:
  containers:
  - image: nginx
    name: my-permission-pod
    volumeMounts:
    - name: airflow-dags
      mountPath: /airflow-dags
    - name: airflow-logs
      mountPath: /airflow-logs
  volumes:
  - name: airflow-logs
    persistentVolumeClaim:
      claimName: openmetadata-dependencies-logs
  - name: airflow-dags
    persistentVolumeClaim:
      claimName: openmetadata-dependencies-dags
  dnsPolicy: ClusterFirst
  restartPolicy: Always


Airflow runs its pods as the Linux user named airflow with user ID 50000.

Run the command below to create the pod and fix the permissions:

kubectl create -f permissions_pod.yml

Once the permissions pod is up and running, execute the commands below against its container.

kubectl exec --tty my-permission-pod --container my-permission-pod -- chown -R 50000 /airflow-dags /airflow-logs
# If needed
kubectl exec --tty my-permission-pod --container my-permission-pod -- chmod -R a+rwx /airflow-dags

Override the openmetadata-dependencies Airflow Helm values to bind the NFS persistent volumes for DAGs and logs.

# values-dependencies.yml
airflow:
  airflow:
    extraVolumeMounts:
      - mountPath: /airflow-logs
        name: nfs-airflow-logs
      - mountPath: /airflow-dags/dags
        name: nfs-airflow-dags
    extraVolumes:
      - name: nfs-airflow-logs
        persistentVolumeClaim:
          claimName: openmetadata-dependencies-logs
      - name: nfs-airflow-dags
        persistentVolumeClaim:
          claimName: openmetadata-dependencies-dags
  dags:
    path: /airflow-dags/dags
    persistence:
      enabled: false
  logs:
    path: /airflow-logs
    persistence:
      enabled: false

For more information on the Airflow Helm chart values, please refer to airflow-helm.

Follow the OpenMetadata Kubernetes Deployment guide to install and deploy the Helm charts with the NFS volumes. When deploying the openmetadata-dependencies Helm chart, use the command below:

helm install openmetadata-dependencies open-metadata/openmetadata-dependencies --values values-dependencies.yml

Still have questions?

You can take a look at our Q&A or reach out to us in Slack
