
This page is about running the Ingestion Framework externally!

There are two main ways of running the ingestion:

  1. Internally, by managing the workflows from OpenMetadata.
  2. Externally, by using any other tool capable of running Python code.

If you are looking for how to manage the ingestion process from OpenMetadata, you can follow this doc.

Run the ingestion from GCS Composer

This approach was last tested against:

  • Composer version 2.5.4
  • Airflow version 2.6.3

It also requires the ingestion package to be at least openmetadata-ingestion==1.3.1.0.

The most comfortable way to run the metadata workflows from GCS Composer is directly via a PythonOperator. Note that it will require you to install the packages and plugins directly on the host.

In your environment you will need to install the following packages:

  • openmetadata-ingestion[<plugins>]==x.y.z.
  • sqlalchemy==1.4.27: This is needed to align OpenMetadata version with the Composer internal requirements.

Here, x.y.z is the version of the OpenMetadata ingestion package. Note that it needs to match the server version: if we are using the server at 1.1.0, then the ingestion package also needs to be 1.1.0.

The plugins parameter is a list of the sources that we want to ingest. An example would be openmetadata-ingestion[mysql,snowflake,s3]==1.1.0.
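
For instance, the PyPI packages added to the Composer environment could look like the following listing. The plugin list here is only illustrative; pick the connectors you need and match the version to your server:

```
openmetadata-ingestion[mysql,snowflake,s3]==1.3.1.0
sqlalchemy==1.4.27
```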

Note that this DAG is a usual connector DAG, just using the Airflow service with the Backend connection.

As an example of a DAG pushing data to OpenMetadata under Google SSO, we could have:
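
The sketch below ingests Composer's own pipeline metadata through the Airflow service with the Backend connection. The DAG name, service name, hostPort values, schedule, and the Google SSO service account secretKey path are placeholders to adapt to your setup:

```python
import yaml
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

from metadata.workflow.metadata import MetadataWorkflow

default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "execution_timeout": timedelta(minutes=60),
}

# Placeholder workflow YAML: ingest Composer's own metadata via the Airflow
# service with the Backend connection. Replace the service name, the
# OpenMetadata hostPort and the Google SSO secretKey path with your own values.
config = """
source:
  type: airflow
  serviceName: airflow_gcs_composer
  serviceConnection:
    config:
      type: Airflow
      hostPort: http://localhost:8080
      numberOfStatus: 10
      connection:
        type: Backend
  sourceConfig:
    config:
      type: PipelineMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO
  openMetadataServerConfig:
    hostPort: "http://openmetadata-server:8585/api"
    authProvider: google
    securityConfig:
      secretKey: /path/to/service-account-key.json
"""


def metadata_ingestion_workflow():
    # Build the workflow from the YAML above and run it
    workflow_config = yaml.safe_load(config)
    workflow = MetadataWorkflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()


with DAG(
    "openmetadata_airflow_ingestion",
    default_args=default_args,
    description="Ingest Composer metadata into OpenMetadata",
    start_date=datetime(2021, 1, 1),
    is_paused_upon_creation=False,
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="ingest_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
```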

We have different classes for different types of workflows. The logic is always the same, but you will need to change your import path. The rest of the method calls will remain the same.

For example, for the Metadata workflow we'll use:
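
```python
# Metadata workflow import, as used in the DAG above
from metadata.workflow.metadata import MetadataWorkflow
```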

The classes for each workflow type are:

  • Metadata: from metadata.workflow.metadata import MetadataWorkflow
  • Lineage: from metadata.workflow.metadata import MetadataWorkflow (same as metadata)
  • Usage: from metadata.workflow.usage import UsageWorkflow
  • dbt: from metadata.workflow.metadata import MetadataWorkflow
  • Profiler: from metadata.workflow.profiler import ProfilerWorkflow
  • Data Quality: from metadata.workflow.data_quality import TestSuiteWorkflow
  • Data Insights: from metadata.workflow.data_insight import DataInsightWorkflow
  • Elasticsearch Reindex: from metadata.workflow.metadata import MetadataWorkflow (same as metadata)
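
For instance, a sketch of the ingestion callable for a profiler run: only the import and the class change, and the config placeholder would hold a profiler workflow YAML written as per the official docs:

```python
import yaml

from metadata.workflow.profiler import ProfilerWorkflow

# Placeholder: fill in a profiler workflow YAML following the OpenMetadata docs
config = """
<profiler workflow YAML>
"""


def profiler_workflow():
    # Same method calls as the metadata workflow; only the class changes
    workflow_config = yaml.safe_load(config)
    workflow = ProfilerWorkflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()
```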

In this second approach we won't need to install anything in the GCS Composer environment. Instead, we will rely on the KubernetesPodOperator to use the underlying Kubernetes cluster of Composer.

The code then won't run directly in the host's environment, but rather inside a container that only bundles the openmetadata-ingestion package.

Note: the openmetadata/ingestion-base image is only available from version 0.12.1 onwards!
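
A sketch of such a DAG, assuming the cncf.kubernetes provider that ships with Composer (the KubernetesPodOperator import path can differ across provider versions); the namespace, image tag and the MySQL workflow YAML are placeholders:

```python
from datetime import datetime

from airflow import models
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# Placeholder workflow YAML: any connector config from the OpenMetadata docs
# works here, e.g. a MySQL source. Replace the connection details and the
# server config with your own values.
config = """
source:
  type: mysql
  serviceName: mysql_from_composer
  serviceConnection:
    config:
      type: Mysql
      username: openmetadata_user
      authType:
        password: openmetadata_password
      hostPort: mysql-host:3306
  sourceConfig:
    config:
      type: DatabaseMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://openmetadata-server:8585/api"
    authProvider: google
    securityConfig:
      secretKey: /path/to/service-account-key.json
"""

with models.DAG(
    "ingestion-k8s-operator",
    schedule_interval="@once",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    tags=["OpenMetadata"],
) as dag:
    ingest = KubernetesPodOperator(
        task_id="ingest",
        name="ingest",
        cmds=["python", "main.py"],  # do not change: main.py ships inside the image
        image="openmetadata/ingestion-base:0.13.2",
        namespace="default",
        env_vars={"config": config, "pipelineType": "metadata"},
    )
```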

Some remarks on this example code:

You can name the task as you want (task_id and name). The important pieces are the cmds, which should not be changed, and the env_vars. The main.py script shipped within the image will load the env vars as they are shown, so only modify the content of the config YAML, not this dictionary.

Note that the example uses the image openmetadata/ingestion-base:0.13.2. Update it accordingly for higher versions once they are released. Also, the image version should be aligned with your OpenMetadata server version to avoid incompatibilities.

You can find more information about the KubernetesPodOperator and how to tune its configurations here.

Note that, depending on the kind of workflow you will be deploying, the YAML configuration will need to be updated following the official OpenMetadata docs, and the value of the pipelineType configuration will need to hold one of the following values:

  • metadata
  • usage
  • lineage
  • profiler
  • TestSuite

These values are based on the PipelineType JSON Schema definitions.