Run Deltalake using the Airflow SDK

In this section, we provide guides and references to use the Deltalake connector.

Configure and schedule Deltalake metadata and profiler workflows using the Airflow SDK.

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.

To run the Deltalake ingestion, you will need to install:

pip3 install "openmetadata-ingestion[deltalake]"
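
As an optional sanity check, you can confirm that the package is importable. The Workflow class used here is the same one the Airflow DAG at the end of this guide relies on:

# Optional sanity check: confirm the ingestion package installed correctly.
from metadata.ingestion.api.workflow import Workflow

print("openmetadata-ingestion is importable:", Workflow.__module__)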

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Deltalake.

To create and run a Metadata Ingestion workflow, we will create a YAML configuration that connects to the source, processes the Entities if needed, and reaches the OpenMetadata server.

The workflow is modeled around the following JSON Schema.

This is a sample config for Deltalake:

source:
  type: deltalake
  serviceName: "<service name>"
  serviceConnection:
    config:
      type: DeltaLake
      metastoreConnection:
        # Pick only one of the three
        metastoreHostPort: "<metastore host port>"
        # metastoreDb: jdbc:mysql://localhost:3306/demo_hive
        # metastoreFilePath: "<path_to_metastore>/metastore_db"
      appName: MyApp
  sourceConfig:
    config:
      markDeletedTables: true
      includeTables: true
      includeViews: true
      # includeTags: true
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
      # For DBT, choose one of Cloud, Local, HTTP, S3 or GCS configurations
      # dbtConfigSource:
      # # For cloud
      #   dbtCloudAuthToken: token
      #   dbtCloudAccountId: ID
      # # For Local
      #   dbtCatalogFilePath: path-to-catalog.json
      #   dbtManifestFilePath: path-to-manifest.json
      # # For HTTP
      #   dbtCatalogHttpPath: http://path-to-catalog.json
      #   dbtManifestHttpPath: http://path-to-manifest.json
      # # For S3
      #   dbtSecurityConfig:  # These are modeled after all AWS credentials
      #     awsAccessKeyId: KEY
      #     awsSecretAccessKey: SECRET
      #     awsRegion: us-east-2
      #   dbtPrefixConfig:
      #     dbtBucketName: bucket
      #     dbtObjectPrefix: "dbt/"
      # # For GCS
      #   dbtSecurityConfig:  # These are modeled after all GCS credentials
      #     type: My Type
      #     projectId: project ID
      #     privateKeyId: private key ID
      #     privateKey: |
      #      -----BEGIN PRIVATE KEY-----
      #      Super secret key
      #      -----END PRIVATE KEY-----
      #     clientEmail: client@mail.com
      #     clientId: 1234
      #     authUri: https://accounts.google.com/o/oauth2/auth (default)
      #     tokenUri: https://oauth2.googleapis.com/token (default)
      #     authProviderX509CertUrl: https://www.googleapis.com/oauth2/v1/certs (default)
      #     clientX509CertUrl: https://cert.url (URI)
      #   dbtPrefixConfig:
      #     dbtBucketName: bucket
      #     dbtObjectPrefix: "dbt/"
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: "<OpenMetadata host and port>"
    authProvider: "<OpenMetadata auth provider>"

Source Configuration - Service Connection

  • Metastore Host Port: Enter the host and port of the Hive Metastore Service to configure the Spark Session. One of metastoreHostPort, metastoreDb, or metastoreFilePath is required.
  • Metastore File Path: Enter the file path to a local Metastore in case the Spark cluster is running locally. One of metastoreHostPort, metastoreDb, or metastoreFilePath is required.
  • Metastore DB: The JDBC connection to the underlying Hive metastore DB. One of metastoreHostPort, metastoreDb, or metastoreFilePath is required.
  • appName (Optional): Enter the app name of the Spark session.
  • Connection Arguments (Optional): Key-value pairs that will be used to pass extra config elements to the Spark Session builder (see the sketch below).

We are internally running with PySpark 3.x and Delta Lake 2.0.0. This means that we need to consider the Spark configuration options for 3.x.
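
For illustration, extra Connection Arguments are applied to the Spark Session builder as additional .config() entries, roughly as in the sketch below. The keys and values shown are placeholders, not required settings:

# Illustrative sketch only: Connection Arguments become extra .config() entries
# on the Spark Session builder. The dictionary contents are placeholders.
from pyspark.sql import SparkSession

connection_arguments = {
    "spark.sql.warehouse.dir": "/tmp/spark-warehouse",
    "spark.executor.memory": "2g",
}

builder = SparkSession.builder.appName("MyApp")
for key, value in connection_arguments.items():
    builder = builder.config(key, value)

spark = builder.getOrCreate()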

Metastore Host Port

When connecting to an external Metastore by passing the Metastore Host Port parameter, we prepare a Spark Session with the configuration

.config("hive.metastore.uris", "thrift://{connection.metastoreHostPort}") 

Then, we will be using the catalog functions from the Spark Session to pick up the metadata exposed by the Hive Metastore.
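
A minimal sketch of what this looks like end to end, assuming a Hive Metastore reachable over Thrift (the host and port below are placeholders):

# Minimal sketch, assuming a Hive Metastore reachable over Thrift.
# Replace the host and port with your own values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("MyApp")
    .config("hive.metastore.uris", "thrift://localhost:9083")
    .enableHiveSupport()
    .getOrCreate()
)

# The connector then walks the Spark catalog to pick up databases and tables, roughly like this:
for db in spark.catalog.listDatabases():
    for table in spark.catalog.listTables(db.name):
        print(db.name, table.name, table.tableType)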

Metastore File Path

If instead we use a local file path that contains the metastore information (e.g., for local testing with the default metastore_db directory), we will set

.config("spark.driver.extraJavaOptions", "-Dderby.system.home={connection.metastoreFilePath}") 

to update the Derby information. More information about this can be found in a great SO thread.
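
The local file path case follows the same sketch as above, swapping the Thrift URI for the Derby option (the path below is a placeholder):

# Variant of the sketch above for a local metastore_db directory (placeholder path).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("MyApp")
    .config("spark.driver.extraJavaOptions", "-Dderby.system.home=/path/to/metastore")
    .enableHiveSupport()
    .getOrCreate()
)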

  • You can find all supported configurations here
  • If you need further information regarding the Hive metastore, you can find it here, and in The Internals of Spark SQL book.

Source Configuration - Source Config

The sourceConfig is defined here:

  • markDeletedTables: To flag tables as soft-deleted if they are not present anymore in the source system.
  • includeTables: true or false, to ingest table data. Default is true.
  • includeViews: true or false, to ingest view definitions.
  • databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Note that they support regex as include or exclude. E.g.,
tableFilterPattern:
  includes:
    - users
    - type_test
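
To make the include/exclude semantics concrete, here is a rough Python sketch of how such regex filters are commonly evaluated. The helper below is illustrative only, not the exact function used by openmetadata-ingestion:

import re

def is_filtered_out(name, includes=None, excludes=None):
    """Illustrative only: a name is dropped if it matches any exclude pattern,
    or if include patterns are given and it matches none of them."""
    if excludes and any(re.match(pattern, name) for pattern in excludes):
        return True
    if includes and not any(re.match(pattern, name) for pattern in includes):
        return True
    return False

# With the tableFilterPattern above, only 'users' and 'type_test' are kept.
for table in ["users", "type_test", "orders"]:
    kept = not is_filtered_out(table, includes=["users", "type_test"])
    print(table, "kept" if kept else "filtered out")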

Sink Configuration

To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.

Workflow Configuration

The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.

For a simple, local installation using our docker containers, this looks like:

workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth

We support different security providers. You can find their definitions here, and the different implementations of the ingestion below.

Configure SSO in the Ingestion Workflows

Create a Python file in your Airflow DAGs directory with the following contents:

import yaml
from datetime import timedelta

from airflow import DAG
from airflow.utils.dates import days_ago

try:
    # Airflow 2.x
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    # Airflow 1.10.x fallback
    from airflow.operators.python_operator import PythonOperator

from metadata.ingestion.api.workflow import Workflow

default_args = {
    "owner": "user_name",
    "email": ["username@org.com"],
    "email_on_failure": False,
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "execution_timeout": timedelta(minutes=60)
}

config = """
<your YAML configuration>
"""

def metadata_ingestion_workflow():
    # Parse the inline YAML above, build the ingestion workflow, run it,
    # and fail the task if the workflow reported any errors.
    workflow_config = yaml.safe_load(config)
    workflow = Workflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()

with DAG(
    "sample_data",
    default_args=default_args,
    description="An example DAG which runs an OpenMetadata ingestion workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    schedule_interval='*/5 * * * *',
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="ingest_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )

Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.

You can learn more about how to ingest DBT models' definitions and their lineage here.

Still have questions?

You can take a look at our Q&A or reach out to us in Slack.
