Run Airflow using the metadata CLI
In this section, we provide guides and references to use the Airflow connector.
Configure and schedule Airflow metadata and profiler workflows from the CLI:
Requirements
OpenMetadata 0.12 or later. To deploy OpenMetadata, check the Deployment guides.
To run the Ingestion via the UI, you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
Python Requirements
To run the Airflow ingestion, you will need to install:
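```bash
# Install the OpenMetadata ingestion package with the Airflow plugin.
# If your server is not on the latest 0.12 release, pin the package to the
# matching version (e.g. "openmetadata-ingestion[airflow]==0.12.*").
pip3 install "openmetadata-ingestion[airflow]"
```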
Note that this installs the same Airflow version that we ship in the Ingestion Container, which is Airflow 2.3.3 from Release 0.12.
The ingestion, using Airflow 2.3.3 as a source package, has been tested against Airflow 2.3.3 and Airflow 2.2.5.
Note: we only support officially supported Airflow versions. You can check the version list here.
Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Airflow.
In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
1. Define the YAML Config
This is a sample config for Airflow:
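```yaml
# Sketch of an Airflow ingestion recipe. Hosts, credentials and the service
# name are placeholders; check the JSON Schema linked above for the exact
# fields supported by your OpenMetadata version.
source:
  type: airflow
  serviceName: airflow_source
  serviceConnection:
    config:
      type: Airflow
      hostPort: "http://localhost:8080"
      numberOfStatus: 10
      connection:
        # Airflow metadata database backend; see the Service Connection
        # section below for the supported options.
        type: Mysql
        username: airflow_user
        password: airflow_pass
        hostPort: "localhost:3306"
        databaseSchema: airflow_db
  sourceConfig:
    config:
      type: PipelineMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```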
Source Configuration - Service Connection
connection: Airflow metadata database connection. See these docs for supported backends.
In terms of connection, we support the following selections:
backend: Should not be used from the UI. This is only applicable when ingesting Airflow metadata locally by running the ingestion from a DAG. It will use the current Airflow SQLAlchemy connection to extract the data.
MySQL, Postgres, MSSQL and SQLite: Pass the required credentials to reach each of these services. We will create a connection to the pointed database and read the Airflow data from there.
hostPort: URL to the Airflow instance.
numberOfStatus: Number of statuses we want to look back at in every ingestion (e.g., past executions from a DAG).
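For example, a connection block pointing at a Postgres-backed Airflow metadata database could look like the sketch below; credentials and host are placeholders, and the exact field names should be verified against the connection JSON Schema of your version:
```yaml
      # Nested under serviceConnection.config in the full recipe.
      connection:
        type: Postgres
        username: airflow_user
        password: airflow_pass
        hostPort: "localhost:5432"
        database: airflow_db
```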
Source Configuration - Source Config
The sourceConfig is defined here:
dbServiceNames: Database Service Names for the creation of lineage, if the source supports it.
includeTags: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
markDeletedPipelines: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.
pipelineFilterPattern and chartFilterPattern: Note that the pipelineFilterPattern and chartFilterPattern both support regex as include or exclude.
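Putting these options together, a sourceConfig block might look like the sketch below; the filter values and database service name are illustrative only:
```yaml
  # Nested under source: in the full recipe.
  sourceConfig:
    config:
      type: PipelineMetadata
      includeTags: true
      markDeletedPipelines: true
      dbServiceNames:
        - local_mysql
      pipelineFilterPattern:
        includes:
          - ".*etl.*"
        excludes:
          - ".*test.*"
```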
Sink Configuration
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
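In the YAML, that section is simply:
```yaml
sink:
  type: metadata-rest
  config: {}
```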
Workflow Configuration
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
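```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
```
The hostPort and provider above assume the default local Docker setup; adjust them to your deployment and add the securityConfig described below.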
Workflow Configs for Security Provider
We support different security providers. You can find their definitions here.
OpenMetadata JWT Auth
- JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT Tokens, you can find more details here.
- You can refer to the JWT Troubleshooting section for any issues in your JWT configuration. If you need information on configuring the ingestion with other security providers in your bots, you can follow this doc.
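With the OpenMetadata JWT provider, the securityConfig carries the bot token; a sketch, where the token value is a placeholder:
```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```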
2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
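```bash
# Replace the path with the location of the YAML file you just saved.
metadata ingest -c <path-to-yaml>
```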
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.