In this section, we provide guides and references to use the DeltaLake connector.
Configure and schedule DeltaLake metadata and profiler workflows from the OpenMetadata UI:
If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, you can check the following docs to connect using the Airflow SDK or the CLI.
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
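For reference, a minimal sketch of what running the same ingestion outside the UI can look like with the Python SDK. The file name `deltalake_ingestion.yaml` is a hypothetical placeholder, and the `Workflow` import path is an assumption that may differ between OpenMetadata releases; follow the Airflow SDK and CLI docs above for the exact setup.

```python
import yaml

# NOTE: the import path below is an assumption based on the OpenMetadata
# Python SDK; check the Airflow SDK / CLI docs referenced above for the
# exact import in your release.
from metadata.ingestion.api.workflow import Workflow

# Hypothetical path to a DeltaLake ingestion configuration file.
with open("deltalake_ingestion.yaml") as config_file:
    workflow_config = yaml.safe_load(config_file)

workflow = Workflow.create(workflow_config)
workflow.execute()
workflow.raise_from_status()
workflow.print_status()
workflow.stop()
```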
1. Visit the Services Page
The first step is ingesting the metadata from your sources. Under Settings, you will find a Services link. A Service connects an external source system to OpenMetadata. Once a service is created, it can be used to configure metadata, usage, and profiler workflows.
To visit the Services page, select Services from the Settings menu.
Find Services under the Settings menu
2. Create a New Service
Click on the Add New Service button to start the Service creation.
Add a new Service from the Services page
3. Select the Service Type
Select DeltaLake as the service type and click Next.
Select your service from the list
4. Name and Describe your Service
Provide a name and description for your service as illustrated below.
OpenMetadata uniquely identifies services by their Service Name. Provide a name that distinguishes your deployment from other services, including the other DeltaLake services that you might be ingesting metadata from.
Provide a Name and description for your Service
5. Configure the Service Connection
In this step, we will configure the connection settings required for this connector. Please follow the instructions below to ensure that you've configured the connector to read from your DeltaLake service as desired.
Configure the service connection by filling the form
Once the credentials have been added, click on Test Connection and Save the changes.
Test the connection and save the Service
- Metastore Host Port: Enter the host and port of the Hive Metastore used to configure the Spark Session. Either the Metastore Host Port or the Metastore File Path is required.
- Metastore File Path: Enter the file path to a local Metastore in case the Spark cluster is running locally. Either the Metastore Host Port or the Metastore File Path is required.
- appName (Optional): Enter the app name of the Spark session.
- Connection Arguments (Optional): Key-value pairs that will be used to pass extra config elements to the Spark Session builder.
We are internally running with pyspark 3.X and delta-lake 2.0.0. This means that we need to consider Spark configuration options for 3.X; a minimal sketch of such a session follows after the links below.
- You can find all supported configurations here
- If you need further information regarding the Hive metastore, you can find it here, and in The Internals of Spark SQL book.
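As a rough illustration of how these parameters feed into the Spark session, here is a minimal sketch. The host, port, and app name values are placeholders, and the Delta and Hive settings shown are the standard ones for pyspark 3.X with delta-lake 2.0.0 rather than the connector's exact internals.

```python
from pyspark.sql import SparkSession

# Placeholder value; substitute your own Hive Metastore host and port.
METASTORE_HOST_PORT = "thrift://metastore-host:9083"

spark = (
    SparkSession.builder
    .appName("OpenMetadataDeltaLakeIngestion")  # maps to the optional appName field
    # Standard Delta Lake settings for pyspark 3.X / delta-lake 2.0.0
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    # Metastore Host Port option: point the session at the Hive Metastore.
    # For the Metastore File Path option the relevant properties differ;
    # see the supported configurations link above.
    .config("hive.metastore.uris", METASTORE_HOST_PORT)
    .enableHiveSupport()
    .getOrCreate()
)
```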
6. Schedule the Ingestion and Deploy
Scheduling can be set up at an hourly, daily, or weekly cadence. The timezone is in UTC. Select a Start Date to schedule the ingestion. Adding an End Date is optional.
Review your configuration settings. If they match what you intended, click Deploy to create the service and schedule metadata ingestion.
If something doesn't look right, click the Back button to return to the appropriate step and change the settings as needed.
Schedule the Ingestion Pipeline and Deploy
After configuring the workflow, you can click on Deploy to create the pipeline.
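If you deploy through the Airflow SDK instead of the UI, the same cadence is expressed as a DAG schedule. A minimal sketch, where the DAG id, dates, and cron expression are placeholders (all times in UTC):

```python
from datetime import datetime
from airflow import DAG

# "0 0 * * *" runs daily at midnight UTC; "0 * * * *" would be hourly and
# "0 0 * * 0" weekly. The dag_id and dates below are placeholders.
with DAG(
    dag_id="deltalake_metadata_ingestion",
    schedule_interval="0 0 * * *",
    start_date=datetime(2023, 1, 1),
    end_date=None,  # an End Date is optional
    catchup=False,
) as dag:
    # The ingestion task itself would be defined here; see the Airflow SDK docs.
    pass
```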
7. View the Ingestion Pipeline
Once the workflow has been successfully deployed, you can view the Ingestion Pipeline running from the Service Page.
View the Ingestion Pipeline from the Service Page
8. Workflow Deployment Error
If there were any errors during the workflow deployment process, the Ingestion Pipeline Entity will still be created, but no workflow will be present in the Ingestion container.
You can then edit the Ingestion Pipeline and Deploy it again.
Edit and Deploy the Ingestion Pipeline
From the Connection tab, you can also Edit the Service if needed.