In this section, we provide guides and references to use the Nifi connector.
Configure and schedule Nifi metadata workflows from the OpenMetadata UI:
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.

You will need OpenMetadata 0.12 or later. To deploy OpenMetadata, check the Deployment guides.
To run the Nifi ingestion, you will need to install:
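The connector ships as a pip extra of the openmetadata-ingestion package; a minimal sketch of the install, assuming a standard Python 3 environment:

```bash
# Install the ingestion framework together with the Nifi plugin
pip3 install "openmetadata-ingestion[nifi]"
```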
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Nifi.
To create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration that connects to the source, processes the Entities if needed, and reaches the OpenMetadata server.

The workflow is modeled around the following JSON Schema.
This is a sample config for Nifi:
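A minimal sketch using basic authentication; the host names, credentials, and JWT token below are placeholders to replace with your own values:

```yaml
source:
  type: nifi
  serviceName: local_nifi
  serviceConnection:
    config:
      type: Nifi
      hostPort: https://localhost:8443
      nifiConfig:
        username: <username>
        password: <password>
        verifySSL: false
  sourceConfig:
    config:
      type: PipelineMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt_token>
```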
- hostPort: Pipeline Service Management UI URL.
- nifiConfig: one of
  1. Using Basic authentication
     - username: Username to connect to Nifi. This user should be able to send requests to the Nifi API and access the Resources endpoint.
     - password: Password to connect to Nifi.
     - verifySSL: Whether SSL verification should be performed when authenticating.
  2. Using client certificate authentication (see the sketch after this list)
     - certificateAuthorityPath: Path to the certificate authority (CA) file, i.e. the certificate used to store and issue your digital certificate. This parameter is optional; if omitted, SSL verification will be skipped, which can present a severe security issue.
     - clientCertificatePath: Path to the client certificate file.
     - clientkeyPath: Path to the client key file.

Important: the CA, client certificate, and client key files must all be accessible from where the ingestion workflow is running. For example, if you are using the OpenMetadata Ingestion Docker container, these files should be present inside that container.
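For the client certificate option, the serviceConnection block would look along these lines; the paths below are illustrative only:

```yaml
  serviceConnection:
    config:
      type: Nifi
      hostPort: https://localhost:8443
      nifiConfig:
        certificateAuthorityPath: /path/to/CA.cert
        clientCertificatePath: /path/to/client.cert
        clientkeyPath: /path/to/client.key
```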
sourceConfig is defined here:
- dbServiceNames: Database Service Names for the creation of lineage, if the source supports it.
- includeTags: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- markDeletedPipelines: Set the 'Mark Deleted Pipelines' toggle to flag pipelines as soft-deleted if they are no longer present in the source system.
- pipelineFilterPattern and chartFilterPattern: Note that the pipelineFilterPattern and chartFilterPattern both support regex as include or exclude.
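As a sketch, these options map onto the sourceConfig block as follows; the service name and regex values are examples only:

```yaml
  sourceConfig:
    config:
      type: PipelineMetadata
      dbServiceNames:
        - local_mysql
      includeTags: true
      markDeletedPipelines: true
      pipelineFilterPattern:
        includes:
          - ".*etl.*"
      chartFilterPattern:
        excludes:
          - "deprecated.*"
```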
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.

The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
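For reference, these map to the sink and workflowConfig sections of the YAML, as in the sample config above:

```yaml
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
```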
You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
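For example, to raise the verbosity while troubleshooting, you could set:

```yaml
workflowConfig:
  loggerLevel: DEBUG
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
```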
JWT tokens will allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens here.

You can refer to the JWT Troubleshooting section for help with any issues in your JWT configuration.
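A sketch of passing a bot JWT token through the securityConfig block; the token value is a placeholder:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <bot_jwt_token>
```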
If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or have it as validate, which will require you to set the sslConfig.certificatePath with a local path, accessible from where your ingestion runs, that points to the server certificate file.
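A sketch of both options; the certificate path is illustrative:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: https://localhost:8585/api
    authProvider: openmetadata
    # Option 1: skip certificate validation
    verifySSL: ignore
    # Option 2: validate against a local copy of the server certificate
    # verifySSL: validate
    # sslConfig:
    #   certificatePath: /local/path/to/server.pem
```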
Find more information on how to troubleshoot SSL issues here.
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
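Assuming the metadata CLI that ships with the ingestion package:

```bash
metadata ingest -c <path_to_yaml>
```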
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.