OpenMetadata Documentation
KinesisFirehose

Stage: PROD
Feature List:

  • Pipelines
  • Lineage
  • Owners
  • Tags
  • Pipeline Status

In this section, we provide guides and references to use the AWS Kinesis Firehose connector.

Configure and schedule AWS Kinesis Firehose metadata workflows from the OpenMetadata UI:

To run the ingestion via the UI, you'll need to use the OpenMetadata Ingestion Container, which ships with custom Airflow plugins to handle the workflow deployment. If you want to install it manually on an existing Airflow host, you can follow this guide.

If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check the following docs to run the Ingestion Framework in any orchestrator externally.

To extract metadata from AWS Kinesis Firehose, you need to configure AWS credentials with appropriate permissions:

  • AWS Credentials: Valid AWS credentials (Access Key ID and Secret Access Key) or IAM role with permissions to access Kinesis Firehose
  • Permissions Required:
    • firehose:DescribeDeliveryStream - To describe delivery stream details
    • firehose:ListDeliveryStreams - To list all delivery streams
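
The two permissions above can be granted with a minimal IAM policy along these lines (the policy below is an illustrative sketch; in production, scope `Resource` to the ARNs of the specific delivery streams you want to ingest):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:DescribeDeliveryStream",
        "firehose:ListDeliveryStreams"
      ],
      "Resource": "*"
    }
  ]
}
```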

We have support for Python versions 3.9-3.11

To run the AWS Kinesis Firehose ingestion, you will need to install:
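
Following the packaging convention used by other OpenMetadata connectors (the exact extra name is an assumption; check the connector's requirements for your OpenMetadata version):

```shell
pip3 install "openmetadata-ingestion[kinesisfirehose]"
```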

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to AWS Kinesis Firehose.

In order to create and run a Metadata Ingestion workflow, we will follow these steps to create a YAML configuration that connects to the source, processes the Entities if needed, and reaches the OpenMetadata server.

The workflow is modeled around the following JSON Schema.

This is a sample config for AWS Kinesis Firehose:
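
A minimal sketch of such a config is shown below. The `type` strings (`kinesisfirehose`, `KinesisFirehose`) and field nesting follow the conventions of other OpenMetadata pipeline connectors and are assumptions; verify them against the connector's JSON Schema. Commented-out fields are optional:

```yaml
source:
  type: kinesisfirehose
  serviceName: kinesis_firehose_source
  serviceConnection:
    config:
      type: KinesisFirehose
      awsConfig:
        awsAccessKeyId: <access-key-id>
        awsSecretAccessKey: <secret-access-key>
        awsRegion: us-east-2
        # awsSessionToken: <session-token>
        # endPointURL: https://firehose.us-east-2.amazonaws.com/
        # profileName: default
        # assumeRoleArn: arn:aws:iam::123456789012:role/FirehoseReadRole
        # assumeRoleSessionName: OpenMetadataSession
      # messagingServiceName: local_kafka
  sourceConfig:
    config:
      type: PipelineMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```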

awsAccessKeyId: The AWS Access Key ID identifies the access key pair used to authenticate API requests to AWS services; it is used together with the Secret Access Key. This field is optional if you are using IAM roles or AWS profiles for authentication.

awsSecretAccessKey: AWS Secret Access Key is a secret key that is used in combination with the Access Key ID to cryptographically sign API requests to AWS services. Keep this value secure and never share it. This is an optional field if you are using IAM roles or AWS profiles for authentication.

awsRegion: AWS Region where your Kinesis Firehose delivery streams are deployed. This is a required field.

awsSessionToken: AWS Session Token is a temporary credential that is required when using temporary security credentials. This field is optional.

endPointURL: Custom endpoint URL for AWS services. Leave this field empty to use the default AWS endpoints.

profileName: The name of an AWS profile configured in your AWS credentials file.

assumeRoleArn: The ARN of an IAM role to assume for accessing Kinesis Firehose resources.

assumeRoleSessionName: A unique identifier for the assumed role session. Default value: OpenMetadataSession

assumeRoleSourceIdentity: The source identity to associate with the assumed role session.

messagingServiceName: The name of the already-ingested Kafka Messaging Service that acts as the upstream source for this Firehose Pipeline Service.

pipelineFilterPattern: A regular expression pattern to filter which Kinesis Firehose delivery streams to include or exclude during metadata extraction.
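
Include/exclude filter patterns behave like anchored regular expression matches against the stream name. The sketch below illustrates those semantics in plain Python; it is not the ingestion framework's actual implementation, and the function name is hypothetical:

```python
import re


def filter_streams(stream_names, includes=None, excludes=None):
    """Illustrative include/exclude regex filtering.

    A name is kept if it matches at least one include pattern
    (when includes are given) and matches no exclude pattern.
    """
    result = []
    for name in stream_names:
        if includes and not any(re.match(p, name) for p in includes):
            continue
        if excludes and any(re.match(p, name) for p in excludes):
            continue
        result.append(name)
    return result


streams = ["orders-firehose", "clicks-firehose", "tmp-test-stream"]
# Keep only names containing "firehose", then drop anything starting with "tmp"
print(filter_streams(streams, includes=[".*firehose"], excludes=["tmp.*"]))
# → ['orders-firehose', 'clicks-firehose']
```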

The sourceConfig is defined here:

  • dbServiceNames: Database Service Name for the creation of lineage, if the source supports it.

  • includeTags: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.

  • includeUnDeployedPipelines: Set the 'Include UnDeployed Pipelines' toggle to control whether to include un-deployed pipelines as part of metadata ingestion. By default, it is set to true.

  • markDeletedPipelines: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.

  • pipelineFilterPattern and chartFilterPattern: Both pipelineFilterPattern and chartFilterPattern support regular expressions as include or exclude patterns.

  • includeOwners: Set the 'Include Owners' toggle to control whether to assign owners to the ingested entity when the owner email matches a user stored in the OM server. If the ingested entity already exists and has an owner, the owner will not be overwritten. It accepts boolean values (true or false).

  • overrideLineage: Set the 'Override Lineage' toggle to control whether to override the existing lineage. It accepts boolean values (true or false).

  • overrideMetadata: Set the 'Override Metadata' toggle to control whether to override the existing metadata in the OpenMetadata server with the metadata fetched from the source. If set to true, the fetched metadata overrides the existing metadata; if set to false, the existing metadata is kept. This applies to fields like description, tags, owner, and displayName. It accepts boolean values (true or false).
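
Taken together, the sourceConfig options above might look like this in YAML (the field names follow the toggles described; the filter values are illustrative):

```yaml
sourceConfig:
  config:
    type: PipelineMetadata
    includeTags: true
    includeOwners: true
    includeUnDeployedPipelines: true
    markDeletedPipelines: true
    overrideLineage: false
    overrideMetadata: false
    pipelineFilterPattern:
      includes:
        - "prod-.*"
      excludes:
        - ".*-test"
```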

To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.

The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.

Logger Level

You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
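
For example, to turn on verbose traces while troubleshooting (hostPort shown with the default local value):

```yaml
workflowConfig:
  loggerLevel: DEBUG  # DEBUG, INFO, WARN, or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
```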

JWT Token

JWT tokens allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens here.
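
In the workflow YAML, the token is passed through the securityConfig (the `{bot_jwt_token}` value is a placeholder for your bot's token):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```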

You can refer to the JWT Troubleshooting section link for any issues in your JWT configuration.

Store Service Connection

If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager.

If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won't be sent to the OpenMetadata server.

SSL Configuration

If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or have it as validate, which will require you to set the sslConfig.caCertificate with a local path where your ingestion runs that points to the server certificate file.
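
Sketched in YAML, with a hypothetical hostname and certificate path:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "https://openmetadata.example.com/api"
    authProvider: openmetadata
    verifySSL: validate  # or "ignore" to skip certificate verification
    sslConfig:
      caCertificate: /local/path/to/server-cert.pem
```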

Find more information on how to troubleshoot SSL issues here.

ingestionPipelineFQN

Fully qualified name of ingestion pipeline, used to identify the current ingestion pipeline.

filename.yaml

First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
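
With the YAML saved (here as filename.yaml) and all requirements installed, the workflow is triggered with the metadata CLI:

```shell
metadata ingest -c filename.yaml
```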

Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.