Run the Snowplow Connector Externally

Snowplow (BETA)

Feature List: Pipelines, Pipeline Status, Lineage, Owners, Tags

In this section, we provide guides and references to use the Snowplow connector.

Configure and schedule Snowplow metadata workflows from the OpenMetadata UI:

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.

If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.

For Snowplow BDP deployments, you'll need:

  • Console URL: The URL of your Snowplow Console (e.g., https://console.snowplowanalytics.com)
  • API Key: An API key with read access to your Snowplow organization
  • Organization ID: Your Snowplow BDP organization identifier

For self-hosted Community Edition deployments, you'll need:

  • Configuration Path: The path to your Snowplow configuration files
  • Iglu Server URL (optional): If you're using an Iglu Server for schema management


This is a sample config for Snowplow:
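A minimal sketch of the full workflow YAML, assembled from the fields described below and the standard Ingestion Framework layout, could look like this (the service name, credentials, hostPort, and token are placeholders; verify the field names against the JSON Schema shipped with your OpenMetadata release):

```yaml
source:
  type: snowplow
  serviceName: snowplow_source            # placeholder service name
  serviceConnection:
    config:
      type: Snowplow
      # BDP (managed) deployment
      deployment: BDP
      consoleUrl: https://console.snowplowanalytics.com
      apiKey: <snowplow-api-key>
      organizationId: <organization-id>
      cloudProvider: AWS
      # For a self-hosted Community deployment, use instead:
      # deployment: Community
      # configPath: /path/to/snowplow/config
  sourceConfig:
    config:
      type: PipelineMetadata
      includeTags: true
      includeOwners: true
      markDeletedPipelines: true
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <ingestion-bot-jwt-token>
```

The connection-level options that go under serviceConnection.config are the following: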

  • type: Must be Snowplow.
  • deployment: Choose between BDP (managed) or Community (self-hosted).
  • consoleUrl: Required for BDP deployment. The URL of your Snowplow Console.
  • apiKey: Required for BDP deployment. Your Snowplow API key.
  • organizationId: Required for BDP deployment. Your organization ID.
  • configPath: Required for Community deployment. Path to your configuration files.
  • cloudProvider: The cloud provider where Snowplow is deployed (AWS, GCP, or Azure).

To send the metadata of only selected pipelines, enter the regex pattern for pipeline names to include or exclude.

The sourceConfig is defined here:

  • dbServiceNames: Database Service Name for the creation of lineage, if the source supports it.

  • includeTags: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.

  • includeUnDeployedPipelines: Set the 'Include UnDeployed Pipelines' toggle to control whether to include un-deployed pipelines as part of metadata ingestion. By default, it is set to true.

  • markDeletedPipelines: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.

  • pipelineFilterPattern and chartFilterPattern: Both support regex patterns to include or exclude pipelines and charts.

  • includeOwners: Set the 'Include Owners' toggle to control whether to add owners to the ingested entity when the owner email matches a user stored in the OpenMetadata server. If the ingested entity already exists and has an owner, the owner will not be overwritten. It supports boolean values (true or false).

  • overrideLineage: Set the 'Override Lineage' toggle to control whether to override the existing lineage. It supports boolean values (true or false).

  • overrideMetadata: Set the 'Override Metadata' toggle to control whether to override the existing metadata in the OpenMetadata server with the metadata fetched from the source. If set to true, the metadata fetched from the source overrides the existing metadata; if set to false, it does not. This applies to fields such as description, tags, owner, and displayName. It supports boolean values (true or false).
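Putting those options together, a sourceConfig block could look like the sketch below (the database service name and regex patterns are hypothetical placeholders):

```yaml
sourceConfig:
  config:
    type: PipelineMetadata
    dbServiceNames:
      - snowflake_prod             # hypothetical database service used to build lineage
    includeTags: true
    includeOwners: true
    includeUnDeployedPipelines: true
    markDeletedPipelines: true
    overrideLineage: false
    overrideMetadata: false
    pipelineFilterPattern:
      includes:
        - ".*prod.*"               # ingest only pipelines matching this regex
      excludes:
        - ".*test.*"               # skip pipelines matching this regex
```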

To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.

The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.

Logger Level

You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
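For example, to get the extra traces while troubleshooting, you could raise the level in the workflowConfig section (a sketch; the default is INFO):

```yaml
workflowConfig:
  loggerLevel: DEBUG   # e.g. DEBUG for verbose troubleshooting output
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
```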

JWT Token

JWT tokens will allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens here.

You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
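A workflowConfig using a JWT token typically looks like the following sketch (the host and token values are placeholders):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <ingestion-bot-jwt-token>
```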

Store Service Connection

If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager.

If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won't be sent to the OpenMetadata server.
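As a sketch, the toggle sits alongside the rest of the server configuration (verify the exact property placement against your OpenMetadata version):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    storeServiceConnection: false   # connection details stay with the Ingestion Framework at runtime
```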


SSL Configuration

If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or set it to validate, which will require you to point sslConfig.caCertificate to a local path, reachable from where your ingestion runs, that contains the server certificate file.
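A sketch of the SSL-related settings (the host and certificate path are placeholders on the machine where the ingestion runs):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: https://openmetadata.example.com/api
    authProvider: openmetadata
    verifySSL: validate             # or: ignore
    sslConfig:
      caCertificate: /local/path/to/server-certificate.pem
```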

Find more information on how to troubleshoot SSL issues here.

ingestionPipelineFQN

The fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
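For example, a hypothetical value following the <serviceName>.<pipelineName> convention:

```yaml
ingestionPipelineFQN: snowplow_source.snowplow_metadata_ingestion
```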

You can learn more about how to configure and run the Ingestion Framework here.

After saving the YAML config, run the following command:
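Assuming the metadata CLI from the openmetadata-ingestion package is installed in the environment where the workflow will run, this is typically:

```bash
metadata ingest -c <path_to_yaml>
```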

The Snowplow connector extracts the following metadata:

  • Pipelines: Each Snowplow pipeline is imported with its configuration
  • Pipeline Components: Collectors, enrichments, and loaders are imported as pipeline tasks
  • Event Schemas: Iglu schemas are imported as table entities showing the structure of events
  • Lineage: Data flow from pipelines to destination tables is captured

The connector can track lineage to the following Snowplow loader destinations:

  • Amazon Redshift
  • Google BigQuery
  • Snowflake
  • Databricks
  • PostgreSQL
  • Amazon S3 (Data Lake)
  • Google Cloud Storage
  • Azure Data Lake Storage

If you encounter connection errors:

  1. For BDP: Verify your API key has the necessary permissions and the organization ID is correct
  2. For Community: Ensure the configuration path exists and is readable

If Iglu schemas are not being imported:

  1. For BDP: Check that your API key has access to the Iglu repositories
  2. For Community: Verify the Iglu server URL is accessible or local schema files are present

For large deployments with many schemas:

  • Use pipeline and schema filter patterns to limit the scope of ingestion
  • Consider running the ingestion during off-peak hours