Run the Snowplow Connector Externally

Snowplow (BETA)
In this section, we provide guides and references to use the Snowplow connector.
Configure and schedule Snowplow metadata workflow from the OpenMetadata UI:
How to Run the Connector Externally
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.
Requirements
Snowplow BDP (Business Data Platform)
For Snowplow BDP deployments, you'll need:
- Console URL: The URL of your Snowplow Console (e.g., https://console.snowplowanalytics.com)
- API Key: An API key with read access to your Snowplow organization
- Organization ID: Your Snowplow BDP organization identifier
Snowplow Community Edition
For self-hosted Community Edition deployments, you'll need:
- Configuration Path: The path to your Snowplow configuration files
- Iglu Server URL (optional): If you're using an Iglu Server for schema management
Ingestion Deployment
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment. If you want to install it manually in an already existing Airflow host, you can follow this guide.
If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check the following docs to run the Ingestion Framework in any orchestrator externally.
- Run Connectors from the OpenMetadata UI: Learn how to manage your deployment to run connectors from the UI.
- Run the Connector Externally: Get the YAML to run the ingestion externally.
- External Schedulers: Get more information about running the Ingestion Framework externally.

Metadata Ingestion
1. Define the YAML Config
This is a sample config for Snowplow:
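A minimal sketch of such a config, assuming a BDP deployment and the connection fields described in the sections below; the overall source/sink/workflowConfig layout follows the standard OpenMetadata ingestion YAML, and the service name, hostPort, and token values are placeholders to adapt to your environment:

```yaml
source:
  type: snowplow                       # service type, lowercased
  serviceName: snowplow_prod           # hypothetical service name in OpenMetadata
  serviceConnection:
    config:
      type: Snowplow
      deployment: BDP                  # or Community for self-hosted
      consoleUrl: https://console.snowplowanalytics.com
      apiKey: <your-api-key>
      organizationId: <your-organization-id>
      cloudProvider: AWS               # AWS, GCP, or Azure
  sourceConfig:
    config:
      type: PipelineMetadata
      includeTags: true
      markDeletedPipelines: true
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO                    # DEBUG for troubleshooting
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <your-jwt-token>
```

Each block is described field by field in the sections that follow.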
Source Configuration - Service Connection
- type: Must be Snowplow.
- deployment: Choose between BDP (managed) or Community (self-hosted).
- consoleUrl: Required for BDP deployment. The URL of your Snowplow Console.
- apiKey: Required for BDP deployment. Your Snowplow API key.
- organizationId: Required for BDP deployment. Your organization ID.
- configPath: Required for Community deployment. Path to configuration files.
- cloudProvider: The cloud provider where Snowplow is deployed (AWS, GCP, or Azure).
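For a self-hosted Community Edition, the serviceConnection block relies on configPath instead of the Console fields. A minimal sketch follows; the igluServerUrl field name is an assumption based on the optional Iglu Server URL mentioned in the Requirements, so verify it against your connector version:

```yaml
serviceConnection:
  config:
    type: Snowplow
    deployment: Community
    configPath: /etc/snowplow/config       # path to your Snowplow configuration files
    # Field name assumed from the Requirements section; check your connector version.
    # igluServerUrl: http://iglu.internal.example.com
    cloudProvider: GCP                     # AWS, GCP, or Azure
```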
Source Configuration - Pipeline Filter Pattern
To send the metadata of only selected pipelines, enter the regex pattern for pipeline names to include or exclude.
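For example, to ingest only pipelines whose names start with prod_ while skipping anything ending in _test (the names are illustrative), the filter could look like:

```yaml
pipelineFilterPattern:
  includes:
    - ^prod_.*
  excludes:
    - .*_test$
```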
Source Configuration - Source Config
The sourceConfig is defined here:
- dbServiceNames: Database Service Names for the creation of lineage, if the source supports it.
- includeTags: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- includeUnDeployedPipelines: Set the 'Include UnDeployed Pipelines' toggle to control whether to include un-deployed pipelines as part of metadata ingestion. By default, it is set to true.
- markDeletedPipelines: Set the 'Mark Deleted Pipelines' toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.
- pipelineFilterPattern and chartFilterPattern: Both support regex as include or exclude patterns.
- includeOwners: Set the 'Include Owners' toggle to control whether to add owners to the ingested entity when the owner email matches a user stored in the OpenMetadata server. If the ingested entity already exists and has an owner, the owner will not be overwritten. It supports boolean values, either true or false.
- overrideLineage: Set the 'Override Lineage' toggle to control whether to override the existing lineage. It supports boolean values, either true or false.
- overrideMetadata: Set the 'Override Metadata' toggle to control whether to override the existing metadata in the OpenMetadata server with the metadata fetched from the source. If set to true, the fetched metadata overrides the existing metadata; if set to false, it does not. This applies to fields like description, tags, owner, and displayName. It supports boolean values, either true or false.
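Putting the options above together, a sourceConfig block could look like the following sketch; the database service name is hypothetical and the toggle values are illustrative, not recommendations:

```yaml
sourceConfig:
  config:
    type: PipelineMetadata
    dbServiceNames:
      - snowflake_prod                 # hypothetical database service used to build lineage
    includeTags: true
    includeUnDeployedPipelines: true
    markDeletedPipelines: true
    includeOwners: true
    overrideLineage: false
    overrideMetadata: false
    pipelineFilterPattern:
      includes:
        - ^prod_.*
```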
Sink Configuration
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
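In the YAML, that is simply:

```yaml
sink:
  type: metadata-rest
  config: {}
```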
Workflow Configuration
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level
You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token
JWT tokens allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens here.
You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
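Combining the workflow settings above, the block typically looks like the following; hostPort and the JWT token are placeholders for your own installation:

```yaml
workflowConfig:
  loggerLevel: DEBUG                   # defaults to INFO; DEBUG adds troubleshooting traces
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <your-ingestion-bot-jwt-token>
```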
Store Service Connection
If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager.
If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won't be sent to the OpenMetadata server.
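To keep the connection details out of the server, the flag goes under openMetadataServerConfig; a sketch:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    storeServiceConnection: false      # defaults to true
    securityConfig:
      jwtToken: <your-jwt-token>
```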
SSL Configuration
If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or have it as validate, which will require you to set sslConfig.caCertificate to a local path, accessible where your ingestion runs, that points to the server certificate file.
Find more information on how to troubleshoot SSL issues here.
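For a server behind SSL, the same openMetadataServerConfig block gains verifySSL and sslConfig entries; the certificate path below is a placeholder:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: https://openmetadata.example.com/api
    authProvider: openmetadata
    verifySSL: validate                # or ignore to skip certificate validation
    sslConfig:
      caCertificate: /path/to/server-certificate.pem
    securityConfig:
      jwtToken: <your-jwt-token>
```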
ingestionPipelineFQN
Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
- You can learn more about how to configure and run the Ingestion Framework here.
2. Run the Command
After saving the YAML config, run the following command:
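Assuming you are using the metadata CLI that ships with the OpenMetadata ingestion package, the invocation is typically:

```bash
metadata ingest -c /path/to/snowplow-ingestion.yaml
```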
Data Model
The Snowplow connector extracts the following metadata:
- Pipelines: Each Snowplow pipeline is imported with its configuration
- Pipeline Components: Collectors, enrichments, and loaders are imported as pipeline tasks
- Event Schemas: Iglu schemas are imported as table entities showing the structure of events
- Lineage: Data flow from pipelines to destination tables is captured
Supported Destinations
The connector can track lineage to the following Snowplow loader destinations:
- Amazon Redshift
- Google BigQuery
- Snowflake
- Databricks
- PostgreSQL
- Amazon S3 (Data Lake)
- Google Cloud Storage
- Azure Data Lake Storage
Troubleshooting
Connection Errors
If you encounter connection errors:
- For BDP: Verify your API key has the necessary permissions and the organization ID is correct
- For Community: Ensure the configuration path exists and is readable
Missing Schemas
If Iglu schemas are not being imported:
- For BDP: Check that your API key has access to the Iglu repositories
- For Community: Verify the Iglu server URL is accessible or local schema files are present
Performance
For large deployments with many schemas:
- Use pipeline and schema filter patterns to limit the scope of ingestion
- Consider running the ingestion during off-peak hours