Run Snowflake using the metadata CLI
| Feature | Status |
| --- | --- |
| Stage | PROD |
| Metadata | ✅ |
| Query Usage | ✅ |
| Data Profiler | ✅ |
| Data Quality | ✅ |
| Lineage | ✅ |
| DBT | ✅ |
| Supported Versions | -- |
| Feature | Status |
| --- | --- |
| Lineage | ✅ |
| Table-level | ✅ |
| Column-level | ✅ |
In this section, we provide guides and references to use the Snowflake connector.
Configure and schedule Snowflake metadata and profiler workflows from the CLI:
Requirements
OpenMetadata 0.12 or later. To deploy OpenMetadata, check the Deployment guides.
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
Python Requirements
To run the Snowflake ingestion, you will need to install:
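A typical install command looks like the following; the snowflake extra is the standard openmetadata-ingestion package extra, and you should pin the version that matches your OpenMetadata server:

```shell
pip3 install "openmetadata-ingestion[snowflake]"
```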
If you want to run the Usage Connector, you'll also need to install:
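The Usage workflow ships as a separate package extra (extra name assumed from the standard openmetadata-ingestion packaging):

```shell
pip3 install "openmetadata-ingestion[snowflake-usage]"
```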
To ingest basic metadata, the Snowflake user must have the following privileges:
- USAGE privilege on the Warehouse
- USAGE privilege on the Database
- USAGE privilege on the Schema
- SELECT privilege on Tables
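As a sketch, the privileges above can be granted to a dedicated role along these lines (the role, warehouse, database, and user names are illustrative; adapt them to your environment):

```sql
-- Illustrative names: adjust the role, warehouse, database and user to your setup
CREATE ROLE OPENMETADATA_ROLE;
GRANT USAGE ON WAREHOUSE COMPUTE_WH TO ROLE OPENMETADATA_ROLE;
GRANT USAGE ON DATABASE MY_DATABASE TO ROLE OPENMETADATA_ROLE;
GRANT USAGE ON ALL SCHEMAS IN DATABASE MY_DATABASE TO ROLE OPENMETADATA_ROLE;
GRANT SELECT ON ALL TABLES IN DATABASE MY_DATABASE TO ROLE OPENMETADATA_ROLE;
GRANT ROLE OPENMETADATA_ROLE TO USER OPENMETADATA_USER;
```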
While running the usage workflow, OpenMetadata fetches the query logs by querying the snowflake.account_usage.query_history table. For this, the Snowflake user should be granted the ACCOUNTADMIN role, or a role granted IMPORTED PRIVILEGES on the SNOWFLAKE database.
If ingesting tags, the user should also have permissions to query snowflake.account_usage.tag_references. For this, the Snowflake user should be granted the ACCOUNTADMIN role, or a role granted IMPORTED PRIVILEGES on the SNOWFLAKE database.
You can find more information about the account_usage schema here.
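A sketch of the corresponding grant (role name illustrative):

```sql
-- Grants read access to the shared SNOWFLAKE database, including the account_usage views
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE OPENMETADATA_ROLE;
```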
Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Snowflake.
In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration that can connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
1. Define the YAML Config
This is a sample config for Snowflake:
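A minimal sketch of such a config, assuming password-based authentication and the option names described below (placeholders in angle brackets are yours to fill in):

```yaml
source:
  type: snowflake
  serviceName: <service name>
  serviceConnection:
    config:
      type: Snowflake
      username: <username>
      password: <password>
      account: <account identifier, e.g. xyz1234.us-east-1.gcp>
      warehouse: <warehouse>
      # database: <database>   # optional: restrict ingestion to a single database
      # role: <role>           # optional: role to ingest with
  sourceConfig:
    config:
      type: DatabaseMetadata
      markDeletedTables: true
      includeTables: true
      includeViews: true
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```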
Source Configuration - Service Connection
username: Specify the User to connect to Snowflake. It should have enough privileges to read all the metadata.
password: Password to connect to Snowflake.
warehouse: A Snowflake warehouse is required for executing queries to fetch the metadata. Enter the name of the warehouse against which you would like to execute these queries.
account: The Snowflake account identifier uniquely identifies a Snowflake account within your organization, as well as throughout the global network of Snowflake-supported cloud platforms and cloud regions. If the Snowflake URL is https://xyz1234.us-east-1.gcp.snowflakecomputing.com, then the account is xyz1234.us-east-1.gcp.
database: Optional parameter to restrict metadata reading to a single database. If left blank, OpenMetadata ingestion attempts to scan all the databases.
includeTempTables: Optional configuration for ingestion of TRANSIENT and TEMPORARY tables. By default, TRANSIENT and TEMPORARY tables are skipped.
privateKey: If you have configured key pair authentication for the given user, you will have to pass the private key associated with the user in this field. You can check out this doc to get more details about key-pair authentication.
- The multi-line key needs to be converted to a single line with \n for line endings, i.e. -----BEGIN ENCRYPTED PRIVATE KEY-----\nMII...\n...\n-----END ENCRYPTED PRIVATE KEY-----
snowflakePrivatekeyPassphrase: If you have configured encrypted key pair authentication for the given user, you will have to pass the passphrase associated with the private key in this field. You can check out this doc to get more details about key-pair authentication.
role: You can specify the role of the user that you would like to ingest with. If no role is specified, the default role assigned to the user will be selected.
Source Configuration - Source Config
The sourceConfig is defined here:
markDeletedTables: To flag tables as soft-deleted if they are not present anymore in the source system.
includeTables: true or false, to ingest table data. Default is true.
includeViews: true or false, to ingest view definitions.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Note that the filter supports regex as include or exclude. You can find examples here.
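For instance, a schema filter inside the sourceConfig could look like this (the regexes are illustrative):

```yaml
sourceConfig:
  config:
    type: DatabaseMetadata
    schemaFilterPattern:
      includes:
        - ^PUBLIC$
      excludes:
        - .*_STAGING$
```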
Sink Configuration
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
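In the YAML this is simply:

```yaml
sink:
  type: metadata-rest
  config: {}
```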
Workflow Configuration
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
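A sketch, assuming the default local hostPort and the OpenMetadata auth provider (see the security section below for the securityConfig block):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
```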
Advanced Configuration
Connection Options (Optional): Enter the details for any additional connection options that can be sent to Snowflake during the connection. These details must be added as Key-Value pairs.
Connection Arguments (Optional): Enter the details for any additional connection arguments such as security or protocol configs that can be sent to Snowflake during the connection. These details must be added as Key-Value pairs.
- In case you are using Single-Sign-On (SSO) for authentication, add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator" : "sso_login_url"
- In case you authenticate with SSO using an external browser popup, then add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator" : "externalbrowser"
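In the YAML config, these pairs land under connectionArguments of the service connection, for example (a sketch):

```yaml
serviceConnection:
  config:
    type: Snowflake
    connectionArguments:
      authenticator: externalbrowser
```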
Workflow Configs for Security Provider
We support different security providers. You can find their definitions here.
OpenMetadata JWT Auth
- JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you can find more details here.
- You can refer to the JWT Troubleshooting section for any issues in your JWT configuration. If you need information on configuring the ingestion with other security providers in your bots, you can follow this doc.
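A sketch of the workflowConfig with a bot JWT token (the token placeholder is yours to replace):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```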
2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
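With the openmetadata-ingestion package installed, the recipe is run through the metadata CLI:

```shell
metadata ingest -c <path-to-yaml>
```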
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.
Query Usage
The Query Usage workflow will be using the query-parser processor.
After running a Metadata Ingestion workflow, we can run the Query Usage workflow. The serviceName should be the same as the one used in the Metadata Ingestion, so that the ingestion bot can get the serviceConnection details from the server.
1. Define the YAML Config
This is a sample config for Snowflake Usage:
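A minimal sketch, assuming the usage workflow layout with the query-parser processor, a temporary staging file, and the metadata-usage bulk sink (file paths are illustrative):

```yaml
source:
  type: snowflake-usage
  serviceName: <service name used during metadata ingestion>
  sourceConfig:
    config:
      type: DatabaseUsage
      queryLogDuration: 1
processor:
  type: query-parser
  config: {}
stage:
  type: table-usage
  config:
    filename: /tmp/snowflake_usage
bulkSink:
  type: metadata-usage
  config:
    filename: /tmp/snowflake_usage
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```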
Source Configuration - Source Config
You can find all the definitions and types for the sourceConfig here.
queryLogDuration: Configuration to tune how far we want to look back in query logs to process usage data.
stageFileLocation: Temporary file name to store the query logs before processing. Absolute file path required.
resultLimit: Configuration to set the limit for query logs
queryLogFilePath: Configuration to set the file path for query logs
2. Run with the CLI
There is an extra requirement to run the Usage pipelines. You will need to install:
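The usage extra of the openmetadata-ingestion package covers this:

```shell
pip3 install "openmetadata-ingestion[snowflake-usage]"
```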
After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
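```shell
metadata ingest -c <path-to-yaml>
```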
Data Profiler
The Data Profiler workflow will be using the orm-profiler processor.
After running a Metadata Ingestion workflow, we can run the Data Profiler workflow. The serviceName should be the same as the one used in the Metadata Ingestion, so that the ingestion bot can get the serviceConnection details from the server.
1. Define the YAML Config
This is a sample config for the profiler:
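A minimal sketch, assuming the service already exists in OpenMetadata so the connection details can be fetched from the server:

```yaml
source:
  type: snowflake
  serviceName: <service name used during metadata ingestion>
  sourceConfig:
    config:
      type: Profiler
      generateSampleData: true
processor:
  type: orm-profiler
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```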
Source Configuration - Source Config
You can find all the definitions and types for the sourceConfig here.
generateSampleData: Option to turn on/off generating sample data.
profileSample: Percentage of data or number of rows on which we want to execute the profiler and tests.
threadCount: Number of threads to use during metric computations.
processPiiSensitive: Optional configuration to automatically tag columns that might contain sensitive information.
confidence: Set the confidence value above which you want a column to be marked as PII sensitive.
timeoutSeconds: Profiler timeout in seconds.
databaseFilterPattern: Regex to only fetch databases that match the pattern.
schemaFilterPattern: Regex to only fetch schemas that match the pattern.
tableFilterPattern: Regex to only fetch tables that match the pattern.
Processor Configuration
Choose the orm-profiler. Its config can also be updated to define tests from the YAML itself instead of the UI:
tableConfig: tableConfig allows you to set up some configuration at the table level.
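A sketch of a per-table override (the fullyQualifiedName and sample value are illustrative, and the exact tableConfig fields depend on your OpenMetadata version):

```yaml
processor:
  type: orm-profiler
  config:
    tableConfig:
      - fullyQualifiedName: <service>.<database>.<schema>.<table>
        profileSample: 50
```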
- You can learn more about how to configure and run the Profiler Workflow to extract Profiler data and execute the Data Quality from here
2. Prepare the Profiler DAG
Here, we follow a similar approach as with the metadata and usage pipelines, although we will use a different Workflow class:
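A sketch of such a DAG, assuming the 0.12-era import path for ProfilerWorkflow and an Airflow 2.x environment (adjust names, owner, and schedule to your setup):

```python
import yaml
from datetime import timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago

# Import path assumed from the 0.12-era ingestion package; adjust to your installed version.
from metadata.orm_profiler.api.workflow import ProfilerWorkflow

# Default arguments applied to every task in the DAG.
default_args = {
    "owner": "user_name",
    "email_on_failure": False,
    "retries": 3,
    "retry_delay": timedelta(seconds=10),
    "execution_timeout": timedelta(minutes=60),
}

# Paste the profiler YAML config prepared above into this string.
config = """
<your YAML configuration>
"""

def metadata_ingestion_workflow():
    """Load the YAML config, run the ProfilerWorkflow, and report its status."""
    workflow_config = yaml.safe_load(config)
    workflow = ProfilerWorkflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()

with DAG(
    "snowflake_profiler_workflow",
    default_args=default_args,
    description="An example DAG which runs the Snowflake profiler workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    catchup=False,
) as dag:
    PythonOperator(
        task_id="profile_and_test_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
```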
Import necessary modules
The ProfilerWorkflow class that is being imported is part of the metadata orm_profiler framework, which defines the process of extracting Profiler data.
Here we are also importing all the basic requirements to parse YAMLs, handle dates and build our DAG.
Default arguments for all tasks in the Airflow DAG.
- The default arguments dictionary contains the arguments applied to every task in the DAG, including the owner's name, email address, number of retries, retry delay, and execution timeout.
- config: Specifies config for the profiler as we prepare above.
- metadata_ingestion_workflow(): This code defines a function metadata_ingestion_workflow() that loads a YAML configuration, creates a ProfilerWorkflow object, executes the workflow, checks its status, prints the status to the console, and stops the workflow.
- DAG: creates a DAG using the Airflow framework; tune the DAG configuration to whatever fits your requirements.
- For more Airflow DAGs creation details visit here.
Lineage
You can learn more about how to ingest lineage here.
dbt Integration
You can learn more about how to ingest dbt models' definitions and their lineage here.