Run BigQuery using the metadata CLI
In this section, we provide guides and references to use the BigQuery connector.
Configure and schedule BigQuery metadata and profiler workflows from the CLI:
Requirements
OpenMetadata 0.10 or later. To deploy OpenMetadata, check the Deployment guides.
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which ships with custom Airflow plugins to handle the workflow deployment.
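To run the ingestion from the CLI you will also need the connector's Python dependencies. Assuming the same extras naming convention used for the usage workflow later in this guide, the base BigQuery plugin would be installed with:

```
pip3 install --upgrade 'openmetadata-ingestion[bigquery]'
```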
GCP Permissions
To execute the metadata extraction and usage workflows successfully, the user or service account must have sufficient access to fetch the required data. The following table describes the minimum required permissions:
| # | GCP Permission | GCP Role | Required For |
|---|---|---|---|
| 1 | bigquery.datasets.get | BigQuery Data Viewer | Metadata Ingestion |
| 2 | bigquery.tables.get | BigQuery Data Viewer | Metadata Ingestion |
| 3 | bigquery.tables.getData | BigQuery Data Viewer | Metadata Ingestion |
| 4 | bigquery.tables.list | BigQuery Data Viewer | Metadata Ingestion |
| 5 | resourcemanager.projects.get | BigQuery Data Viewer | Metadata Ingestion |
| 6 | bigquery.jobs.create | BigQuery Job User | Metadata Ingestion |
| 7 | bigquery.jobs.listAll | BigQuery Job User | Metadata Ingestion |
| 8 | datacatalog.taxonomies.get | BigQuery Policy Admin | Fetch Policy Tags |
| 9 | datacatalog.taxonomies.list | BigQuery Policy Admin | Fetch Policy Tags |
| 10 | bigquery.readsessions.create | BigQuery Admin | BigQuery Usage Workflow |
| 11 | bigquery.readsessions.getData | BigQuery Admin | BigQuery Usage Workflow |
Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to BigQuery.
To create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration that can connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
1. Define the YAML Config
This is a sample config for BigQuery:
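The YAML below is a minimal sketch assembled from the connection, source, sink, and workflow options described in this section; validate the exact field names against the BigQuery connection JSON Schema linked above.

```yaml
source:
  type: bigquery
  serviceName: "<service name>"
  serviceConnection:
    config:
      type: BigQuery
      credentials:
        gcsConfig:
          # Raw credential values, as provided by BigQuery
          type: service_account
          projectId: <project ID>
          privateKeyId: <private key ID>
          privateKey: <private key>
          clientEmail: <client email>
          clientId: <client ID>
          authUri: https://accounts.google.com/o/oauth2/auth
          tokenUri: https://oauth2.googleapis.com/token
          authProviderX509CertUrl: https://www.googleapis.com/oauth2/v1/certs
          clientX509CertUrl: <client certificate URL>
  sourceConfig:
    config:
      markDeletedTables: true
      includeTables: true
      includeViews: true
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth
```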
Source Configuration - Service Connection
- hostPort: This is the BigQuery APIs URL.
- username: (Optional) Specify the User to connect to BigQuery. It should have enough privileges to read all the metadata.
- projectID: (Optional) The BigQuery Project ID is required only if the credentials path is being used instead of values.
- credentials: We support two ways of authenticating to BigQuery inside gcsConfig:
  1. Passing the raw credential values provided by BigQuery. This requires providing the following information, all supplied by BigQuery:
     - type, e.g., service_account
     - projectId
     - privateKey
     - privateKeyId
     - clientEmail
     - clientId
     - authUri, https://accounts.google.com/o/oauth2/auth by default
     - tokenUri, https://oauth2.googleapis.com/token by default
     - authProviderX509CertUrl, https://www.googleapis.com/oauth2/v1/certs by default
     - clientX509CertUrl
  2. Passing a local file path that contains the credentials:
     - gcsCredentialsPath
If you prefer to pass the credentials file, you can do so as follows:
```yaml
credentials:
  gcsConfig:
    gcsCredentialsPath: <path to file>
```
- Enable Policy Tag Import (Optional): Mark as 'True' to enable importing policy tags from BigQuery to OpenMetadata.
- Tag Category Name (Optional): If Tag import is enabled, this is the name under which the Tag Category will be created in OpenMetadata.
- Database (Optional): The database of the data source is an optional parameter, if you would like to restrict the metadata reading to a single database. If left blank, OpenMetadata ingestion attempts to scan all the databases.
- Connection Options (Optional): Enter the details for any additional connection options that can be sent to BigQuery during the connection. These details must be added as Key-Value pairs.
- Connection Arguments (Optional): Enter the details for any additional connection arguments such as security or protocol configs that can be sent to BigQuery during the connection. These details must be added as Key-Value pairs.
- In case you are using Single-Sign-On (SSO) for authentication, add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator": "sso_login_url"
- In case you authenticate with SSO using an external browser popup, add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator": "externalbrowser"
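For illustration, and assuming the connectionArguments key exposed by the connection schema (a naming assumption, not confirmed by this page), the first SSO pair above would appear in the YAML as:

```yaml
serviceConnection:
  config:
    type: BigQuery
    # Additional key-value pairs passed to BigQuery at connection time
    connectionArguments:
      authenticator: sso_login_url
```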
Source Configuration - Source Config
The sourceConfig is defined here:

- markDeletedTables: To flag tables as soft-deleted if they are not present anymore in the source system.
- includeTables: true or false, to ingest table data. Default is true.
- includeViews: true or false, to ingest view definitions.
- schemaFilterPattern and tableFilterPattern: Note that both support regex as include or exclude. E.g.:

```yaml
tableFilterPattern:
  includes:
    - users
    - type_test
```
Sink Configuration
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest
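A minimal sink entry, as used in the sample config above, looks like:

```yaml
sink:
  type: metadata-rest
  config: {}
```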
Workflow Configuration
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth
```
We support different security providers. You can find their definitions here. An example of an Auth0 configuration would be the following:
```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: auth0
    securityConfig:
      clientId: <client ID>
      secretKey: <secret key>
      domain: <domain>
```
2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
```
metadata ingest -c <path-to-yaml>
```
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.
Query Usage and Lineage Ingestion
To ingest the Query Usage and Lineage information, the serviceConnection configuration will remain the same. However, the sourceConfig is now modeled after this JSON Schema.
1. Define the YAML Config
This is a sample config for BigQuery Usage:
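The YAML below is a sketch of such a usage config; the processor, stage, and bulkSink type names and the staging file path are assumptions based on the usage workflow conventions described below, and should be validated against the linked JSON Schema.

```yaml
source:
  type: bigquery-usage
  serviceName: <service name>
  serviceConnection:
    config:
      type: BigQuery
      credentials:
        gcsConfig:
          gcsCredentialsPath: <path to file>
  sourceConfig:
    config:
      # How many days back to look in the query logs
      queryLogDuration: 7
      # Limit for query log results
      resultLimit: 1000
processor:
  type: query-parser
  config:
    filter: ""
stage:
  type: table-usage
  config:
    # Where the staging files will be located
    filename: /tmp/bigquery_usage
bulkSink:
  type: metadata-usage
  config:
    filename: /tmp/bigquery_usage
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth
```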
Source Configuration - Service Connection
You can find all the definitions and types for the serviceConnection here. They are the same as metadata ingestion.
Source Configuration - Source Config
The sourceConfig is defined here.

- queryLogDuration: Configuration to tune how far back we want to look in the query logs to process usage data.
- resultLimit: Configuration to set the limit for query log results.
Processor, Stage and Bulk Sink
These configurations specify where the staging files will be located.
Workflow Configuration
The same as the metadata ingestion.
2. Run with the CLI
There is an extra requirement to run the Usage pipelines. You will need to install:
```
pip3 install --upgrade 'openmetadata-ingestion[bigquery-usage]'
```
After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
```
metadata ingest -c <path-to-yaml>
```
Data Profiler and Quality Tests
The Data Profiler workflow will be using the orm-profiler processor. While the serviceConnection will still be the same to reach the source system, the sourceConfig will be updated from previous configurations.
1. Define the YAML Config
This is a sample config for the profiler:
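A minimal sketch of such a config, assembled from the pieces described below (the exact type value for the profiler sourceConfig is an assumption to check against the linked schema):

```yaml
source:
  type: bigquery
  serviceName: <service name>
  serviceConnection:
    config:
      type: BigQuery
      credentials:
        gcsConfig:
          gcsCredentialsPath: <path to file>
  sourceConfig:
    config:
      type: Profiler
      fqnFilterPattern:
        includes:
          - service.database.schema.*
processor:
  type: orm-profiler
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth
```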
Source Configuration
- You can find all the definitions and types for the serviceConnection here.
- The sourceConfig is defined here.
Note that the fqnFilterPattern supports regex as includes or excludes. E.g.:

```yaml
fqnFilterPattern:
  includes:
    - service.database.schema.*
```
Processor
Choose the orm-profiler. Its config can also be updated to define tests from the YAML itself instead of the UI:

```yaml
processor:
  type: orm-profiler
  config:
    test_suite:
      name: <Test Suite name>
      tests:
        - table: <Table FQN>
          table_tests:
            - testCase:
                config:
                  value: 100
                tableTestType: tableRowCountToEqual
          column_tests:
            - columnName: <Column Name>
              testCase:
                config:
                  minValue: 0
                  maxValue: 99
                columnTestType: columnValuesToBeBetween
```
tests is a list of test definitions that will be applied to a table, identified by its FQN. For each table, one can then define a list of table_tests and column_tests. Review the supported tests and their definitions to learn how to configure the different cases here.
Workflow Configuration
The same as the metadata ingestion.
2. Run with the CLI
After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
```
metadata profile -c <path-to-yaml>
```
Note how instead of running ingest, we are using the profile command to select the Profiler workflow.
DBT Integration
You can learn more about how to ingest DBT models' definitions and their lineage here.