Run Athena using the metadata CLI

In this section, we provide guides and references to use the Athena connector.

Configure and schedule Athena metadata and profiler workflows from the CLI:

To run the ingestion via the UI instead, you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.

To run the Athena ingestion, you will need to install:

pip3 install "openmetadata-ingestion[athena]"

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Athena.

To create and run a Metadata Ingestion workflow, we will follow these steps: create a YAML configuration that can connect to the source, process the Entities if needed, and reach the OpenMetadata server.

The workflow is modeled around the following JSON Schema.

This is a sample config for Athena:

source:
  type: athena
  serviceName: local_athena
  serviceConnection:
    config:
      type: Athena
      awsConfig:
        awsAccessKeyId: KEY
        awsSecretAccessKey: SECRET
        awsRegion: us-east-2
        # endPointURL: https://athena.us-east-2.amazonaws.com/
        # awsSessionToken: TOKEN
      s3StagingDir: s3 directory for datasource
      workgroup: workgroup name
  sourceConfig:
    config:
      markDeletedTables: true
      includeTables: true
      includeViews: true
      # includeTags: true
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
      # For DBT, choose one of Cloud, Local, HTTP, S3 or GCS configurations
      # dbtConfigSource:
      # # For cloud
      #   dbtCloudAuthToken: token
      #   dbtCloudAccountId: ID
      # # For Local
      #   dbtCatalogFilePath: path-to-catalog.json
      #   dbtManifestFilePath: path-to-manifest.json
      # # For HTTP
      #   dbtCatalogHttpPath: http://path-to-catalog.json
      #   dbtManifestHttpPath: http://path-to-manifest.json
      # # For S3
      #   dbtSecurityConfig:  # These are modeled after all AWS credentials
      #     awsAccessKeyId: KEY
      #     awsSecretAccessKey: SECRET
      #     awsRegion: us-east-2
      #   dbtPrefixConfig:
      #     dbtBucketName: bucket
      #     dbtObjectPrefix: "dbt/"
      # # For GCS
      #   dbtSecurityConfig:  # These are modeled after all GCS credentials
      #     type: My Type
      #     projectId: project ID
      #     privateKeyId: us-east-2
      #     privateKey: |
      #      -----BEGIN PRIVATE KEY-----
      #      Super secret key
      #      -----END PRIVATE KEY-----
      #     clientEmail: client@mail.com
      #     clientId: 1234
      #     authUri: https://accounts.google.com/o/oauth2/auth (default)
      #     tokenUri: https://oauth2.googleapis.com/token (default)
      #     authProviderX509CertUrl: https://www.googleapis.com/oauth2/v1/certs (default)
      #     clientX509CertUrl: https://cert.url (URI)
      #   dbtPrefixConfig:
      #     dbtBucketName: bucket
      #     dbtObjectPrefix: "dbt/"
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>

Source Configuration - Service Connection

  • awsAccessKeyId: Enter your secure access key ID for your Athena connection. The specified key ID should be authorized to read all databases you want to include in the metadata ingestion workflow.
  • awsSecretAccessKey: Enter the Secret Access Key (the passcode key pair to the key ID from above).
  • awsRegion: Enter the AWS region of the Amazon cluster that your data and account are associated with.
  • awsSessionToken: The AWS session token is an optional parameter. If you want, enter the details of your temporary session token.
  • endPointURL: Your Athena connector will automatically determine the AWS Athena endpoint URL based on the region. You may override this behavior by entering a value for the endpoint URL.
  • s3StagingDir: The S3 staging directory is an optional parameter. Enter a staging directory to override the default staging directory for AWS Athena.
  • workgroup: The Athena workgroup is an optional parameter. If you wish to have your Athena connection related to an existing AWS workgroup, add your workgroup name here.
  • Connection Options (Optional): Enter the details for any additional connection options that can be sent to Athena during the connection. These details must be added as Key-Value pairs.
  • Connection Arguments (Optional): Enter the details for any additional connection arguments such as security or protocol configs that can be sent to Athena during the connection. These details must be added as Key-Value pairs; a sketch of both options and arguments is shown after this list.
    • In case you are using Single-Sign-On (SSO) for authentication, add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator" : "sso_login_url"
    • In case you authenticate with SSO using an external browser popup, then add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator" : "externalbrowser"
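
As a rough sketch, and assuming the Athena connection schema exposes connectionOptions and connectionArguments as free-form maps, these Key-Value pairs could be added to the service connection as follows (the keys and values shown are placeholders):

serviceConnection:
  config:
    type: Athena
    awsConfig:
      awsAccessKeyId: KEY
      awsSecretAccessKey: SECRET
      awsRegion: us-east-2
    s3StagingDir: s3 directory for datasource
    workgroup: workgroup name
    connectionOptions:
      key: value
    connectionArguments:
      key: value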

Source Configuration - Source Config

The sourceConfig is defined here:

  • markDeletedTables: To flag tables as soft-deleted if they are not present anymore in the source system.
  • includeTables: true or false, to ingest table data. Default is true.
  • includeViews: true or false, to ingest view definitions.
  • databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Note that they support regex as include or exclude. E.g.,
tableFilterPattern:
  includes:
    - users
    - type_test
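
The same patterns work with excludes; for instance, a small sketch that skips schemas matching a regex (the schema names are hypothetical):

schemaFilterPattern:
  excludes:
    - information_schema.*
    - .*_staging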

Sink Configuration

To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
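
For example, as in the sample configuration above:

sink:
  type: metadata-rest
  config: {}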

Workflow Configuration

The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.

For a simple, local installation using our docker containers, this looks like:

workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth

We support different security providers. You can find their definitions here, and the different implementations of the ingestion below.
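
For instance, a minimal sketch of a workflowConfig using the Auth0 provider, assuming the Auth0 SSO client configuration takes clientId, secretKey, and domain (replace the placeholders with your own values):

workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: auth0
    securityConfig:
      clientId: <client id>
      secretKey: <secret key>
      domain: <domain>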

First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:

metadata ingest -c <path-to-yaml>

Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.

The Data Profiler workflow will be using the orm-profiler processor. While the serviceConnection will remain the same to reach the source system, the sourceConfig is updated with profiler-specific settings.

This is a sample config for the profiler:

source:
  type: athena
  serviceName: <service name>
  serviceConnection:
    config:
      type: Athena
      awsConfig:
        awsAccessKeyId: KEY
        awsSecretAccessKey: SECRET
        awsRegion: us-east-2
        # endPointURL: https://athena.us-east-2.amazonaws.com/
        # awsSessionToken: TOKEN
      s3StagingDir: s3 directory for datasource
      workgroup: workgroup name
  sourceConfig:
    config:
      type: Profiler
      # generateSampleData: true
      # profileSample: 85
      # threadCount: 5 (default)
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
processor:
  type: orm-profiler
  config: {}  # Remove braces if adding properties
  # tableConfig:
  #   - fullyQualifiedName: <table fqn>
  #     profileSample: <number between 0 and 99>
  #     columnConfig:
  #       profileQuery: <query to use for sampling data for the profiler>
  #       excludeColumns:
  #         - <column name>
  #       includeColumns:
  #         - columnName: <column name>
  #           metrics:
  #             - MEAN
  #             - MEDIAN
  #             - ...
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>

Source Configuration

  • You can find all the definitions and types for the serviceConnection here.
  • The sourceConfig is defined here.

Note that the filter patterns support regex as includes or excludes. E.g.,

tableFilterPattern:
  includes:
    - ".*users$"

Processor

Choose the orm-profiler. Its config can also be updated to define tests from the YAML itself instead of the UI:

processor:
  type: orm-profiler
  config:
    tableConfig:
      - fullyQualifiedName: <table fqn>
        profileSample: <number between 0 and 99>
        columnConfig:
          partitionConfig:
            partitionField: <field to use as a partition field>
            partitionQueryDuration: <for date/datetime partitioning, set the offset from today>
            partitionValues: <values to use as a predicate for the query>
          profileQuery: <query to use for sampling data for the profiler>
          excludeColumns:
            - <column name>
          includeColumns:
            - columnName: <column name>
              metrics:
                - MEAN
                - MEDIAN
                - ...

tableConfig allows you to set up some configuration at the table level. All the properties are optional. metrics should be one of the metrics listed here.
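
For instance, a minimal sketch that profiles a hypothetical users table at a 50% sample and computes two metrics for a single column (the fully qualified name and column name are illustrative):

processor:
  type: orm-profiler
  config:
    tableConfig:
      - fullyQualifiedName: local_athena.my_database.my_schema.users
        profileSample: 50
        columnConfig:
          includeColumns:
            - columnName: age
              metrics:
                - MEAN
                - MEDIAN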

Workflow Configuration

The same as the metadata ingestion.

After saving the YAML config, we will run the command the same way we did for the metadata ingestion:

metadata profile -c <path-to-yaml>

Note how instead of running ingest, we are using the profile command to select the Profiler workflow.

You can learn more about how to ingest DBT models' definitions and their lineage here.

Still have questions?

You can take a look at our Q&A or reach out to us in Slack.
