Athena

| Feature | Status |
| :--- | :--- |
| Stage | PROD |
| Metadata | ✅ |
| Query Usage | ✅ |
| Data Profiler | ✅ |
| Data Quality | ✅ |
| Lineage | ✅ |
| DBT | ✅ |
| Supported Versions | -- |

| Feature | Status |
| :--- | :--- |
| Lineage | ✅ |
| Table-level | ✅ |
| Column-level | ✅ |

In this section, we provide guides and references to use the Athena connector.

Configure and schedule Athena metadata and profiler workflows from the OpenMetadata UI:

If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check the following docs to connect using Airflow SDK or with the CLI.
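For instance, once you have a workflow definition in a YAML file, the CLI run looks roughly like this (athena.yaml is a placeholder for your own workflow file):

```bash
# Run an ingestion workflow from a YAML definition with the OpenMetadata CLI
metadata ingest -c athena.yaml
```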

You will need OpenMetadata 0.12 or later.

To deploy OpenMetadata, check the Deployment guides.

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.

The Athena connector ingests metadata through JDBC connections.

According to AWS's official documentation:

If you are using the JDBC or ODBC driver, ensure that the IAM permissions policy includes all of the actions listed in AWS managed policy: AWSQuicksightAthenaAccess.

This policy groups the following permissions:

  • athena – Allows the principal to run queries on Athena resources.
  • glue – Allows principals access to AWS Glue databases, tables, and partitions. This is required so that the principal can use the AWS Glue Data Catalog with Athena.
  • s3 – Allows the principal to write and read query results from Amazon S3.
  • lakeformation – Allows principals to request temporary credentials to access data in a data lake location that is registered with Lake Formation.

At the time of writing, the policy is defined as follows (consult the AWS managed policy reference for the authoritative, up-to-date version):
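```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "athena:BatchGetQueryExecution",
                "athena:GetQueryExecution",
                "athena:GetQueryResults",
                "athena:GetQueryResultsStream",
                "athena:ListQueryExecutions",
                "athena:StartQueryExecution",
                "athena:StopQueryExecution",
                "athena:ListWorkGroups",
                "athena:ListEngineVersions",
                "athena:GetWorkGroup",
                "athena:GetDataCatalog",
                "athena:GetDatabase",
                "athena:GetTableMetadata",
                "athena:ListDataCatalogs",
                "athena:ListDatabases",
                "athena:ListTableMetadata"
            ],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "glue:CreateDatabase",
                "glue:DeleteDatabase",
                "glue:GetDatabase",
                "glue:GetDatabases",
                "glue:CreateTable",
                "glue:DeleteTable",
                "glue:BatchDeleteTable",
                "glue:UpdateTable",
                "glue:GetTable",
                "glue:GetTables",
                "glue:BatchCreatePartition",
                "glue:CreatePartition",
                "glue:DeletePartition",
                "glue:BatchDeletePartition",
                "glue:UpdatePartition",
                "glue:GetPartition",
                "glue:GetPartitions",
                "glue:BatchGetPartition"
            ],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:CreateBucket",
                "s3:PutObject",
                "s3:PutBucketPublicAccessBlock"
            ],
            "Resource": ["arn:aws:s3:::aws-athena-query-results-*"]
        },
        {
            "Effect": "Allow",
            "Action": ["lakeformation:GetDataAccess"],
            "Resource": ["*"]
        }
    ]
}
```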

You can find further information on the Athena connector in the docs.

The first step is to ingest the metadata from your sources. To do that, create a service connection first. Once a service is created, it can be used to configure metadata, usage, and profiler workflows.

To visit the Database Services page, click on 'Settings' in the top navigation bar and select 'Databases' from the left panel.

Figure: Find the Databases option on the left panel of the Settings page.

Click on the 'Add New Service' button to start the Service creation.

Figure: Add a new service from the Database Services page.

Select Athena as the service type and click Next.

Figure: Select your service from the list.

Provide a name and description for your service as illustrated below.

OpenMetadata uniquely identifies services by their Service Name. Provide a name that distinguishes your deployment from other services, including the other Athena services that you might be ingesting metadata from.

Figure: Provide a name and description for your service.

In this step, we will configure the connection settings required for this connector. Please follow the instructions below to ensure that you've configured the connector to read from your Athena service as desired. A YAML sketch of these connection settings follows the parameter list below.

Figure: Configure the service connection by filling the form.

  • AWS Access Key ID & AWS Secret Access Key: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and authorize your requests (docs).

Access keys consist of two parts: An access key ID (for example, AKIAIOSFODNN7EXAMPLE), and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY).

You must use both the access key ID and secret access key together to authenticate your requests.

You can find further information on how to manage your access keys here.

  • AWS Region: Each AWS Region is a separate geographic area in which AWS clusters data centers (docs).

As AWS can have instances in multiple regions, we need to know the region of the service you want to reach.

Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the services programmatically, there are different ways in which we can extract and use the rest of AWS configurations.

You can find further information about configuring your credentials here.

  • AWS Session Token (optional): If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID and AWS Secret Access Key, and the temporary credentials will also include an AWS Session Token.

You can find more information on Using temporary credentials with AWS resources.

  • Endpoint URL (optional): To connect programmatically to an AWS service, you use an endpoint. An endpoint is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.

Find more information on AWS service endpoints.

  • Profile Name: A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command. Multiple named profiles can be stored in the config and credentials files.

Set this field if you'd like to use a profile other than default.

Find more information about Named profiles for the AWS CLI here.

  • Assume Role Arn: Typically, you use AssumeRole within your account or for cross-account access. In this field you'll set the ARN (Amazon Resource Name) of the role in the other account.

A user who wants to access a role in a different account must also have permissions that are delegated from the account administrator. The administrator must attach a policy that allows the user to call AssumeRole for the ARN of the role in the other account.

This field is required if you'd like to use AssumeRole.

Find more information on AssumeRole.

  • Assume Role Session Name: An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons.

By default, we'll use the name OpenMetadataSession.

Find more information about the Role Session Name.

  • Assume Role Source Identity: The source identity specified by the principal that is calling the AssumeRole operation. You can use source identity information in AWS CloudTrail logs to determine who took actions with a role.

Find more information about Source Identity.

  • Database (optional): Set this parameter if you would like to restrict the metadata reading to a single database. If left blank, OpenMetadata ingestion attempts to scan all the databases.
  • S3 Staging Directory (optional): The S3 staging directory is an optional parameter. Enter a staging directory to override the default staging directory for AWS Athena.
  • Athena Workgroup (optional): The Athena workgroup is an optional parameter. If you wish to associate your Athena connection with an existing Athena workgroup, add the workgroup name here.
  • Connection Options (Optional): Enter the details for any additional connection options that can be sent to Athena during the connection. These details must be added as Key-Value pairs.
  • Connection Arguments (Optional): Enter the details for any additional connection arguments such as security or protocol configs that can be sent to Athena during the connection. These details must be added as Key-Value pairs.
    • In case you are using Single-Sign-On (SSO) for authentication, add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator" : "sso_login_url"
    • In case you authenticate with SSO using an external browser popup, then add the authenticator details in the Connection Arguments as a Key-Value pair as follows: "authenticator" : "externalbrowser"
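For reference, here is a minimal sketch of how the fields above map onto a YAML service connection of the kind used in the Airflow SDK and CLI flows. All values are placeholders (the access keys are the example values from the AWS docs), and optional fields are commented out:

```yaml
source:
  type: athena
  serviceName: my_athena_service      # placeholder service name
  serviceConnection:
    config:
      type: Athena
      awsConfig:
        awsAccessKeyId: AKIAIOSFODNN7EXAMPLE
        awsSecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
        awsRegion: us-east-1          # the only required parameter
        # awsSessionToken: ...        # only when using temporary credentials
        # endPointURL: ...            # alternate endpoint, if any
        # profileName: ...            # a named profile other than default
        # assumeRoleArn: ...          # ARN of the role in the other account
        # assumeRoleSessionName: OpenMetadataSession
      # database: ...                 # restrict ingestion to a single database
      # s3StagingDir: s3://my-bucket/athena-results/   # placeholder staging directory
      # workgroup: ...                # existing Athena workgroup, if any
```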

Once the credentials have been added, click on Test Connection and Save the changes.

Figure: Test the connection and save the service.

In this step, we will configure the metadata ingestion pipeline. Please follow the instructions below; a YAML sketch of the resulting configuration follows the list.

Figure: The Configure Metadata Ingestion page.

  • Name: This field refers to the name of the ingestion pipeline. You can customize the name or use the generated one.

  • Database Filter Pattern (Optional): Use the database filter patterns to control whether or not to include databases as part of metadata ingestion.

    • Include: Explicitly include databases by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all databases with names matching one or more of the supplied regular expressions. All other databases will be excluded.
    • Exclude: Explicitly exclude databases by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all databases with names matching one or more of the supplied regular expressions. All other databases will be included.
  • Schema Filter Pattern (Optional): Use the schema filter patterns to control whether or not to include schemas as part of metadata ingestion.

    • Include: Explicitly include schemas by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all schemas with names matching one or more of the supplied regular expressions. All other schemas will be excluded.
    • Exclude: Explicitly exclude schemas by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all schemas with names matching one or more of the supplied regular expressions. All other schemas will be included.
  • Table Filter Pattern (Optional): Use the table filter patterns to control whether or not to include tables as part of metadata ingestion.

    • Include: Explicitly include tables by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all tables with names matching one or more of the supplied regular expressions. All other tables will be excluded.
    • Exclude: Explicitly exclude tables by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all tables with names matching one or more of the supplied regular expressions. All other tables will be included.
  • Include views (toggle): Set the Include views toggle to control whether or not to include views as part of metadata ingestion.

  • Include tags (toggle): Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.

  • Enable Debug Log (toggle): Set the Enable Debug Log toggle to set the default log level to debug; these logs can be viewed later in Airflow.

  • Mark Deleted Tables (toggle): Set the Mark Deleted Tables toggle to flag tables as soft-deleted if they are not present anymore in the source system.

  • Mark Deleted Tables from Filter Only (toggle): Set the Mark Deleted Tables from Filter Only toggle to flag tables as soft-deleted only if they are no longer present within the filtered schemas or databases. This flag is useful when you have more than one ingestion pipeline. For example, if two pipelines ingest different schemas from the same service, enabling this flag prevents one pipeline from marking the other pipeline's tables as deleted.
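As mentioned above, the filters and toggles map onto the pipeline's sourceConfig. The following is a minimal sketch assuming the standard DatabaseMetadata pipeline schema; the regular expressions are illustrative only:

```yaml
sourceConfig:
  config:
    type: DatabaseMetadata
    includeViews: true          # Include views toggle
    includeTags: true           # Include Tags toggle
    markDeletedTables: true     # Mark Deleted Tables toggle
    databaseFilterPattern:
      includes:                 # comma-separated regexes in the UI become list entries here
        - analytics_.*
    schemaFilterPattern:
      excludes:
        - information_schema
    tableFilterPattern:
      includes:
        - .*_sales
```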

Scheduling can be set up at an hourly, daily, weekly, or manual cadence. The timezone is UTC. Select a Start Date for the ingestion schedule; adding an End Date is optional.
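Expressed in YAML, the schedule typically lives in the pipeline's airflowConfig as a cron expression. A small sketch, with placeholder dates (field names assume the ingestion pipeline's airflowConfig schema):

```yaml
airflowConfig:
  scheduleInterval: "0 * * * *"   # hourly cadence, evaluated in UTC
  startDate: "2023-01-01"         # placeholder Start Date
  # endDate: "2023-12-31"         # optional placeholder End Date
```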

Review your configuration settings. If they match what you intended, click Deploy to create the service and schedule metadata ingestion.

If something doesn't look right, click the Back button to return to the appropriate step and change the settings as needed.

After configuring the workflow, you can click on Deploy to create the pipeline.

Figure: Schedule the ingestion pipeline and deploy.

Once the workflow has been successfully deployed, you can view the Ingestion Pipeline running from the Service Page.

Figure: View the ingestion pipeline from the Service page.

If there were any errors during the workflow deployment process, the Ingestion Pipeline Entity will still be created, but no workflow will be present in the Ingestion container.

  • You can then edit the Ingestion Pipeline and Deploy it again.

  • From the Connection tab, you can also Edit the Service if needed.

Figure: Workflow deployment error. Edit and deploy the ingestion pipeline.