
1.11 Releases

1.11.13 Release
7th March 2026
You can find the GitHub release here.

Collate Fixes

  • Platform: Fixed deploy-pipelines command to correctly deploy ingestion pipelines
Full Changelog: link
1.11.12 Release
5th March 2026
You can find the GitHub release here.

Breaking Changes

OpenLineage Kinesis Support

Refactored the OpenLineage connection schema to support both Kafka and Kinesis brokers.

Migration Required: The OpenLineage connection configuration structure has changed from flat Kafka-specific fields to a nested brokerConfig object. Existing configurations with fields like brokersUrl, topicName, consumerGroupName, consumerOffsets, poolTimeout, sessionTimeout, securityProtocol, sslConfig, and saslConfig at the root level will be automatically migrated to the new brokerConfig.kafkaBrokerConfig structure during the database migration to version 1.11.12. No manual intervention is required for existing deployments, but any external scripts or API integrations that create or update OpenLineage connections must be updated to use the new schema structure.
Before → After:
  • connection.config.brokersUrl → connection.config.brokerConfig.brokersUrl
  • connection.config.topicName → connection.config.brokerConfig.topicName
  • connection.config.consumerGroupName → connection.config.brokerConfig.consumerGroupName
  • connection.config.consumerOffsets → connection.config.brokerConfig.consumerOffsets
  • connection.config.poolTimeout → connection.config.brokerConfig.poolTimeout
  • connection.config.sessionTimeout → connection.config.brokerConfig.sessionTimeout
  • connection.config.securityProtocol → connection.config.brokerConfig.securityProtocol
  • connection.config.sslConfig → connection.config.brokerConfig.sslConfig
  • connection.config.saslConfig → connection.config.brokerConfig.saslConfig
  • Kafka-only support → Kafka or Kinesis (via brokerConfig oneOf)
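To make the field mapping concrete, here is a hedged before/after sketch of an OpenLineage connection configuration. The field names follow the table above; the values (broker URLs, topic, consumer group) are illustrative placeholders, and the exact nesting should be confirmed against the 1.11.12 connection JSON schema:

```yaml
# Before (1.11.11 and earlier): flat Kafka-specific fields
connection:
  config:
    type: OpenLineage
    brokersUrl: "broker1:9092,broker2:9092"   # illustrative value
    topicName: "openlineage-events"
    consumerGroupName: "openmetadata-consumer"
    securityProtocol: PLAINTEXT

# After (1.11.12+): fields nested under brokerConfig.kafkaBrokerConfig
connection:
  config:
    type: OpenLineage
    brokerConfig:
      kafkaBrokerConfig:
        brokersUrl: "broker1:9092,broker2:9092"
        topicName: "openlineage-events"
        consumerGroupName: "openmetadata-consumer"
        securityProtocol: PLAINTEXT
```

The database migration rewrites stored connections automatically; only external scripts or API calls that build this payload themselves need the new shape.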

Redshift IAM Authentication Support

Added support for IAM authentication for Amazon Redshift connections #26179.

Migration Required: The Redshift connection schema has changed from a flat password field to a polymorphic authType object supporting both Basic Auth and IAM Auth. Existing configurations with password at the root level will be automatically migrated to authType.password (BasicAuth structure) during the database migration to version 1.11.12. No manual intervention is required for existing deployments, but any external scripts or API integrations that create or update Redshift connections must be updated to use the new schema structure.
Before → After:
  • connection.config.password → connection.config.authType.password
  • Password-only authentication → BasicAuth or IAM Auth (via authType oneOf)
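A hedged sketch of the Redshift change, mirroring the mapping above; the username and secret values are placeholders, and the precise field names for the IAM variant should be taken from the 1.11.12 schema rather than this example:

```yaml
# Before (1.11.11 and earlier): flat password field
connection:
  config:
    type: Redshift
    username: "analytics_user"   # illustrative value
    password: "<secret>"

# After (1.11.12+): password moves under the polymorphic authType object
connection:
  config:
    type: Redshift
    username: "analytics_user"
    authType:
      password: "<secret>"       # BasicAuth structure; IAM Auth is the other oneOf branch
```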

Collate Improvements

  • AI Platform: Improved description generation agent to handle wide tables with many columns through better batching and retry logic
  • Platform: Enhanced reindexing app to support sharding for indexes with over 2,000 entities

Collate Fixes

  • Platform: Fixed intermittent OAuth login failures by adding SameSite=None attribute to session cookies for cross-site authentication flows
  • Platform: Fixed ingestion pipeline runner inheritance so pipelines correctly inherit runner configuration from their parent service
  • UI: Fixed Hebrew language selection causing right-to-left layout issues and missing landing page widgets
Full Changelog: link
1.11.11 Release
25th February 2026
You can find the GitHub release here.

Collate Improvements

  • Data Observability: Removed non-compliant nolitsa library dependencies to mitigate compliance risks.
  • Platform: Updated default reindexing schedule to 12:30 AM UTC to avoid snapshot conflicts.
  • Integration: Improved dbt ingestion memory efficiency when listing files from GCS to prevent OOM errors.

Collate Fixes

  • Platform: Fixed blank page issue after upgrade by handling refresh failures with proper logout action instead of getting stuck.
  • Platform: Fixed global domain filter not translating when language is changed.
  • Integration: Fixed multiple CSV parsing errors and table name issues during Datalake metadata ingestion.
  • Governance: Fixed scroll not working in Data Product descriptions with large content.
  • Integration: Fixed Snowflake DDL fetch failures for tables with dots in names by properly quoting identifiers in GET_DDL queries.
  • Integration: Fixed Snowflake ingestion by skipping irrelevant definition fetching for Snowflake stages.
Full Changelog: link
1.11.10 Release
18th February 2026
You can find the GitHub release here.

Improvements

  • Added new BurstIQ database connector
  • Implemented Athena struct profiling following established patterns
  • Added support for displayName extraction from user profile during login
  • Excluded unused fields from suggestion API calls to improve performance

Fixes

  • Fixed ingestion pipelines incorrectly storing the ingestion runner
  • Fixed ingestion runner fetching
  • Fixed horizontal scroll not working for table components
  • Fixed enum default values being dropped during POJO generation
  • Updated axios library to address a security vulnerability
  • Fixed usageLocation not being added to the BigQuery connection URL
  • Fixed Oracle stored procedure line ordering with ORDER BY clause; ensured packages are ordered before package bodies
  • Fixed DB2 iSeries connection issue
  • Added Dagster asset key prefix configuration for lineage
  • Fixed Airflow Task Instance link not correctly filtering by DAG
  • Fixed Metabase dataset query type detection for native queries and improved error logging
  • Fixed ingestion pipeline exit handler skipping status update when pipeline is already in a terminal state
  • Resolved Flowable suspension state conflict after migration
  • Fixed Delete Table Constraint when the column is not present in the patch request
  • Fixed NoneType attribute error on usage workflow cost calculation
  • Fixed Out-of-Memory (OOM) issue during large CSV processing
  • Fixed boolean parameters in test case forms that could not be toggled OFF
  • Fixed nested fields repetition issue in table display
Full Changelog: link
1.11.9 Release
16th February 2026
You can find the GitHub release here.

Improvements

  • Enabled “Updated By,” “Synonyms,” “Related Terms,” and “Status” fields in workflow check conditions
  • Added search functionality in the ingestion runner dropdown and fallback logic to select Collate SaaS Runner by default
  • Optimized Unity Catalog lineage retrieval by using system tables instead of the API
  • Added bulk APIs for checking pipeline status
  • General improvements to import/export functionality

Fixes

  • Fixed Databricks view definition extraction and resolved circular lineage issues
  • Fixed BigQuery project ID input handling in the ingestion class
  • Fixed Trino shadowing of the http_scheme argument
  • Added Airflow DriveService to the entity class map
  • Fixed source hash instability during ingestion runs to prevent unnecessary updates
  • Fixed CSV parsing logic for escape characters and resolved timeout issues during CSV imports
  • Fixed column tag insertion and removal during Database Service imports
  • Fixed issue where nested columns were not updating in real-time
  • Added auto-scroll to form errors when they occur
  • Fixed translation for “most-viewed-data-assets”
  • Fixed filters for Impact Analysis
  • Fixed issue where sourceHash-only changes were generating unwanted notification emails
  • Fixed aggregation incident re-indexing
  • Added missing Data Quality SDK parameters
Full Changelog: link
1.11.8 Release
4th February 2026
You can find the GitHub release here.

Improvements

  • Added BigQuery lineage support for PowerBI .pbit files
  • Added sobjectNames field for multi-object selection in Salesforce connector
  • Added PBI rdl report lineage
  • Included API result properties descriptions from OpenAPI schemas

Fixes

  • Fixed PowerBI .pbit parser failing on multiline DAX expressions
  • Fixed tag search in MUI Tag Suggestion
  • Fixed app getting stuck on the refresh call for basic authentication
  • Fixed test suite result summary
  • Fixed Okta renewal
  • Fixed schema and view definition ingestion for Unity Catalog
  • Fixed DB2 TLS certificate connection issue
  • Fixed Snowflake view DDL fallback to preserve exact-case identifiers
  • Skip and warn when autoclassification values are too long
  • Deploy pipeline before DB update to prevent inconsistent state
  • Clean OpenMetadataWorkflowConfig in IngestionPipeline
Full Changelog: link
1.11.7 Release
28th January 2026
You can find the GitHub release here.

Improvements

  • Improved execution time tracking with more detailed per-task metrics and logging
  • Added timeout handling for temp table lineage graph node processing
  • Enhanced Event Based Workflows to show relevant exclude fields
  • Improved Kafka Connect storage lineage handling and related enhancements
  • Optimized BigQuery ingestion performance
  • Added support for maxResultSize in QueryRunnerRequest schema
  • Enhanced AsyncService with retry support, exponential backoff, and timeout handling
  • Added proper filters for domain assets
  • Updated collate-data-diff dependency version

Fixes

  • Fixed unique count metric errors for BigQuery with cardinality updates
  • Fixed UI tab remount issue
  • Fixed bulk import so that unapproved terms are not linked
  • Fixed handling of Snowflake tags without values by skipping tag creation
  • Fixed DB Secrets Manager not stripping the secret prefix
  • Fixed count import issues in profiler interface
  • Fixed UI issue where removing custom properties did not update the UI
  • Fixed pipeline status ingestion issues with special characters
  • Fixed Tableau db_service_entity being None
  • Fixed advanced search issues related to custom properties
  • Fixed CSV export issues by properly escaping quoted fields containing commas
  • Fixed classification page loading issues
  • Fixed JSONLogic query builder suffix handling
  • Fixed incorrect private key and passphrase fetching logic
  • Added logging to TestCaseResultRepository
  • Fixed UI plugin loading based on navigationItems configuration
Full Changelog: link
1.11.6 Release
21st January 2026
You can find the GitHub release here.

Improvements

  • Added impersonatedBy support in MCP tools
  • Added exponential backoff retry for index deletion during snapshot operations
  • Added Entity Popover Card for Impact Analysis tab
  • Added metadata versioning for bulk import
  • Added support for Snowflake stages
  • Added failed rows sample table to test case notifications
  • Added task.description field to Task entity mapping
  • Added start time, end time, and runtime duration to pipeline task execution views
  • Added Dagster Lineage support
  • Added API endpoint filter pattern support
  • Added JWT/SAML team claim mapping for automatic team assignment during SSO login
  • Added support for testing access to specific containers in Azure Blob Client
  • Updated timezone display with date and time
  • Migrated image links from imgur to own CDN
  • Improved lineage visualization by graying out hanging nodes' edges

Fixes

  • Fixed entity rules enforcement for data assets sections
  • Fixed StackOverflowError from circular team hierarchy dependencies
  • Fixed overrideLineage config removal from database service metadata pipeline
  • Fixed deleted users filtering from ownership relationships in GET operations
  • Fixed text overlapping in Users table columns
  • Fixed user creation via UI when SSO authentication is enabled
  • Fixed domain rename issues
  • Fixed REST connector default tag missing for endpoints
  • Fixed Chinese character encoding issue
  • Fixed data contract status in List API
  • Fixed Secrets Manager empty string sanitization
  • Fixed Oracle view definitions retrieval
  • Fixed MCP SDK to resolve Cursor ‘form’ field deserialization issue
  • Fixed uniform delta table ingestion in Databricks
  • Fixed circular reference detection in team imports
  • Fixed Pipeline Service Client health job cleanup in favor of on-demand checks
  • Fixed react-router-dom to address XSS and redirect vulnerabilities
  • Fixed MCP tool responses and parameter naming for better LLM integration
  • Fixed import failure for database service names containing dots
Full Changelog: link
1.11.5 Release
14th January 2026
You can find the GitHub release here.

Improvements

  • Added lineage query parser type configuration
  • Added Pipeline Obs averageRunTime metric
  • Added batching for Trino lineage and usage SQL query retrieval
  • Added dataflow metadata support in PowerBI
  • Added support for custom pg_stat_statements view in Postgres lineage ingestion
  • Added stored procedures and functions support to MySQL connector
  • Added support for renaming Data Products with Playwright test coverage
  • Added persona customization support for Data Products
  • Added GlossaryTermDomainWidget to handle domain selection logic in UI
  • Added depth mode for improved navigation
  • Added Stored Procedure object support in lineage editor
  • Added validation for jwtPrincipalClaimsMapping to enforce username and email keys only
  • Added report description in PowerBI metadata
  • Added tagging explanation feature
  • Added Snowflake dynamic table system metrics support
  • Added Mulesoft connection configuration
  • Added fallback to Okta renew strategy
  • Added extraHeaders support to Java client
  • Added mf4 file format support in Datalake connector
  • Added Tabs component support with Storybook integration
  • Added SASL_PLAINTEXT support for OpenLineage Kafka broker
  • Added opt-in SSO auto-redirect on sign-in page
  • Added entity names display in browser tab titles
  • Added Arabic language support with comprehensive translations
  • Added missing Arabic translations for audit logs, certification, and UI elements
  • Added Google Tag Manager integration for sandbox
  • Added restriction for duplicate glossary term creation in dotted glossaries
  • Added subdomains support in DomainDetails component for data products
  • Added optimized propagation flag for improved performance
  • Added Playwright test coverage for test cases
  • Added prefix matching support in domain search
  • Added Redshift Serverless support
  • Added Doris view lineage ingestion support
  • Added domain and custom properties ingestion functionality via dbt agent
  • Performance improvement for APIs by caching User Context
  • Upgraded Playwright from 1.51.1 to 1.57.0

Fixes

  • Fixed data product specs
  • Fixed platform lineage spec by using fewer nodes to prevent screen freeze
  • Fixed service details pagination issue
  • Fixed team spec and glossary duplicate spec
  • Fixed encoding issue related to test case name search in incident manager
  • Fixed flaky DataConsumer spec
  • Fixed CSS styling for upstream expand
  • Fixed AWS Region requirement for advanced connection config
  • Fixed DB2 Connector TLS support
  • Fixed domain details rename functionality
  • Fixed Datalake CSV parsing issue
  • Fixed ServiceEntityPermissions test flakiness
  • Fixed Redshift table size unit display
  • Fixed service details page pagination issue
  • Fixed styles for Data Observability and Quality tabs
  • Fixed self-referencing lineage loops in Fivetran source
  • Fixed error handling in ingestion pipelines
  • Fixed pandas global import issue
  • Fixed incident manager Playwright test
  • Fixed Java checkstyle violations
  • Fixed UnityCatalog lineage debug logging
  • Fixed column ordering for top databases
  • Fixed parquet file reading
  • Fixed logic to show leftSideBarItems
  • Fixed impact analysis search for column level
  • Fixed PowerBI project filters exclude case
  • Fixed scrollbar visibility for horizontal scroll
  • Fixed encoding of data-products for URL formatting
  • Fixed SSO SAML Playwright test
  • Fixed role hierarchy to get inherited roles and teams
  • Fixed secrets deletion to only occur on hard delete
  • Fixed MCP server SDK support upgrade
  • Fixed SAML authority URL deprecation and UI visibility
  • Fixed data product domain configuration
  • Fixed lineage edges disappearing when column is selected
  • Fixed test suite alerts status handling
  • Fixed MySQL data_diff URL to include database
  • Fixed special characters handling in passwords for TableDiff URL parsing
  • Fixed Pinot Multi Stage with Percentile and Length function
  • Fixed Tableau ingestion owner retrieval for users without email
  • Fixed CockroachDB ingestion for tables with hidden shard columns
  • Fixed Playwright test flakiness
  • Fixed domain display name references in configuration files
  • Fixed security vulnerability by bumping org.mozilla:rhino
  • Fixed SAML authorityUrl removal from MCP Config
  • Fixed dropdown auto-open behavior in edit popovers for improved UX
  • Fixed SAP HANA renamed columns/attributes handling in lineage creation
  • Fixed SAML redirection URL logic
  • Fixed sample data profiling by adding cachetools library
  • Fixed sample data profiling issues
  • Fixed hover color and selected color customization
  • Fixed missing i18n translations for UI components
  • Fixed SCIM token expiry to unlimited
  • Fixed invalid task issue in Airflow
  • Fixed search bar visibility when no results in AssetsTabs
  • Fixed client-side navigation by replacing MUI Link with react-router-dom Link
  • Fixed S3 storage ingestion to filter _SUCCESS files
  • Fixed spec by passing the test ID
  • Fixed data product empty list page
  • Fixed Snowflake transient tables ingestion
  • Fixed tab component to use global font size
  • Fixed Databricks parser with expression resolving
  • Fixed node path handling in lineage
  • Fixed security vulnerabilities by bumping qs & @tiptap/extension-link packages
  • Fixed Snowflake tag ingestion
  • Fixed logout to clear all stale state and tokens
  • Fixed minor UI styling in schema tab
  • Fixed entity updates in lineage sidebar and custom properties tab visibility
  • Fixed malformed URL generation in QueryBuilder
  • Fixed right panel flakiness
  • Fixed NiFi 2.x compatibility by removing GET /access call
  • Fixed invalid ‘domains’ field error when filtering Domain entities in Glossary Term
Full Changelog: link
1.11.4 Release
24th December 2025
You can find the GitHub release here.

Improvements

  • Add AI Health Settings (CAIP-199). (Collate)
  • Monthly Rate Limits for Credit Usage. (Collate)
  • Add POV option to deployment enum in limits config. (Collate)
  • Force recomputation of vector index if needed (AI #197). (Collate)
  • Add component for Knowledge Center in AskCollate. (Collate)
  • Keep completed workflows for 4 weeks in Argo. (Collate)
  • RDF enablement on Collate. (Collate)
  • Add minimum height to Dimension tables in UI. (Collate)
  • Display no dimension and stats in Data Quality. (Collate)
  • Added SQLGlot parser support for improved query parsing
  • Added support for bulk edit on nested glossary terms
  • Added lineage section in overview tab in right panel
  • Improved lineage node column pagination
  • Added page size dropdown option to MUI table pagination
  • Enhanced dbt functionality with new features
  • Added username and preferred_username support
  • Improved system repository health with extra validations
  • Streamable ingestion logs to log versions
  • Upgraded MCP SDK to 0.14.0 for protocol 2025-06-18 support
  • Refactored field type and operators for enum custom properties
  • Refactored and improved glossary term operations
  • Removed redundant updateMetadata from Workflow Set Action
  • Modified logic to use parameterized queries for security
  • Improved test connection speed using has_table_privilege for partition details
  • Allowed listing test case results with no dimensions

Fixes

  • Fix runner test connection. (Collate)
  • Fix Health page checking CAIP with ingestion-bot and AskCollate displayName migration. (Collate)
  • Fix Metapilot cleanup in UI. (Collate)
  • Fix: Add column lineage support for Matillion pipeline. (Collate)
  • Fix: Tag and glossary term in the automator action config not shown in the edit form. (Collate)
  • Fix: Multiple ‘No dimension’ cards being displayed in UI. (Collate)
  • Fix: Updated scim mapping creation migration to support default charset. (Collate)
  • Fixed clusterAlias issue with /getPlatformLineage API
  • Fixed index creation on start and later reverted
  • Fixed security vulnerabilities
  • Fixed dbt attribute errors
  • Fixed low cardinality support for ClickHouse
  • Fixed match function for ClickHouse
  • Fixed DBT override lineage configuration
  • Secured DefaultTemplateProvider against template injection
  • Fixed data files for Qlik connector
  • Fixed glossary term /search API
  • Fixed glossary term search relevance scoring
  • Fixed time conversion issue for table freshness
  • Fixed usage entity already exists error
  • Fixed infinite loader issue in lineage section
  • Fixed disabled default certifications still visible on assets
  • Fixed entity type not being sent inside the EntityReference object
  • Fixed SDK issue with deserializing ‘setterless’ property ‘dataProducts’
Full Changelog: link
1.11.3 Release
17th December 2025
You can find the GitHub release here.

Improvements

  • Added support for multiple tables in Great Expectations checkpoints, including mappings for add_pandas and add_query_asset actions.
  • Improved the asset tab UI based on user feedback.
  • Introduced the Collate Credits system for credit-based billing and usage limit management.

Fixes

  • Fixed an issue where the alerts UI did not display the return code after modifying Advanced Configurations.
  • Fixed the backend to allow listing test case results without data quality dimensions.
  • Fixed the tab renaming functionality on the Domain page in Persona customization.
  • Fixed a UI overflow issue for long certificate file names in the service form.
  • Fixed an issue where deleted glossaries still appeared in the list by implementing optimistic deletion with rollback.
  • Fixed mention recipient processing and corrected the entityLink format for Task and Announcement entities.
  • Fixed incorrect updatedBy reference tracking for newly created nodes in workflows. (Collate)
  • Fixed inaccurate token count tracking in the Token Usage serializer. (Collate)
  • Fixed app usage calculation using BigDecimal precision and updated the UI to display credits. (Collate)
Full Changelog: link
1.11.2 Release
12th December 2025
You can find the GitHub release here.

Fixes

  • Fix General tags recognizers migration
Full Changelog: link
1.11.1 Release
11th December 2025
You can find the GitHub release here.

Highlights

  • Wherescape ingestion now captures richer metadata and lineage coverage.
  • Query Runner aligns to the SQL Studio experience with clearer messaging and stability.
  • Auto-classification now leverages TagProcessor for more accurate tagging.
  • Vector search services are pre-initialized to keep reindexing stable.

Fixes

  • Access URLs use hash routing to avoid broken links in shared views.
  • Slack message listener reliability improved to prevent missed notifications.
Full Changelog: link
1.11.0 Release
3rd December 2025
You can find the GitHub release here.

Features

Ask Collate

AskCollate is our AI-powered conversational agent that gives you natural-language access to your data catalog and analytics. It combines metadata intelligence with direct data access, helping you discover, understand, analyze, and manage your data assets through simple, intuitive conversations.

Key capabilities

  • Semantic Search: Instantly locate data assets using natural-language queries with advanced filters such as owner, domain, tag, certification, tier, and more.
  • Metadata Updates: Enrich your catalog by updating descriptions, owners, tags, and other metadata directly through conversational commands.
  • Text-to-SQL Analysis: Convert plain-English questions into complex SQL queries to retrieve and analyze real-time data from your warehouse (read-only).
  • Instant Visualizations: Automatically transform SQL results into interactive bar, line, pie, and area charts, right inside the chat.
  • Lineage & Impact: Explore upstream and downstream dependencies to visualize data flow and assess the potential impact of changes.
  • Data Quality RCA: Diagnose data-quality issues and trace how test failures propagate across your pipelines.
  • Data Quality Planner: Generate data quality tests instantly using context from your metadata and the entity's descriptive statistics.
  • Business Glossary Management: Build and maintain a unified data vocabulary by defining terms, synonyms, and hierarchical relationships through chat.

Ask Collate Slack Integration

You can now use AskCollate directly within Slack! This integration brings the full power of AskCollate to the place where your teams already collaborate, making it easier than ever to ask questions and access data context the moment you need it. Simply mention @AskCollate in any channel or thread to tap into your data instantly.

SQL Studio

Collate has always been the central hub to discover and understand your data, tracking transformations and lineage, profiling, and data quality, all in one place. With 1.11, we're taking this a step further by giving your users the ability to query and analyze data directly within Collate.

SQL Studio lets users create personal connections to existing services and start running SQL queries while exploring the catalog. As a service owner, you maintain full control over which services are available in SQL Studio and how users authenticate, whether through SSO, OAuth, or basic credentials.

Version 1.11 introduces support for Snowflake, BigQuery, and Trino, with additional engines coming in future releases.

Data Quality as Code

Define and execute data quality tests programmatically using Python, with results automatically published to OpenMetadata's Data Observability dashboard. Integrate validation directly into your transformation pipelines with circuit breaker patterns to prevent bad data from reaching production tables.

Two validation approaches:
  • TestRunner: Validate data already loaded in tables. Reference any table by its fully qualified name (FQN), add any test cases, then execute tests and automatically push results back to OpenMetadata. Ideal for post-load validation and monitoring.
  • DataFrameValidator: Validate data inline during ETL pipelines. Acts as a circuit breaker to prevent bad data from being loaded. Define on_success and on_failure callbacks to control pipeline behavior. Supports chunked processing for large datasets. Optionally publish results to a specific table in OpenMetadata.

What happens when tests run: Results automatically sync to OpenMetadata's Data Quality tab. Failed tests generate incidents with detailed failure reasons (e.g., "minimum value below zero: -521"). Incident alert icons appear on affected tables. The Overview tab shows passing and failing dimensions at a glance.

Why use it: Centralize data quality test logic in version control tools. Provide reusable validation libraries for data engineers. Execute tests as part of your pipeline, not as a separate process. Maintain visibility in Collate while managing logic externally.
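The circuit-breaker idea behind inline validation can be sketched in plain Python. Note that MiniValidator, its check names, and the callbacks below are hypothetical stand-ins that illustrate the pattern, not the actual Collate/OpenMetadata SDK API:

```python
# Minimal circuit-breaker validation sketch. MiniValidator and its checks
# are illustrative stand-ins, not the Collate SDK's DataFrameValidator.

class MiniValidator:
    def __init__(self, checks, on_success=None, on_failure=None):
        self.checks = checks          # list of (name, predicate over rows)
        self.on_success = on_success  # called when every check passes
        self.on_failure = on_failure  # called with the failed check names

    def validate(self, rows):
        failed = [name for name, check in self.checks if not check(rows)]
        if failed:
            if self.on_failure:
                self.on_failure(failed)
            return False              # circuit open: block the load
        if self.on_success:
            self.on_success()
        return True                   # circuit closed: safe to load

rows = [{"id": 1, "amount": 120.0}, {"id": 2, "amount": -521.0}]
validator = MiniValidator(
    checks=[
        ("ids_not_null", lambda rs: all(r["id"] is not None for r in rs)),
        ("amount_min_zero", lambda rs: all(r["amount"] >= 0 for r in rs)),
    ],
    on_failure=lambda names: print(f"blocked load, failed checks: {names}"),
)
ok = validator.validate(rows)  # amount_min_zero fails, so the load is blocked
```

In the real SDK the failed check would additionally publish a test result and incident back to OpenMetadata; here the on_failure callback simply stands in for that step.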

Data Quality Dimensionality

You can now create dimension-level data quality tests that automatically segment results by categorical column values, providing granular visibility into data quality across different data segments.

What’s New

When adding a test case in the Data Observability tab, select "Dimension Level" to associate one or more dimension columns (e.g., status, region, product category) with your column-level test. The new configuration panel provides guidance on use cases and allows you to select multiple dimensions for multi-dimensional analysis.

Test cases with dimensions display a badge indicator showing the number of associated dimensions, making them easy to identify in your test case list.

A new "Dimensionality" tab in test results provides:
  • A calendar heatmap showing pass/fail status across all dimension values over time
  • A summary table with status, impact score, dimension value, and last run timestamp for each segment
  • Drill-down capability to view historical execution trends for any specific dimension value
Impact scores help you quickly identify which dimension values are contributing the most failures based on affected row counts.

Why It Matters
Previously, testing data quality across different categorical segments required writing custom SQL queries (e.g., SELECT … WHERE status = ‘completed’) and maintaining separate test cases for each known value. When new dimension values appeared in your data, you had to manually create additional tests. With dimension-level test cases, a single test automatically covers all current and future values in your dimension column, eliminating maintenance overhead while providing deeper insights into where data quality issues occur.

Notification Templates

You can now fully customize the content and format of alert notifications delivered to Slack, Microsoft Teams, Google Chat, or email.

Manage templates globally or per-alert
Navigate to Settings → Notifications → Templates to view and manage all notification templates. From here you can edit system default templates (which apply globally) or create new reusable templates available across your organization.

Build dynamic templates with placeholders
Use double curly brace syntax to insert dynamic fields that automatically populate when alerts trigger:
  • {{entity.name}} — Entity display name
  • {{entity.owners}} — Entity owners
  • {{entity.description}} — Entity description
  • {{entity.href}} — Direct link to the entity
  • And more
Add conditional logic and rich formatting
Templates support conditional formatting with {{#if}}, {{else}}, and {{/if}} blocks. The rich text editor lets you add bold, italic, code blocks, images, links to assets, and other formatting.

Validate before saving
Click Validate to check your template syntax before saving. Invalid placeholders or syntax errors are highlighted so you can fix them before deployment.

Flexible template assignment
When creating an alert, choose to use the system default template, select a custom template from your organization's library, or create a new template specific to that alert. You can also configure alerts to notify downstream asset owners with customizable depth settings.
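Putting the placeholders and conditional blocks together, a template body might look like the sketch below. The {{entity.*}} fields and {{#if}}/{{else}}/{{/if}} syntax are those documented above; the overall message layout is illustrative, not a shipped default template:

```text
Asset updated: {{entity.name}}

{{#if entity.owners}}
Owners: {{entity.owners}}
{{else}}
This asset has no owners assigned yet.
{{/if}}

Description: {{entity.description}}
View the asset: {{entity.href}}
```

Running Validate before saving will confirm whether every placeholder in a template like this resolves against the alert's entity type.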

Breaking Changes

Elasticsearch & Opensearch Version Changes

  1. Elasticsearch server version: Verify whether the server is running version 8.x. If it is running an earlier version, please upgrade to 8.x before proceeding.
  2. OpenSearch server version: Verify whether the server is running version 2.x. If it is running an earlier version, please upgrade to 2.x before proceeding.

MySQL Configuration Required for Airflow 3.x Migration

If you are using MySQL as your Airflow metadata database and upgrading to Airflow 3.x (the new default in OpenMetadata 1.11), you must configure MySQL to allow temporary stored function creation during the migration process.

Root Cause

During the Airflow 3.x database migration on MySQL, Airflow needs to create a temporary stored function (uuid_generate_v7) to backfill UUIDs for the task_instance table. When MySQL runs with binary logging enabled (which is the default in most production setups), it blocks function creation unless log_bin_trust_function_creators is enabled or the user has SUPER privileges. Without this configuration, the migration fails with an error like:
FUNCTION airflow_db.uuid_generate_v7 does not exist
This is a known limitation when running Airflow 3.x migrations on MySQL with binary logging enabled. PostgreSQL users are not affected by this issue.For more details, see the Apache Airflow issues:
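Before upgrading, you can check whether your MySQL instance is affected. These are standard MySQL statements; if binary logging is ON and function creation is not trusted, the migration described above will fail:

```sql
-- ON here means binary logging is enabled (the common production default)
SHOW VARIABLES LIKE 'log_bin';

-- OFF here means non-SUPER users cannot create stored functions, so the
-- Airflow 3.x migration's uuid_generate_v7 function creation will be blocked
SHOW VARIABLES LIKE 'log_bin_trust_function_creators';
```

If both conditions hold, apply one of the resolution options below before starting the upgrade.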

Resolution

Option 1: Delete and Recreate the Airflow Database (Strongly Recommended)
The simplest and most reliable solution is to delete the existing Airflow database and let OpenMetadata recreate it fresh during startup. The Airflow database only stores workflow execution history and metadata; it does not contain any of your OpenMetadata configurations, connections, or ingestion pipeline definitions.
This is the recommended approach because it avoids all migration complexities and ensures a clean state. Your ingestion pipelines and their configurations are stored in the OpenMetadata database, not in Airflow’s database.
# Connect to your MySQL instance and drop the Airflow database
docker exec -i openmetadata_mysql mysql -u USERNAME -pPASSWORD -e "DROP DATABASE IF EXISTS airflow_db;"
Then recreate the database with the proper character set and grant privileges:
CREATE DATABASE airflow_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL PRIVILEGES ON airflow_db.* TO 'airflow_user'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Execute this via command line:
# Recreate the Airflow database
docker exec -i openmetadata_mysql mysql -u USERNAME -pPASSWORD -e "CREATE DATABASE airflow_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci; GRANT ALL PRIVILEGES ON airflow_db.* TO 'airflow_user'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES;"

# Restart the ingestion container to run migrations on the fresh database
docker restart openmetadata_ingestion
Replace USERNAME and PASSWORD with your MySQL root credentials, and airflow_user with your actual Airflow database user if different. For Docker Quickstart deployments, the default root credentials are root / password.
After the ingestion container restarts successfully, you must redeploy all your ingestion pipelines from the OpenMetadata UI. This registers the DAGs in the fresh Airflow database.
Option 2: Manual Migration Fix (If You Cannot Delete the Database)

If you have specific requirements to preserve the Airflow execution history and cannot delete the database, follow the manual steps below.

Step 1: Enable MySQL Configuration

First, enable log_bin_trust_function_creators in your MySQL instance to allow Airflow to create the necessary stored function. For Docker deployments, add this to your docker-compose.yml file under the MySQL service:
services:
  mysql:
    command: "--log-bin-trust-function-creators=1"
For standalone MySQL instances, execute this query as a user with sufficient privileges:
SET GLOBAL log_bin_trust_function_creators = 1;
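To confirm the setting took effect, you can run a quick check with standard MySQL syntax:

```sql
-- Should report ON (or 1) after the change
SHOW GLOBAL VARIABLES LIKE 'log_bin_trust_function_creators';
```

Note that SET GLOBAL does not persist across server restarts; add the option to your MySQL configuration if you need it to survive a restart.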
Step 2: Clean Airflow Database

After enabling the MySQL configuration, choose one of the following options based on your situation:

Option 2a: Truncate Task Instance Table

If you want to avoid conflicting migration changes, you can truncate the task_instance table. This approach removes all task execution history but preserves your DAGs and connections.
This will delete all historical task execution data. Only use this if you’re okay with losing task run history.
-- Clean task_instance table to avoid migration conflicts
USE airflow_db;

-- Truncate task_instance table
TRUNCATE TABLE task_instance;

-- Verify the table is empty
SELECT COUNT(*) FROM task_instance;
Execute this script:
# Run the cleanup script on your MySQL container
docker exec -i openmetadata_mysql mysql -u USERNAME -pPASSWORD -e "USE airflow_db; TRUNCATE TABLE task_instance; SELECT COUNT(*) as remaining_rows FROM task_instance;"

# Restart the ingestion container to apply migrations
docker restart openmetadata_ingestion
Option 2b: Fix Stuck Migrations (If Migration Already Failed)

If your migration is already stuck midway (the task_instance table was partially modified), you need to reset the migration state before restarting. Save the following SQL script as fix_airflow_migration.sql:
-- Fix Airflow 3.x migration issue
-- This script fixes the partial migration of task_instance table

USE airflow_db;

-- Check if the migration was partially applied
-- If 'id' column exists but isn't properly configured, we need to fix it

-- First, check the current state
SHOW COLUMNS FROM task_instance LIKE 'id';

-- Drop the problematic column if it exists
SET @exist := (SELECT COUNT(*) FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'airflow_db'
    AND TABLE_NAME = 'task_instance'
    AND COLUMN_NAME = 'id');

SET @sqlstmt := IF(@exist > 0,
    'ALTER TABLE task_instance DROP COLUMN id',
    'SELECT ''Column does not exist'' AS status');

PREPARE stmt FROM @sqlstmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Reset the alembic version to before this migration
-- The migration that's failing is: d59cbbef95eb (Add UUID primary key to task_instance)
-- We need to set it back to the previous version: 05234396c6fc
UPDATE alembic_version SET version_num = '05234396c6fc' WHERE version_num = 'd59cbbef95eb';

-- Verify the changes
SELECT * FROM alembic_version;
SHOW COLUMNS FROM task_instance LIKE 'id';
Then execute the script and restart the container:
# Run the fix script on your MySQL container
docker exec -i openmetadata_mysql mysql -u USERNAME -pPASSWORD < fix_airflow_migration.sql

# Restart the ingestion container
docker restart openmetadata_ingestion
Replace USERNAME and PASSWORD with your actual MySQL credentials, and ensure the database name matches your configuration (default is airflow_db).

Collate - Deprecating Metapilot

We are removing the Metapilot UI in favor of the new Ask Collate features. We are not removing any functionality, but enhancing both the chat experience and query optimization agents.
  • Instead of the floating Metapilot icon in the corner of the UI, there is now a dedicated AskCollate interface accessible from the navigation panel, where you can interact with AskCollate to ask questions about both your data and your metadata.
  • The chat experience when analyzing queries in the Queries tab of table assets will now be powered by the new SQL Agent, while keeping the existing workflow unchanged.

Deprecating Python 3.9

Python 3.9 reached end of life in October 2025, and most of the Ingestion Framework dependencies have already dropped support for it. We are now removing support for Python 3.9 in the ingestion framework and adding support for Python 3.12.

Airflow 3.X will be the new default

As part of the Python changes, we are also updating the default OSS ingestion image to be based on Airflow 3.x. If you are still using 2.10.x in your own custom images, we will continue to support that version.

POST api/v1/dataQuality/testCases Permission Change

We previously enforced the EditTests operation on the Table resource as the permission for creating test cases. We have now introduced a new CreateTests operation on the Table resource for finer-grained control over creating versus editing tests on Table entities. If you previously used an EditTests operation in a policy on a Table resource to prevent the creation of test cases, you will need to add the CreateTests operation to that policy.
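For example, a deny rule that previously blocked only test edits would now also need to deny test creation. The sketch below follows the general shape of an OpenMetadata policy rule (name, resources, operations, effect); the rule name is illustrative, so verify the exact field layout against the policy editor or API in your version.

```json
{
  "name": "DenyTableTestChanges",
  "resources": ["table"],
  "operations": ["EditTests", "CreateTests"],
  "effect": "deny"
}
```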

Changelog

Fixes

  • Security & dependencies: multiple vulnerability fixes and dependency updates (netty, commons-lang3, org.json, angus mail, protobuf) and pinned pydantic to less than 2.12.0 to address compatibility issues.
  • UI fixes: resolved numerous UI bugs including import warning messages, left-sidebar settings icon, drawer loading overlay scroll behavior, Data Product icon stroke width, default font sizing and color, tab rendering and persona page issues, duplicate owners/tier fields, activity feed header/navigation and widget title widths, console warnings, lineage paging/rendering, edit-lineage button placement, description rendering on non-Chromium browsers, and other component errors and warnings.
  • Search & indexing: fixes for vector search index creation/validation, prevention of truncation in Elasticsearch indices (added explicit fqnHash and lineage mappings with ignore_above 512), restored missing search documents for incidents, and addressed large-parameter and indexing edge cases.
  • Query Runner & DB fixes: enabled parallel execution for Query Runner; DB query fixes (SSO/OAuth); SELECT-only enforcement; saved-query per user; pagination added for Snowflake usage/lineage queries; various Snowflake/BigQuery CLI and query-related bug fixes.
  • Data quality & dimensionality: fixes to Data Quality dashboard filtering, freshness tests, dimensionality validators, incident reporting and consolidated ChangeEvents for test runs; migrations and retention fixes for test case results and profile data.
  • Connectors & lineage parsing: Databricks/DLT auth and parsing fixes, PowerBI native query lineage extraction and custom API URL fixes, Kafka Connect lineage and Confluent Cloud parsing fixes, BigQuery exporter bug fixes, Collibra connector fixes.
  • API & backend fixes: domain assets count mismatch, glossary term circular/self-referential parent handling (preventing API hangs), table diff validation parameter (table2.keyColumns), support for datamodel source URL, explicit fixes for view-name scoping and local-variable issues, and prevention of double notifications on cover image upload failures.
  • Authentication, tokens & SSO: LDAP login retry improvements, SAML/Azure AD timestamp compatibility, Support App token evaluation fixes, improved bot/OIDC handling, and explicit bearer-token error messaging.
  • Migrations & reliability: migration fixes and adjustments for multiple versions (including 1.10.x → 1.11.x moves), zero-downtime reindexing orphan cleanup, socket/connect timeout increases to 30s, fixes to prevent streamable log leaks, and other reliability fixes.
  • Miscellaneous: fixed incidents and notification issues (missing Slack notifications for workflow-generated approval tasks, incorrect user/team URLs in notifications), improved Okta public key URL handling, fixed DBT and Snowflake integration issues, and numerous other cross-area bug fixes.

Improvements

  • Custom Workflows & UI: full implementation and UI revamp for Custom Workflows, Knowledge Center and Overview improvements, domain & data-product field support, project/explorer card enhancements, domain tree view, pipeline view node/edge support, and multiple UI styling and layout improvements across the app.
  • AskCollate & AI experience: AskCollate UI and chat enhancements, CAIP-related pipe updates, agent improvements, AskCollate Slack integration and chat/profile components for messages.
  • SQL & Query Runner enhancements: saved-query per user, improved userAuthConfig response for UI, improved logging, SELECT-only enforcement improvements, and performance/pagination improvements for Snowflake usage/lineage.
  • Connectors & exporters: added or improved connectors and exporters including AWS Kinesis Firehose, BigQuery exporter, Collibra connector, Hex dashboard connector, Kafka Connect Confluent Cloud support, ADLS unstructured containers, Collibra and Hex integrations, and PowerBI improvements (custom API URL, databricks lineage parsing).
  • Embeddings & vector search: DJL local embeddings support, efficient k-NN filtering, increased neighbor limits, soft/hard delete handling, embedding model tracking, and improved vector index validation.
  • Data Observability & Data Quality as Code: new Data Quality as Code APIs, DataFrame/TestRunner improvements, support for dimension-level DQ tests and a wide set of dimensionality validators (mean, median, min/max, sum, stddev, regex, not-null, uniqueness, etc.), plus UI updates for dimensionality analysis and test result exploration.
  • Notifications & templates: enhanced notification templates with rich formatting, Handlebars helpers, template preview and test send, transformers, and permission controls for templates; notification template UI added.
  • Performance & reliability: ingestion log streaming & caching improvements (streaming + caching for downloads), Redis added as an optional cache, increased socket/connect timeouts to 30s, improved handling for streamable logs, and general performance tweaks.
  • Search & indexing improvements: Search reindex enhancements (selective entity reindex), improved stemmer language support for OpenSearch, unified ES/OS client API with separate index management, and other search reliability improvements.
  • Observability & tooling: added workflow resource utilization metrics to aid troubleshooting, improved metrics page docs and messaging, and better logging and error messages across services.
  • Miscellaneous improvements: Impersonation by bots, bulk update APIs for data assets, selective entity reindex for passed entity refs, added support for classification tags in dbt meta field, Kafka lineage support in Databricks pipelines, Hex & Collibra connectors, improved Tableau logging, and other end-user enhancements.
Full Changelog: link