We are pleased to announce the March 2026 release of Access Data Engine.
This release introduces Dynamic Views – a powerful new data object type – alongside significant improvements to Data Streams and Data Flows. New navigation options and platform-wide enhancements further refine the user experience.
The following sections detail all new features and enhancements.
Data Flows
Improved Navigation
A new left-hand panel in the Data Flow editor provides a faster and more organized way to browse and add data sources:
• Browse tables by folder
• Drag and drop tables directly onto the canvas
• Resize or collapse the panel to suit your workspace
Import Legacy Views into Data Flows
Any legacy View based on a Table can now be imported into a Data Flow as a set of nodes – complementing the existing import capability for legacy Fusions and Merges. This makes it straightforward to transition all legacy data objects into Data Flows, improving efficiency and long-term maintainability.
Note: legacy Views, Merges, and Fusions will be deprecated in a future release. We encourage users to begin migrating these objects to Data Flows.
Tables
(NEW) Dynamic Views
Dynamic Views are a new type of data object built on top of an existing Table. They provide a flexible way to control how data is presented and shared, without modifying the underlying table.
Key capabilities:
• Filter rows by specific values, a formula, or a User / Team Parameter
• Rename, reorder, and hide columns
• Changes are reflected dynamically – no need to rebuild or duplicate data
Example use cases:
• Create a filtered dataset per customer or team for use in a Data Stream
• Prepare a clean, limited-column projection for export or Data Stream delivery
• Apply dynamic date-range filters without rebuilding the source table
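Conceptually, a Dynamic View behaves like a row filter plus a column rename/reorder/hide mapping applied on read, leaving the source table untouched. The sketch below models that behavior in plain Python; all names (`dynamic_view`, the sample table, the column mapping) are hypothetical and for illustration only – Dynamic Views themselves are configured in the product, not in code.

```python
def dynamic_view(rows, predicate, columns):
    """Model a Dynamic View: filter rows and project/rename columns.

    rows:      list of dicts (the source table)
    predicate: callable deciding which rows are visible
    columns:   mapping of source column -> displayed name;
               columns absent from the mapping are hidden,
               and mapping order sets display order
    """
    return [
        {display: row[source] for source, display in columns.items()}
        for row in rows
        if predicate(row)
    ]

table = [
    {"customer": "Acme", "region": "EU", "amount": 120},
    {"customer": "Bolt", "region": "US", "amount": 80},
]

# A per-team view: EU rows only, "amount" hidden, "customer" renamed.
view = dynamic_view(
    table,
    lambda r: r["region"] == "EU",
    {"customer": "Customer Name", "region": "region"},
)
# Because the view is evaluated on read, later changes to `table`
# are reflected automatically – nothing is rebuilt or duplicated.
```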
Unique Key Customization
Unique keys on SmartViews can now be set and modified in the Schema Editor. Keys are still defined automatically where appropriate, but they can now be adjusted at any time to match your specific data requirements.
Data Streams
New Refresh Methods: Update and Append & Update
Two new refresh methods are now available when sending data to database destinations:
• Update: update existing rows in the destination table based on a matching key.
• Append & Update: insert new rows and update existing ones in a single operation (upsert).
Supported destinations: Amazon Redshift, MySQL, Oracle, PostgreSQL, Snowflake, SQL Server.
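The difference between the two methods is in how rows whose key is not yet present in the destination are handled. The sketch below illustrates the semantics with in-memory dicts; the engine performs the equivalent logic against the destination database, and the function and variable names here are hypothetical.

```python
def refresh(dest, incoming, key, method):
    """Illustrate the two refresh methods on list-of-dict "tables".

    method == "update":        rows with an existing key are updated;
                               rows with a new key are ignored.
    method == "append_update": upsert – existing keys are updated
                               and new keys are inserted.
    """
    by_key = {row[key]: dict(row) for row in dest}
    for row in incoming:
        if row[key] in by_key:
            by_key[row[key]].update(row)       # matching key: update in place
        elif method == "append_update":
            by_key[row[key]] = dict(row)       # new key: insert (upsert only)
    return list(by_key.values())

dest = [{"id": 1, "qty": 5}]
incoming = [{"id": 1, "qty": 7}, {"id": 2, "qty": 3}]

updated = refresh(dest, incoming, "id", "update")
upserted = refresh(dest, incoming, "id", "append_update")
```

With `update`, only the existing row (`id` 1) changes; with `append_update`, the new row (`id` 2) is inserted as well.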
Bulk Insert for Oracle and PostgreSQL
Bulk insert is now available for Oracle and PostgreSQL destinations, improving performance when sending large volumes of data.
Send Data Without Headers
A new option allows Data Streams to send data files without column headers, for compatibility with downstream systems that do not expect header rows.
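For a sense of what the option changes in the delivered file, the snippet below writes the same rows with and without a header line using Python's standard `csv` module; the helper name and sample data are illustrative, not part of the product.

```python
import csv
import io

def write_csv(rows, include_header=True):
    """Serialize list-of-dict rows to CSV, optionally omitting the header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    if include_header:
        writer.writeheader()   # downstream systems that expect no header
    writer.writerows(rows)     # would receive only the data rows below
    return buf.getvalue()

rows = [{"id": 1, "name": "Acme"}]
with_header = write_csv(rows)                        # "id,name" line first
without_header = write_csv(rows, include_header=False)  # data rows only
```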
Execute Multiple Data Streams at Once
Multiple Data Streams can now be triggered simultaneously using the new bulk run action, reducing the time needed to orchestrate large data delivery operations.
Connectors
Other Connector Updates
• Pipedrive: updated to the latest API version; selected SmartViews have new versions.
• Web Service: improved pagination support, including stop conditions based on comparing two fields; the PATCH method now supports a request body; improved retry logic on 500/504 errors.
• File Storage connectors: Parquet timestamp fields can now be automatically converted to Date on import.
SmartView Enhancements
Various
Filters – NOT BETWEEN Condition
A new NOT BETWEEN filter operator is now available across the platform, enabling exclusion-range filtering without manual workarounds.


