Iceberg

caution

The Cloud version is still under active development. DO NOT use it for production purposes.

This page guides you through the process of setting up the Iceberg destination connector.

Sync overview

Output schema

This connector maps an incoming Airbyte stream to an Iceberg table, and an Airbyte namespace to an Iceberg database. Fields in the Airbyte message become columns in the Iceberg table. Each table contains the following columns:

  • _airbyte_ab_id: A randomly generated UUID.
  • _airbyte_emitted_at: A timestamp representing when the event was received from the data source.
  • _airbyte_data: A JSON text representing the extracted data.
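For illustration, the raw-table layout above could be written out with the Iceberg Java API roughly as follows. This is a minimal sketch, not the connector's actual code; the field IDs are assumed.

```java
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class AirbyteRawSchemaSketch {
    public static void main(String[] args) {
        // Sketch of the three raw columns described above,
        // expressed with the Iceberg Java API. Field IDs are illustrative.
        Schema airbyteRawSchema = new Schema(
            Types.NestedField.required(1, "_airbyte_ab_id", Types.StringType.get()),
            Types.NestedField.required(2, "_airbyte_emitted_at", Types.TimestampType.withZone()),
            Types.NestedField.required(3, "_airbyte_data", Types.StringType.get()));

        System.out.println(airbyteRawSchema);
    }
}
```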

Features

The Iceberg destination supports the following features:

| Feature | Supported? (Yes/No) | Notes |
| :--- | :--- | :--- |
| Full Refresh Sync | Yes | |
| Incremental Sync | Yes | |
| Replicate Incremental Deletes | No | |
| SSH Tunnel Support | No | |

Performance considerations

Every ten thousand incoming Airbyte records in a stream (a batch) produce one data file (Parquet/Avro) in the Iceberg table. The batch size is configurable via the Data file flushing batch size property. As the number of Iceberg data files grows, metadata overhead and file-open costs make queries less efficient. Iceberg provides a data file compaction action to address this; you can read more about compaction here. This connector can also compact data files automatically when a stream closes, controlled by the Auto compact data files property, and you can specify the target size of the compacted Iceberg data files.
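The auto-compact step is equivalent in spirit to running Iceberg's rewrite-data-files action yourself. Below is a minimal sketch using Iceberg's Spark actions API; the warehouse path, database/table names, and the 128 MB target size are placeholders, not connector defaults.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.actions.RewriteDataFiles;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.spark.actions.SparkActions;
import org.apache.spark.sql.SparkSession;

public class CompactionSketch {
    public static void main(String[] args) {
        // A Spark session is required by Iceberg's Spark actions.
        SparkSession spark = SparkSession.builder()
            .master("local[*]")
            .appName("compaction-sketch")
            .getOrCreate();

        // Hypothetical warehouse path and table name, for illustration only.
        HadoopCatalog catalog = new HadoopCatalog(
            spark.sparkContext().hadoopConfiguration(), "s3a://my-bucket/warehouse");
        Table table = catalog.loadTable(TableIdentifier.of("airbyte_db", "my_stream"));

        // Rewrite many small files into fewer files near the target size,
        // conceptually what the connector's auto-compact does on stream close.
        RewriteDataFiles.Result result = SparkActions.get(spark)
            .rewriteDataFiles(table)
            .option("target-file-size-bytes", Long.toString(128L * 1024 * 1024))
            .execute();

        System.out.printf("rewritten=%d, added=%d%n",
            result.rewrittenDataFilesCount(), result.addedDataFilesCount());
    }
}
```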

Getting started

Requirements

  • Iceberg catalog: Iceberg uses a catalog to manage tables. This connector currently supports the following (see the sketch after this list):
    • HiveCatalog connects to a Hive metastore to keep track of Iceberg tables.
    • HadoopCatalog doesn't need a Hive metastore, but it can only be used with HDFS or similar file systems that support atomic rename. For HadoopCatalog, this connector uses the Storage Config (S3 or HDFS) to manage Iceberg tables.
    • JdbcCatalog uses a table in a relational database to manage Iceberg tables through JDBC. So far, this connector supports PostgreSQL only.
    • RESTCatalog connects to a REST server, which manages Iceberg tables.
    • GlueCatalog uses the AWS Glue Data Catalog to manage Iceberg tables.
  • Storage medium: where the Iceberg data files are stored. So far, this connector supports S3/S3N/S3A object storage. When using the RESTCatalog, storage can be managed by the server.
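To make the catalog options concrete, here is a minimal sketch that opens two of the supported catalog types with the Iceberg Java API. It is illustrative only, not the connector's internal code; the metastore URI and warehouse path are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.hive.HiveCatalog;

public class CatalogSketch {
    public static void main(String[] args) {
        // HiveCatalog: tracks Iceberg tables in a Hive metastore.
        // The thrift URI and warehouse path below are placeholders.
        HiveCatalog hive = new HiveCatalog();
        hive.setConf(new Configuration());
        Map<String, String> props = new HashMap<>();
        props.put("uri", "thrift://metastore-host:9083");
        props.put("warehouse", "s3a://my-bucket/warehouse");
        hive.initialize("hive", props);

        // HadoopCatalog: no metastore needed; tables are tracked directly
        // in a file system that supports atomic rename.
        HadoopCatalog hadoop =
            new HadoopCatalog(new Configuration(), "s3a://my-bucket/warehouse");

        System.out.println(hive.name() + " / " + hadoop.name());
    }
}
```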

Changelog

| Version | Date | Pull Request | Subject |
| :--- | :--- | :--- | :--- |
| 0.2.2 | 2024-09-23 | 45861 | Keeping only S3 with Glue Catalog as config option |
| 0.2.1 | 2024-09-20 | 45711 | Initial Cloud version for registry purpose [UNTESTED ON CLOUD] |
| 0.2.0 | 2024-09-20 | 45707 | Add support for AWS Glue Catalog |
| 0.1.8 | 2024-09-16 | 45206 | Fixing tests to work in airbyte-ci |
| 0.1.7 | 2024-05-17 | 38283 | Bump Iceberg library to 1.5.2 and Spark to 3.5.1 |
| 0.1.6 | 2024-04-04 | 36846 | Remove duplicate S3 Region |
| 0.1.5 | 2024-01-03 | 33924 | Add new ap-southeast-3 AWS region |
| 0.1.4 | 2023-07-20 | 28506 | Support server-managed storage config |
| 0.1.3 | 2023-07-12 | 28158 | Bump Iceberg library to 1.3.0 and add REST catalog support |
| 0.1.2 | 2023-07-14 | 28345 | Trigger rebuild of image |
| 0.1.1 | 2023-02-27 | 23201 | Bump Iceberg library to 1.1.0 |
| 0.1.0 | 2022-11-01 | 18836 | Initial Commit |