Flink-Iceberg-Connector Write Process

October 10, 2022 · 1056 words · 5 min · Big Data · Lake House · Stream Compute · Storage

The Iceberg community provides an official Flink Connector, and this post's source-code analysis is based on it.

Overview of the Write Submission Process

Flink writes data through the chain RowData -> distributeStream -> WriterStream -> CommitterStream. Before data is committed it is stored in intermediate files; the data becomes visible to the system only after the commit, which writes the manifest, snapshot, and metadata files. The analysis starts from the connector's operator-chaining method:

    private <T> DataStreamSink<T> chainIcebergOperators() {
      // Fail fast if the builder was not initialized with an input stream.
      Preconditions.checkArgument(inputCreator != null,
          "Please use forRowData() or forMapperOutputType() to initialize the input DataStream.");
      // ... (excerpt truncated in the original post)
    }
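Before stepping through the internals, here is a minimal usage sketch of the public API that triggers this operator chain, assuming an existing DataStream<RowData> and a Hadoop-location table; the helper name and table path are illustrative:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.table.data.RowData;
    import org.apache.iceberg.flink.TableLoader;
    import org.apache.iceberg.flink.sink.FlinkSink;

    public final class IcebergAppendSketch {
      // Wires an existing DataStream<RowData> into the Iceberg sink. append()
      // assembles the distribute -> writer -> committer operator chain; data
      // files become readable only after the committer writes a new snapshot
      // (typically on checkpoint completion).
      public static void appendToIcebergTable(DataStream<RowData> input, String hadoopTablePath) {
        TableLoader tableLoader = TableLoader.fromHadoopTable(hadoopTablePath); // path is illustrative
        FlinkSink.forRowData(input)
            .tableLoader(tableLoader)
            .append();
      }
    }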

Apache-ORC Quick Investigation

October 5, 2022 · 565 words · 3 min · Column Store · Big Data · Storage

Iceberg supports both the ORC and Parquet columnar formats. Compared to Parquet, ORC offers advantages in query performance and ACID support. Given the data lakehouse's future requirements for query performance and ACID compliance, we are researching ORC to support an upcoming demo involving Flink, Iceberg, and ORC.

Research Focus: ORC file encoding, file organization, and indexing support.

File Layout

An ORC file can be divided into three main sections. Header: identifies the file type.
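A quick way to see this layout concretely is to dump a file's stripe directory. A minimal sketch using the Java ORC reader API, assuming an existing local ORC file whose path is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.orc.OrcFile;
    import org.apache.orc.Reader;
    import org.apache.orc.StripeInformation;

    public final class OrcLayoutSketch {
      public static void main(String[] args) throws Exception {
        // Open an existing ORC file; the path is illustrative.
        Reader reader = OrcFile.createReader(
            new Path("/tmp/example.orc"), OrcFile.readerOptions(new Configuration()));

        // The file footer records the schema, row count, and stripe directory.
        System.out.println("schema: " + reader.getSchema());
        System.out.println("rows:   " + reader.getNumberOfRows());

        // Each stripe holds index streams, data streams, and a stripe footer.
        for (StripeInformation stripe : reader.getStripes()) {
          System.out.printf("stripe offset=%d index=%dB data=%dB footer=%dB rows=%d%n",
              stripe.getOffset(), stripe.getIndexLength(), stripe.getDataLength(),
              stripe.getFooterLength(), stripe.getNumberOfRows());
        }
      }
    }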

Apache-Iceberg Quick Investigation

October 5, 2022 · 1208 words · 6 min · Lake House · Storage · Big Data

A table format for large-scale analysis of datasets; a specification for organizing data files and metadata files; a schema-semantics abstraction between storage and computation. Developed and open-sourced by Netflix to enhance scalability, reliability, and usability.

Background

Issues encountered when migrating Hive to the cloud: Hive's dependency on List and Rename semantics makes it impossible to replace HDFS with cheaper OSS, and its scalability suffers because schema information is centrally stored in the metastore, which can become a performance bottleneck.
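As a small illustration of this metadata abstraction, a minimal sketch (assuming an existing Iceberg table at an illustrative HDFS location) that loads a table via the Java API and reads its schema and snapshots from the table's own metadata files, with no metastore lookup or directory listing:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.iceberg.Snapshot;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.hadoop.HadoopTables;

    public final class IcebergMetadataSketch {
      public static void main(String[] args) {
        // Load an existing Iceberg table by location; the path is illustrative.
        Table table = new HadoopTables(new Configuration())
            .load("hdfs://nn:8020/warehouse/db/events");

        // Schema and partition spec are read from the table's metadata files,
        // not from a central metastore.
        System.out.println("schema: " + table.schema());
        System.out.println("spec:   " + table.spec());

        // Each commit produces an immutable snapshot; data files are tracked
        // through manifests, so readers never rely on List or Rename semantics.
        for (Snapshot snapshot : table.snapshots()) {
          System.out.printf("snapshot %d at %d: %s%n",
              snapshot.snapshotId(), snapshot.timestampMillis(), snapshot.operation());
        }
      }
    }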