Migrating the Mammoth
Proposed session for SQLBits 2026
TL;DR
Based on a real enterprise case, this session details how a world-leading tech company modernized a massive, highly fragmented data platform—migrating RC, ORC, Parquet, and Avro datasets to Delta Lake as part of a multi-petabyte transformation. Attendees will learn how the team designed the architecture, automation framework, and migration tooling needed to orchestrate thousands of pipelines, validate data at scale, and handle complex edge cases such as multi-format tables, massive (>50 TB) datasets, and long-running legacy workloads. Discover the patterns, challenges, and hard-earned lessons required to execute a migration of this magnitude successfully.
Session Details
This session is grounded in the real-world experience of a global technology company undertaking one of the largest data-platform migrations in the industry—modernizing its fragmented data estate and moving to Delta Lake at extraordinary scale. Managing more than 60 PB of data spread across four different formats, 750+ ingestion pipelines, and over 5,000 ETL jobs, the organization needed a migration strategy engineered for both massive volume and operational complexity.
We will explore how the team executed this transformation using a structured “divide and conquer” model, separating the work into targeted migration workstreams and aligning them to OKRs. As tooling sophistication grew, migration velocity increased, supported by an agile delivery model and a dedicated “Migration Machine” designed to validate data, orchestrate dependency-heavy workloads, and provide complete visibility through custom dashboards.
Attendees will gain insight into the real challenges encountered at scale, including migrating 50+ TB tables, dealing with multiple source formats, orchestrating batch backfills, validating large datasets, and designing alternate paths for tables too slow or complex for standard migration flows.
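To make the validation challenge concrete, the sketch below shows one common pattern for checking a migrated table against its legacy source: compare row counts and an order-independent content checksum. This is a minimal PySpark illustration under assumed table names, not the tooling built by the team described in this session.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("migration-validation").getOrCreate()

# Hypothetical table names, used only for illustration.
legacy = spark.table("legacy_db.events_orc")      # original ORC/RC/Avro/Parquet table
migrated = spark.table("lakehouse.events_delta")  # Delta Lake target

# 1) Row-count parity: the cheapest signal that a backfill is complete.
legacy_count = legacy.count()
migrated_count = migrated.count()

# 2) Order-independent checksum: hash every column per row, then sum the
#    hashes so the result does not depend on row order. For very large
#    (50+ TB) tables this is typically run per partition rather than whole-table.
def checksum(df):
    cols = [F.col(c).cast("string") for c in df.columns]
    row_hash = F.xxhash64(*cols).cast("decimal(38,0)")
    return df.select(F.sum(row_hash).alias("chk")).first()["chk"]

assert legacy_count == migrated_count, "row counts diverge"
assert checksum(legacy) == checksum(migrated), "content checksums diverge"
```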
We will conclude with the tangible outcomes: up to 1600× improvement on highly selective queries for petabyte-scale tables, significant gains in table-read performance, and major acceleration of BI workloads—results internal teams described as truly “game changing”.
This session offers a deeply practical look at what it takes to migrate a “mammoth” data estate to a modern lakehouse architecture, providing actionable lessons for organizations planning or scaling their own modernization efforts.
3 things you'll get out of this session
1. Understand how to design and execute a large-scale, multi-format data migration—including strategies for converting RC, ORC, Parquet, and Avro datasets into Delta Lake while maintaining data integrity, lineage, and operational continuity (see the conversion sketch after this list).
2. Learn the architectural patterns, automation frameworks, and tooling required to orchestrate thousands of ingestion and transformation pipelines during a multi-petabyte modernization effort, with a focus on scalability, validation, and reliability.
3. Identify common edge cases and technical challenges encountered during massive enterprise migrations—such as handling 50+ TB tables, mixed-format estates, legacy dependencies, and long-running workloads—and understand the practical solutions that enabled successful outcomes.
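As a concrete illustration of point 1, the sketch below shows two common paths into Delta Lake: an in-place CONVERT TO DELTA for data already in Parquet, and a read-and-rewrite path for formats such as ORC, RC, or Avro. It is a minimal PySpark example with assumed paths and table names, not the migration framework described in the session; reading Avro additionally requires the spark-avro package.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-to-delta").getOrCreate()

# Path 1: Parquet can be converted in place -- Delta adds a transaction log
# next to the existing files, so no data is rewritten.
spark.sql("""
    CONVERT TO DELTA parquet.`/data/raw/clicks_parquet`
    PARTITIONED BY (event_date DATE)
""")

# Path 2: ORC (or RC, Avro) must be read and rewritten as Delta,
# since in-place conversion only supports Parquet-based layouts.
(spark.read.format("orc")
      .load("/data/raw/sessions_orc")           # hypothetical source path
      .write.format("delta")
      .partitionBy("event_date")
      .mode("overwrite")
      .save("/data/lakehouse/sessions_delta"))  # hypothetical target path

# Register the new table so downstream pipelines can switch over.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.sessions
    USING DELTA LOCATION '/data/lakehouse/sessions_delta'
""")
```

For the 50+ TB tables mentioned above, the in-place path is usually preferable when the source is already Parquet, since it avoids a full rewrite; mixed-format or non-Parquet tables generally need the rewrite path combined with an orchestrated backfill.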