I’m a Senior Data Engineer with 5+ years of experience designing scalable data platforms and ETL/ELT pipelines across cloud environments. I’ve worked in both management consulting and product companies, which has exposed me to a wide range of problems, from translating business requirements into robust data architectures to building and operating reliable, production-grade systems.
My work focuses on end-to-end data architecture, including pipeline design, orchestration strategies, batch and streaming processing patterns, lakehouse design, and analytics-layer modeling. I emphasize architecture principles, design trade-offs, scalability, reliability, and cost efficiency over any single tool or platform.
People commonly reach out to me for help designing and modernizing lakehouse solutions, especially those built on open table formats like Apache Iceberg, and for guidance on choosing storage, compute, and orchestration patterns that will scale over the long term.
I’ve worked hands-on with the modern data stack, including AWS and GCP, Airflow and Cloud Composer, dbt, Kafka, Spark, Apache Iceberg, and analytics platforms such as BigQuery and Snowflake. I’ve also led migrations from legacy systems to modern cloud data stacks and optimized pipelines for performance and cost.
Clients benefit from practical, tool-agnostic guidance with clear explanations and actionable next steps, whether they’re designing a new lakehouse platform, modernizing ETL/ELT pipelines, or making architecture decisions across teams.