About Me#

I’m Adhi Nugroho, a data engineer with 4+ years of experience building robust pipelines and scalable analytics infrastructure. I create value using my skills in data modelling, software systems, automation, and evidence-driven decision making.

My work delivers direct impact: reduced operational costs, faster decision making, improved process efficiency, and the foundation needed for growth. I turn complex data landscapes into competitive advantage.

What I Do#

My expertise lies at the intersection of software engineering and data science. I design and implement end-to-end solutions that handle everything from initial ingestion through automated testing and infrastructure to final visualisation, all while ensuring quality and accessibility across the organisation.

Core Specialisations:

  • Cloud architecture, primarily GCP
  • On-premise architecture
  • Real-time streaming pipelines with Apache Kafka and Spark
  • Data warehouse design and optimisation
  • ETL/ELT pipeline development using Python, SQL, dbt and Airflow
  • DevOps and infrastructure as code
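As a rough illustration of the ETL pattern behind that last point, here is a minimal extract–transform–load sketch in plain Python with SQLite standing in for a warehouse. The record fields, table name, and aggregation are hypothetical, chosen only to show the shape of a pipeline; real work would use the tools listed above.

```python
import sqlite3

# Hypothetical raw records, standing in for an ingestion source.
raw_events = [
    {"user_id": 1, "amount": "19.99", "currency": "GBP"},
    {"user_id": 2, "amount": "5.00", "currency": "GBP"},
    {"user_id": 1, "amount": "12.50", "currency": "GBP"},
]

def extract():
    """Pull raw records from the source (here, a static list)."""
    return raw_events

def transform(records):
    """Cast string amounts to floats and aggregate spend per user."""
    totals = {}
    for r in records:
        totals[r["user_id"]] = totals.get(r["user_id"], 0.0) + float(r["amount"])
    return [(user_id, round(total, 2)) for user_id, total in sorted(totals.items())]

def load(rows, conn):
    """Write the transformed rows into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS user_spend (user_id INTEGER, total REAL)")
    conn.executemany("INSERT INTO user_spend VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT * FROM user_spend").fetchall())
# → [(1, 32.49), (2, 5.0)]
```

In practice each stage would be a separate Airflow task (or dbt model for the transform), but the extract/transform/load separation is the same.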

My Journey#

My path into data engineering began during my Mechanical Engineering degree at Bristol, where I became heavily interested in the simulation side of things (FEA, CFD). That led me to pursue a more software-heavy Master’s course, where I discovered the fascinating challenge of making sense of massive datasets. What started as curiosity about an energy forecasting problem evolved into a career focused on building core solutions that power modern analytics.

I’ve had the privilege of working across different industries: from scaling data systems at a financial institution processing billions of transactions, to architecting and delivering solutions for a major telco that improved the efficiency of critical processes.

Core Engineering Values#

I believe great engineering is invisible to end users but critical in impact. We are often the unsung operators of organisations. Thus, my approach rests on three core values:

Reliability First: Above all, data systems must produce correct data, and they must keep doing so when things go wrong. I build with correctness and failure scenarios in mind, implementing monitoring, automated testing, and error handling.
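One concrete example of building for failure scenarios is retrying transient errors with exponential backoff before surfacing them. The sketch below is a hypothetical helper, not taken from any specific system; a production version would add jitter, structured logging, and alerting on the final failure.

```python
import time

def with_retries(task, attempts=3, base_delay=0.01):
    """Run task, retrying transient failures with exponential backoff.

    Illustrative only: real pipelines would also emit metrics and
    alert on the final, unrecoverable failure.
    """
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise  # surface the failure to monitoring/alerting
            time.sleep(base_delay * 2 ** (attempt - 1))

# A flaky task that succeeds on its third call.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky_fetch))  # → ok
```

Orchestrators such as Airflow offer this per-task out of the box, but the principle, distinguishing transient from permanent failure, applies everywhere in a pipeline.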

Scalability: My solutions should handle tomorrow’s growth through reusability and optimised performance. I create systems that scale and adapt to changing business needs without requiring complete rewrites.

Collaboration: The best data infrastructure must serve both technical and business stakeholders. I prioritise clear documentation and strong stakeholder communication, bridging the two functions.

Beyond the Role#

When I’m not improving query performance or debugging pipeline failures, I enjoy mentoring junior engineers, exploring process improvements and engineering direction, and regularly attending data/software expos.

I’m an avid football player. I primarily play defensive midfielder, a position that mirrors the systematic thinking and critical (often behind-the-scenes) responsibilities of data engineering. Both require hard work, anticipation, and creative solutions to big challenges.

Let’s Connect#

I’m always excited to discuss interesting challenges, whether you’re looking to build something new or optimise existing systems. Please feel free to reach out :)