Chips, Cells and Code: How Singapore is Applying its 40-Year Industrial Playbook to AI

Originally published on Medium on 19 February 2026. In the 1980s, the big bet was chips. Singapore set out to become a serious player in semiconductors and electronics, starting from low-cost assembly and eventually becoming a key node in the global chip …

In the 12th and final post of the series, I release the open-source repository that implements the LinkedIn analytics pipelines, and discuss future plans.

In the 11th and penultimate post of the series, I look back on what has been achieved, what can be done better and what has been learned.

In the 10th post of the series, I show how to set up observability on our data pipeline to monitor its condition and act as necessary.

In the 9th post of the series, I use a combination of Git and Databricks Asset Bundles to make the data pipeline easily deployable and maintainable.

In the 8th post of the series, I convert the scattered pieces of data ingestion, processing and dashboarding into an orchestrated and automated data pipeline.

In the 7th post of the series, I build a dashboard on Databricks for the ingested LinkedIn data.

In the 6th post of the series, I explore approaches to modelling the LinkedIn data in the gold layer.

In the 5th post of the series, I cover the process of cleaning and transforming the LinkedIn data into a Single Source of Truth (SSOT) in the silver layer.

In the 4th post of the series, I take a deep dive into the ingestion process into the bronze layer and highlight relevant industry practices along the way.