In the 12th and final post of the series, I release the open-source repository that implements the LinkedIn analytics pipelines, and discuss future plans.

In the 11th and penultimate post of the series, I look back on what has been achieved, what can be done better and what has been learned.

In the 10th post of the series, I show how to set up observability on our data pipeline to monitor its health and respond as needed.

In the 9th post of the series, I use a combination of Git and Databricks Asset Bundles to make the data pipeline easily deployable and maintainable.

In the 8th post of the series, I convert the scattered pieces of data ingestion, processing and dashboarding into an orchestrated and automated data pipeline.

In the 7th post of the series, I build a dashboard on Databricks for the ingested LinkedIn data.

In the 6th post of the series, I explore the approaches to modelling the LinkedIn data in the gold layer.

In the 4th article in the series, I take a deep dive into the ingestion process for the bronze layer and highlight relevant industry practices along the way.

In this 2nd article in a LinkedIn analytics data product series, I will examine what LinkedIn data to ingest and where to ingest it from.

In this first article in the series, I will examine what already exists out there for personal LinkedIn analytics, then explain why I decided to build my own analytics platform on Databricks Free Edition. Subsequent articles will explore how I implement the analytics platform in a step-by-step fashion.