I stumbled upon FastF1 only recently — and honestly, I’m still surprised I missed it for so long.
FastF1 is a Python library that exposes an incredible amount of Formula 1 data across an entire race weekend: practice, qualifying, and race sessions. Not just results and lap times, but also detailed telemetry — speed traces, throttle application, braking, tyre stints, compound usage, and much more.
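To make that concrete, here is a minimal sketch of pulling a session with FastF1. The year, event, driver code, and cache directory are placeholders I picked for illustration; any valid combination works:

```python
from pathlib import Path

import fastf1

# Cache API responses locally so repeated runs don't re-download everything
cache_dir = Path("f1_cache")  # placeholder path
cache_dir.mkdir(exist_ok=True)
fastf1.Cache.enable_cache(str(cache_dir))

# Example: 2024 Monza qualifying
session = fastf1.get_session(2024, "Monza", "Q")
session.load()  # laps, telemetry, weather, etc.

# Fastest lap for one driver, plus its full telemetry trace
lap = session.laps.pick_drivers("VER").pick_fastest()
tel = lap.get_car_data().add_distance()  # Speed, Throttle, Brake, ... vs Distance

print(lap["LapTime"], lap["Compound"])
print(tel[["Distance", "Speed", "Throttle", "Brake"]].head())
```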
It turns out FastF1 has been around for a while. But as they say, better late than never.
As someone who loves both Formula 1 and building things, discovering FastF1 immediately opened up a flood of ideas.
Why FastF1 feels special
What makes FastF1 exciting is not just the data volume, but the granularity.
Instead of asking:

- “Who was faster?”

You can start asking:

- Where was a driver faster?
- Why did a lap work?
- How did tyre choices shape race outcomes?
- How do two qualifying laps differ by just hundredths of a second?
This moves F1 analysis away from headlines and into cause-and-effect.
Enter MCP: learning by building
Around the same time, I had been reading about MCP (Model Context Protocol) and wanted to understand it beyond theory. MCP, at a high level, is about exposing structured tools and data in a way that agents (or other clients) can call reliably.
Rather than learning MCP in isolation, I decided to combine both interests:

- learn MCP properly
- apply it to something I genuinely enjoy — Formula 1
So I started building a small F1 MCP server, backed by FastF1 data.
For now, this is very much a learning project — not a product — but it’s already been surprisingly rewarding.
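To show what “backed by FastF1 data” means in practice, here is roughly the shape of the server, using the official Python SDK’s FastMCP helper. The server name, tool name, and return fields are my own choices for this sketch, not anything the protocol prescribes:

```python
from mcp.server.fastmcp import FastMCP

import fastf1

mcp = FastMCP("f1-data")  # arbitrary server name

@mcp.tool()
def fastest_lap(year: int, event: str, session: str, driver: str) -> dict:
    """Fastest lap time and tyre compound for one driver in one session."""
    s = fastf1.get_session(year, event, session)
    s.load(telemetry=False, weather=False)  # keep it light for a simple query
    lap = s.laps.pick_drivers(driver).pick_fastest()
    return {
        "driver": driver,
        "lap_time": str(lap["LapTime"]),
        "compound": lap["Compound"],
    }

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

A client — Claude Desktop, GitHub Copilot Chat, a custom agent — can then discover and call fastest_lap without knowing anything about FastF1.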
The first two tools I built
At the moment, I’ve implemented just two core functions, keeping things intentionally simple.
1. Tyre strategy visualisation
The first tool generates a tyre strategy timeline for a given race, showing:

- which compounds each driver used
- how long each stint lasted
- how strategies differed across the field
This makes race strategy immediately visual. Instead of reading pit-stop summaries, you can see how races unfolded strategically.
Image: tyre strategy timeline for each driver. Alongside it, in GitHub Copilot Chat, the tool surfaces further insights into how strategies played out across the race.
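The underlying computation is pleasantly small, because FastF1’s laps table already carries stint and compound columns. Here is a condensed sketch of the grouping behind the timeline (event and year are examples; the actual tool adds the bar chart on top):

```python
import fastf1

session = fastf1.get_session(2024, "Monza", "R")  # example race
session.load(telemetry=False, weather=False)

# One row per (driver, stint): which compound, and how many laps it ran
stints = (
    session.laps
    .groupby(["Driver", "Stint", "Compound"])["LapNumber"]
    .agg(first_lap="min", last_lap="max", laps="count")
    .reset_index()
)
print(stints.head())
```

From there, the timeline itself is just one horizontal bar per stint, coloured by compound.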
2. Qualifying lap telemetry comparison
The second tool focuses on qualifying, comparing telemetry from the top drivers’ fastest laps.
It plots:

- speed vs distance
- throttle application
- brake application
Side by side, this reveals exactly where time was gained or lost — often in places that don’t show up in sector times alone.
Image: segment-by-segment comparison of the top three drivers’ driving styles. In the chat window, the tool also fills in details for the remaining seven drivers, covering the full top-10 qualifying results.
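The heart of the comparison is only a few lines per driver. A stripped-down sketch of the speed panel, with example driver codes (the real tool stacks throttle and brake panels underneath):

```python
import fastf1
import matplotlib.pyplot as plt

session = fastf1.get_session(2024, "Monza", "Q")  # example event
session.load()

fig, ax = plt.subplots()
for code in ["VER", "NOR", "LEC"]:  # example top-three codes
    lap = session.laps.pick_drivers(code).pick_fastest()
    tel = lap.get_car_data().add_distance()
    ax.plot(tel["Distance"], tel["Speed"], label=code)

ax.set_xlabel("Distance (m)")
ax.set_ylabel("Speed (km/h)")
ax.legend(title="Fastest lap")
plt.show()
```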
Why MCP fits nicely here
Wrapping these analyses as MCP tools felt natural.
Instead of scripts that only I run locally, MCP encourages thinking in terms of:

- clear inputs (season, race, session, drivers)
- predictable outputs (tables, plots, structured data), as in the sketch below
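Concretely, that contract lives in the tool signature: the type hints become the input schema a client sees, and the return value is plain structured rows rather than a one-off plot. A sketch, reusing the stint grouping from earlier (parameter names and defaults are my own):

```python
from mcp.server.fastmcp import FastMCP

import fastf1

mcp = FastMCP("f1-data")

@mcp.tool()
def stint_summary(
    season: int,                       # clear inputs: season...
    race: str,                         # ...race weekend...
    session: str = "R",                # ...session (race by default)
    drivers: list[str] | None = None,  # optional subset of the field
) -> list[dict]:
    """Stint-by-stint compound usage, as rows any client can render."""
    s = fastf1.get_session(season, race, session)
    s.load(telemetry=False, weather=False)
    laps = s.laps if drivers is None else s.laps.pick_drivers(drivers)
    stints = (
        laps.groupby(["Driver", "Stint", "Compound"])["LapNumber"]
        .count()
        .reset_index(name="laps")
    )
    # Cast numpy types to plain Python so the result serialises cleanly
    return [
        {"driver": r.Driver, "stint": int(r.Stint),
         "compound": r.Compound, "laps": int(r.laps)}
        for r in stints.itertuples()
    ]
```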
This also opens the door to multiple interfaces later:
- CLI tools
- dashboards (Streamlit / web)
- or even AI-driven queries on top of the same data
For now, though, the goal is simple: learn MCP by doing, not by reading specs.
What’s next
There’s a lot more I want to explore:
- combining telemetry with race notes, penalties, and regulations
- richer driver-to-driver comparisons
- experimenting with live data once the 2026 season starts
- exposing more race-weekend concepts as structured MCP tools
I’ll keep this project intentionally lightweight and exploratory.
If there’s a specific race, driver comparison, or kind of plot you’d like to see, feel free to suggest it — I’ll be iterating on this over the coming weeks purely for learning and fun.
GitHub link coming soon once things settle a bit.