Sunday, March 8, 2026

𝗙𝗼𝗼𝗱 𝗳𝗼𝗿 𝗧𝗵𝗼𝘂𝗴𝗵𝘁 - 𝗦𝗮𝗻𝗱𝘆’𝘀 𝘁𝗮𝗸𝗲 𝗼𝗻 𝗟𝗟𝗠 - 𝗣𝗮𝗿𝘁 𝟮

If your near-and-dear one was having a health issue, who would you go to?

  • 𝘛𝘩𝘦 𝘣𝘦𝘴𝘵 𝘈𝘨𝘦𝘯𝘵𝘪𝘤 𝘈𝘐 𝘋𝘰𝘤𝘵𝘰𝘳 𝘰𝘶𝘵 𝘵𝘩𝘦𝘳𝘦

  • 𝘈 "𝘷𝘪𝘣𝘦-𝘤𝘰𝘥𝘦𝘥" 𝘋𝘰𝘤𝘵𝘰𝘳

  • 𝘈𝘯 𝘦𝘹𝘱𝘦𝘳𝘵, 𝘢𝘤𝘵𝘶𝘢𝘭 𝘥𝘰𝘤𝘵𝘰𝘳

  • 𝘈𝘯 𝘢𝘤𝘵𝘶𝘢𝘭 𝘥𝘰𝘤𝘵𝘰𝘳 𝘸𝘪𝘵𝘩 𝘈𝘐 𝘵𝘰𝘰𝘭𝘴 𝘢𝘵 𝘵𝘩𝘦𝘪𝘳 𝘥𝘪𝘴𝘱𝘰𝘴𝘢𝘭

One could say this is an extreme case and perhaps not worth a comparison, but I want to drop this here as food for thought.

I work in the AI/ML domain and I see its potential. I am all for change and adoption, but I am not yet fully bought into a 100% replacement.

Check out the article I wrote two years back on LLMs -- Sandy’s take on LLM and RAG so Far


𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗗𝗲𝗯𝘁

I am somewhat starting to like the term "Technical Debt." Let me use myself and some of my colleagues as examples.

I have been using GitHub Copilot for almost six months now. In the early days, I would prompt it, refine it, and let it create entire repositories and solutions for me—my day-to-day tasks and everything else. Once in a while, things would go wrong, and it would take me days to fix them. Also, when I started looking closer, I realized there could be some unnecessary code blocks mixed in with some really smart ones.

After a learning curve, I now go function-by-function or block-by-block, and I have my own way of testing the accuracy of the outputs. The majority of my tasks involve processing large chunks of data and making inferences from them—sometimes processing and passing them to domain experts or top management for decision-making. In this case, I have to be triply sure of what I deliver, so I go step-by-step.

For sure, a week's worth of tasks can now be done in a day or two, and I can generate code that is much more scalable and reusable. The point is: I wouldn't let it run on "auto-pilot" for my entire task just yet.

𝗩𝗶𝗯𝗲 𝗖𝗼𝗱𝗶𝗻𝗴

"Vibe coding"—well, for sure, some have successfully done it. Having gone through it myself, I would classify those successes as 0.01% or even less. If you think about it, no matter the field or the task, you will always find some outliers—those who defy the norm. As for the majority (including myself), we either aren't sure what we are doing or need more practice with the tools.

I like to use the example of Excel a lot. Corporate employees know how to use Excel, but how much one achieves with it depends on their skills and the effort they took to master it. I remember in my Quantitative Finance course, I was doing heavy Python coding for some bond pricing, and the instructor—an expert—did it in Excel right then and there with us. It's the same with my cousin; in five minutes or so, he made an entire loan repayment and amortization sheet for me in Excel.
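That amortization sheet is a good illustration of how little logic is actually involved once you know the basics. Here is a sketch of the same fixed-payment schedule in Python, using the standard annuity formula (the function name and sample numbers are my own, purely illustrative):

```python
def amortization_schedule(principal, annual_rate, months):
    """Fixed-payment loan schedule.

    Returns one row per month: (month, payment, interest, principal_part, balance).
    """
    r = annual_rate / 12  # monthly interest rate
    # standard annuity formula for the fixed monthly payment
    payment = principal * r / (1 - (1 + r) ** -months)
    rows, balance = [], principal
    for m in range(1, months + 1):
        interest = balance * r
        principal_part = payment - interest
        balance -= principal_part
        rows.append((m, round(payment, 2), round(interest, 2),
                     round(principal_part, 2), round(balance, 2)))
    return rows

# e.g. a 12-month loan of 100,000 at 12% annual interest
schedule = amortization_schedule(100_000, 0.12, 12)
print(schedule[0])   # first month's split of payment into interest and principal
print(schedule[-1])  # final row: balance ends near zero
```

An Excel sheet does exactly this with a handful of cell formulas; the point is that the tool matters less than knowing the formula behind it.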

LLM tools and agents are getting much smarter and faster, but without knowing the basics of the task at hand, things might spiral out of control, and that "Technical Debt" would keep on growing.


𝗜𝘁 𝗶𝘀 𝗔𝗹𝗹 𝗔𝗯𝗼𝘂𝘁 𝗡𝗮𝗿𝗿𝗮𝘁𝗶𝘃𝗲𝘀

Apple, now Anthropic—even recent politics and whatnot—throughout history, it has always been about the narratives one sets and how fast people catch up to them. Yes, you then need a product to support it.

This reminds me of Freedom 251 (India). It was advertised as a smartphone for ₹251. It looked like a scam, but the narrative it set got tons of bookings based on that story alone.

𝗧𝗵𝗲 𝗘𝗰𝗼𝗻𝗼𝗺𝗶𝗰 𝗟𝗼𝗼𝗽

I read this somewhere—imagine AI and robots can do everything and take over most jobs. If people don't have jobs, they have no income to buy stuff—both essentials and non-essentials. If that happens, who will these AI companies sell to? Who will buy the robots?

It is said that an equilibrium will be reached, but one cannot expect everything to go "all-in" in an instant. The world runs on consumerism.

Lastly, in one of his interviews, I heard Jamie Dimon, JP Morgan Chase CEO, saying (based on what I recollect):

We have autonomous driving—does that mean you take 2 million drivers out of work and the next best job they have pays only $25,000 a year? No, you can do it gradually, or have the government pitch in to say, "No, you can't do that," or "Let us do it sensibly."


Anyway, don’t get me wrong—I am all for AI, the change, and the new ways of working, as well as the new skills and job openings that will be brought about by it.

Thinking that AI can do it all? I am still not sold on it.

Tuesday, March 3, 2026

Whose Fault Is It?

I have been thinking for a few days about writing this, and then stumbled on a Kosuke Takeuchi post about a tool he developed to generate traffic scenarios. Thanks a ton!





Now, first we see a cyclist and a car minding their own business, going in their proper lanes, when suddenly a "hero" comes in on a motorcycle from the wrong direction, or, as the biker says, 'MY WAY or the HIGHWAY'.

Then I play out just two of the many possible scenarios, and what I want to know is: whose fault is it?


1. The cyclist - for riding a bicycle on Indian roads?
2. The cyclist - for riding on the road, and in the correct direction? Maybe the cyclist should have used the footpath; if bikers can do it, why not our cyclist?
3. The cyclist - for not riding in the rightmost lane?

4. The car - as the driver should have anticipated and left space for the cyclist?
5. The car - for whatever reason...

6. The Government/RTO - for not being able to enforce traffic rules effectively?

7. No one to blame - it is just another day in city traffic, especially in India or Bangalore!

Now, all you experts and noobs out there: who would you put the blame on if you were the judge dealing with this case?


LinkedIn post -- drawtonomy_linkedIn post
Tool Link -- https://www.drawtonomy.com/

Saturday, January 3, 2026

Discovering FastF1: Telemetry, Tyre Strategy, and My First Steps with MCP

I stumbled upon FastF1 only recently — and honestly, I’m still surprised I missed it for so long.

FastF1 is a Python library that exposes an incredible amount of Formula 1 data across an entire race weekend: practice, qualifying, and race sessions. Not just results and lap times, but also detailed telemetry — speed traces, throttle application, braking, tyre stints, compound usage, and much more.

It turns out FastF1 has been around for a while. But as they say, better late than never.

As someone who loves both Formula 1 and building things, discovering FastF1 immediately opened up a flood of ideas.
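For a flavour of what working with the library looks like, here is a minimal sketch. Assumptions on my part: the `fastf1` package is installed, a network connection or local cache is available, and your FastF1 version has the `pick_drivers` accessor (older releases used `pick_driver`); `fastest_quali_telemetry` and `format_laptime` are illustrative helper names, not part of FastF1 itself.

```python
def fastest_quali_telemetry(year, gp, driver):
    """Load a qualifying session via FastF1 and return the given driver's
    fastest-lap telemetry (speed, throttle, brake against lap distance)."""
    import fastf1  # lazy import: needs the fastf1 package plus network or cache

    session = fastf1.get_session(year, gp, "Q")
    session.load()
    lap = session.laps.pick_drivers(driver).pick_fastest()
    return lap.get_telemetry()[["Distance", "Speed", "Throttle", "Brake"]]


def format_laptime(seconds):
    """Render a lap time in seconds as m:ss.mmm, e.g. 79.307 -> '1:19.307'."""
    minutes, rest = divmod(seconds, 60)
    return f"{int(minutes)}:{rest:06.3f}"
```

From there, the returned telemetry frame plugs straight into pandas or matplotlib for the kinds of plots described below.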


Why FastF1 feels special

What makes FastF1 exciting is not just the data volume, but the granularity.

Instead of asking:

  • “Who was faster?”

You can start asking:

  • Where was a driver faster?

  • Why did a lap work?

  • How did tyre choices shape race outcomes?

  • How do two qualifying laps differ by just hundredths of a second?

This moves F1 analysis away from headlines and into cause-and-effect.




Enter MCP: learning by building

Around the same time, I had been reading about MCP (Model Context Protocol) and wanted to understand it beyond theory. MCP, at a high level, is about exposing structured tools and data in a way that agents (or other clients) can call reliably.

Rather than learning MCP in isolation, I decided to combine both interests:

  • learn MCP properly

  • apply it to something I genuinely enjoy — Formula 1

So I started building a small F1 MCP server, backed by FastF1 data.

For now, this is very much a learning project — not a product — but it’s already been surprisingly rewarding.


The first two tools I built

At the moment, I’ve implemented just two core functions, keeping things intentionally simple.

1. Tyre strategy visualisation

The first tool generates a tyre strategy timeline for a given race, showing:

  • which compounds each driver used

  • how long each stint lasted

  • how strategies differed across the field

This makes race strategy immediately visual. Instead of reading pit-stop summaries, you can see how races unfolded strategically.
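The stint summary behind such a timeline is mostly a grouping exercise. Here is a sketch with pandas, assuming the column names FastF1 uses for its laps data (`Driver`, `Stint`, `Compound`, `LapNumber`) and a tiny synthetic sample in place of a real session:

```python
import pandas as pd

def tyre_stints(laps: pd.DataFrame) -> pd.DataFrame:
    """Summarise tyre stints: one row per (driver, stint) with compound and length."""
    return (
        laps.groupby(["Driver", "Stint", "Compound"], as_index=False)
            .agg(Laps=("LapNumber", "count"))  # stint length = laps in the group
            .sort_values(["Driver", "Stint"])
            .reset_index(drop=True)
    )

# synthetic example: one driver, a 3-lap medium stint then a 2-lap hard stint
laps = pd.DataFrame({
    "Driver": ["VER"] * 5,
    "Stint": [1, 1, 1, 2, 2],
    "Compound": ["MEDIUM"] * 3 + ["HARD"] * 2,
    "LapNumber": [1, 2, 3, 4, 5],
})
print(tyre_stints(laps))
```

Each resulting row maps directly to one coloured bar on the strategy timeline.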

The image shows the tyre strategy for each driver; alongside it, in GitHub Copilot chat, the tool surfaces further insights into the tyre strategy across the race.




2. Qualifying lap telemetry comparison

The second tool focuses on qualifying, comparing telemetry from the top drivers’ fastest laps.

It plots:

  • speed vs distance

  • throttle application

  • brake application

Side by side, this reveals exactly where time was gained or lost — often in places that don’t show up in sector times alone.
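The "where was time gained" question reduces to comparing the two speed traces on a common distance grid. A small sketch of that idea (it assumes both traces have already been resampled onto the same distance points, which FastF1 telemetry can be interpolated to provide; `faster_segments` is my own illustrative helper, not FastF1 API):

```python
import numpy as np

def faster_segments(distance, speed_a, speed_b):
    """Return (start, end) distance ranges where driver A is faster than B."""
    a_faster = np.asarray(speed_a) > np.asarray(speed_b)
    segments, start = [], None
    for i, flag in enumerate(a_faster):
        if flag and start is None:
            start = distance[i]          # A just became faster: open a segment
        elif not flag and start is not None:
            segments.append((start, distance[i]))  # A fell behind: close it
            start = None
    if start is not None:
        segments.append((start, distance[-1]))     # still faster at lap end
    return segments

# synthetic traces sampled every 100 m
dist = [0, 100, 200, 300, 400]
segments = faster_segments(dist,
                           [300, 310, 200, 150, 280],
                           [290, 300, 210, 160, 270])
print(segments)  # -> [(0, 200), (400, 400)]
```

Overlaying these ranges on the speed-vs-distance plot is what makes the gained/lost picture jump out.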

You can compare, segment by segment, the driving styles of the top three drivers, and in the chat window it gives details about the remaining seven drivers, i.e. the top-ten qualifying results.



Why MCP fits nicely here

Wrapping these analyses as MCP tools felt natural.

Instead of scripts that only I run locally, MCP encourages thinking in terms of:

  • clear inputs (season, race, session, drivers)

  • predictable outputs (tables, plots, structured data)

This also opens the door to multiple interfaces later:

  • CLI tools

  • dashboards (Streamlit / web)

  • or even AI-driven queries on top of the same data

For now, though, the goal is simple: learn MCP by doing, not by reading specs.
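As a sketch of what one such tool looks like, here is a minimal server using the FastMCP interface from the official MCP Python SDK (an assumption on my part, as the SDK is still evolving), with hypothetical placeholder data standing in for real FastF1 results; a small stub keeps the example runnable even when the SDK is not installed:

```python
try:
    from mcp.server.fastmcp import FastMCP
except ImportError:
    class FastMCP:  # minimal stand-in so the sketch runs without the SDK
        def __init__(self, name):
            self.name = name
        def tool(self):
            def deco(fn):
                return fn
            return deco
        def run(self):
            pass

mcp = FastMCP("f1-analysis")

@mcp.tool()
def tyre_strategy(year: int, grand_prix: str):
    """Return per-driver tyre stints for a race.

    In the real tool this would call FastF1 and summarise stints;
    the return value here is hypothetical placeholder data.
    """
    return [{"driver": "VER", "stints": [("MEDIUM", 20), ("HARD", 31)]}]

if __name__ == "__main__":
    mcp.run()  # serves the tool (over stdio by default) for an MCP client
```

The appeal is exactly the clear-inputs/predictable-outputs shape mentioned above: the same function can back a CLI, a dashboard, or an agent without changing the analysis code.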


What’s next

There’s a lot more I want to explore:

  • combining telemetry with race notes, penalties, and regulations

  • richer driver-to-driver comparisons

  • experimenting with live data once the 2026 season starts

  • exposing more race-weekend concepts as structured MCP tools

I’ll keep this project intentionally lightweight and exploratory.

If there’s a specific race, driver comparison, or kind of plot you’d like to see, feel free to suggest it — I’ll be iterating on this over the coming weeks purely for learning and fun.

GitHub link coming soon once things settle a bit.