Episode 2: My ML Journey Takes a Turn
Weekly Highlights
- Machine Learning Course
- Freelance Project Update
- Events
- People who inspired me this week
Machine Learning Course
I just wrapped up modules 3 and 4 of the Google Cloud Machine Learning Engineer Certification, and honestly? I'm buzzing with excitement.
Module 3: Feature Engineering was a real eye-opener. I dove deep into Feature Store and discovered how to make features truly scalable and reusable, a game-changer for any ML project. The journey from raw, messy data to polished features taught me so much. Learning what separates a stellar feature from a mediocre one? Pure gold. And mastering categorical-to-numerical conversion, especially one-hot encoding, made everything click into place.
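Since one-hot encoding was the piece that made things click, here's a minimal sketch of what that conversion looks like in pandas. The `device` column and its values are invented for illustration:

```python
import pandas as pd

# Hypothetical campaign data with one categorical column
df = pd.DataFrame({
    "device": ["mobile", "desktop", "tablet", "mobile"],
    "clicks": [12, 7, 3, 9],
})

# One-hot encoding: each category becomes its own 0/1 indicator column,
# so a model never treats "tablet" as numerically greater than "mobile"
encoded = pd.get_dummies(df, columns=["device"], prefix="device")
print(encoded.columns.tolist())
```

The same idea scales up in a Feature Store setup, where the encoded columns become reusable features instead of one-off transformations.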
Module 4: Machine Learning for Enterprise was my "aha!" moment. MLOps finally made sense—I could visualize how ML actually flows from concept to production in the cloud. The hands-on labs were brilliantly designed, with real implementations that I could actually follow and understand (thank goodness for clear documentation!).
Here's what's hitting me: I'm 60% through this entire certification, and I feel incredible. Those dedicated 3 hours every morning—sometimes when I'd rather hit snooze—are paying off in ways I never imagined. There's something powerful about watching your persistence transform into genuine expertise.
Next week brings Production Machine Learning Systems, and I can't wait to dive in.
Freelance Project Update
I've made some serious headway on my project, and honestly, it's been a wild ride.
In my last blog, I mentioned diving into a Google Ads course on Coursera to sharpen my understanding. I finished the first comprehensive series, which was incredibly valuable, but then I made a deliberate choice to stop before moving on to "Bidding and Budget." Here's why:
- I didn't want to fall into tutorial hell
- It was time to get my hands dirty with real work
- I was itching to get back to actually building something
The Reality Check
When I'd paused the project earlier, I had built some ML models for customer segmentation using k-means clustering on Google Ads data. But here's the thing—the dataset was so limited that I hit a wall pretty quickly. After bouncing ideas off a few people in the field, I got some much-needed clarity about what kind of projects actually make you stand out in this competitive market.
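For context, the segmentation approach I had tried can be sketched in a few lines of scikit-learn. The features here (clicks, spend, conversions) are synthetic stand-ins for the limited Google Ads data I mentioned:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic per-user ad-engagement features
X = np.column_stack([
    rng.poisson(5, 200),       # clicks
    rng.gamma(2.0, 3.0, 200),  # spend
    rng.poisson(1, 200),       # conversions
]).astype(float)

# Scale first: k-means uses Euclidean distance, so unscaled spend
# would dominate the low-count click and conversion features
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
labels = kmeans.labels_  # one segment id per user
```

With a richer dataset, the interesting work is in choosing the features and the number of clusters, not in the clustering call itself, which is exactly where the limited data became a wall.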
I realized something important: even with my data analysis background and my push toward data science, I needed to pick a lane. Two domains kept coming up—marketing/advertising and product optimization. Both fascinate me, but I'm still torn between them. For now, I'm going with marketing because the numbers are compelling: companies pour massive budgets into reaching the right customers, and pairing that with data science skills? That's a solid combination.
Finding My Project Direction
This was just the brainstorming phase though. I asked DeepSeek for project recommendations, and one caught my attention immediately: a Multi-Touch Attribution Model for multiple channels. The goal? Allocate marketing budget based on attribution scores for each touchpoint in the customer journey.
This hit different because I genuinely wanted to understand this stuff myself.
The Technical Deep Dive
I started by building a user_journey table from Google Ads data, pulling in clicks, user info, costs, conversions, and referring sites. Then I aggregated everything to show one complete customer journey per row, while keeping individual touchpoints using array aggregation. I used Shapley calculations to assign attribution scores to each touchpoint. This was version 1.0—my plan is to gradually add more meaningful features. This was the backend of the backend (if you know, you know).
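To make the Shapley part concrete, here's a small, exact version of the calculation. It assumes you can estimate a value (say, conversions) for every subset of channels; the channel names and numbers below are invented for illustration, not taken from my actual data:

```python
from itertools import combinations
from math import factorial

def shapley_attribution(channels, value):
    """Exact Shapley values for a small channel set.

    `value` maps a frozenset of channels to an observed outcome,
    e.g. conversions from journeys touching exactly that coalition."""
    n = len(channels)
    scores = {}
    for ch in channels:
        others = [c for c in channels if c != ch]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Probability that `ch` joins right after this coalition
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of `ch` to the coalition
                total += weight * (value[s | {ch}] - value[s])
        scores[ch] = total
    return scores

# Hypothetical conversion counts per channel coalition
v = {
    frozenset(): 0,
    frozenset({"search"}): 10,
    frozenset({"display"}): 4,
    frozenset({"search", "display"}): 18,
}
scores = shapley_attribution(["search", "display"], v)
print(scores)  # the scores sum to v(all channels)
```

Exact Shapley values need all 2^n coalitions, so this brute-force version only works for a handful of channels; longer journeys need sampling or other approximations.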
Frontend Reality Check
Now came the part that nearly broke me: building the frontend. I needed a React application with two parts: the user-facing frontend itself and a backend connecting to BigQuery.
The frontend? Surprisingly smooth. But getting BigQuery data to display in the React app? Eight hours of pure frustration. I started with the general DeepSeek chat, but when things got messy, I installed Aider and used the code assistant directly in my terminal. That debugging process was what finally got my BigQuery results showing up in the app. Total victory moment.
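The glue between React and BigQuery boils down to a thin backend endpoint that runs the query and returns JSON. This is a stripped-down sketch rather than my actual code: the table name is made up, and `bq_client` stands in for a `google.cloud.bigquery.Client`, injected as a parameter so the function can be exercised without GCP credentials:

```python
import json

def fetch_attribution(bq_client, table="my_project.marketing.attribution_v1"):
    """Run an attribution query and return JSON the frontend can render.

    `bq_client` only needs to mimic the google-cloud-bigquery Client
    interface (client.query(sql).result() yields mapping-like rows),
    so a stub works in tests and the real Client works in production.
    """
    sql = f"SELECT channel, score FROM `{table}`"
    rows = bq_client.query(sql).result()
    return json.dumps([dict(row) for row in rows])
```

In the deployed version a function like this sits behind a route in your backend framework of choice, and the React app simply `fetch`es that route.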
Useful Resources I Created
If you're tackling similar challenges, here are some guides I put together:
- How to connect a subfolder of a GitHub repo (not the root directory) to Vercel, which I figured out because I want to host different tools from the same main repo on separate subdomains
- Guide on storing user email addresses from your website to cloud storage
- My detailed journey working with virtual environments
The Production Mindset Shift
One major learning: I had to completely change my approach from "building locally" to "making it production-ready." I needed to package my BigQuery backend in a virtual environment to keep dependencies consistent when deploying. This was completely new territory for me, but crucial.
Events
This week was all about getting out there and meeting people who are actually building things.
First up: the dlthub and LanceDB Meetup. The energy at this event was incredible. I'm always amazed by what people are working on when you actually sit down and talk to them. I connected with a young startup tech lead who's solving problems I hadn't even thought about, a tech expert from Volkswagen Group working on some fascinating large-scale projects, and a bunch of other folks from companies ranging from scrappy startups to major corporations. There's something energizing about swapping stories and challenges with people who get it.
Second: the Knowledge Graph & AI Memory Event. Another solid gathering of tech people doing meaningful work. The focus on content engineering and AI memory systems opened up some interesting conversations about where the industry is headed. Again, just great to bounce ideas around with people who are deep in the trenches.
These events remind me why stepping away from the screen matters—you can't replicate that kind of idea exchange online.
People who inspired me this week
- I Tried Tom Cruise's Deadliest Stunt - This is hands-down one of the most insane things I've seen on YouTube. She literally put her life on the line to make this happen. I had goosebumps the entire time. The commitment level is just unreal.
- How to Use AI to Find a $1M Idea [Reddit, Claude] - This is pure gold for anyone trying to brainstorm systematically. The approach to finding product-market fit is methodical and actually actionable.
- Stanford's Practical Guide to 10x Your AI Productivity | Jeremy Utley - Finally, someone talking about LLM prompting in a way that actually improves results. The practical tips here are game-changing.
Wrapping Up
Looking back at this week, I'm struck by how much momentum I'm building across different fronts. The ML certification is clicking into place, the freelance project is becoming something real (despite the 8-hour debugging sessions), and connecting with other people working on hard problems is keeping me motivated.
The shift from "learning about" to "actually building" has been challenging but incredibly rewarding. Every small win—whether it's getting BigQuery data to show up in React or finally understanding MLOps workflows—feels like a step toward something bigger.
Next week brings Production ML Systems and more work on the attribution model. The journey keeps getting more interesting.