
TL;DR: We’re launching a newsletter where we interview community members to get a monthly pulse on what’s going on in data. This month, it's all about how folks are using AI to automate and speed up their data work.
We’re still buzzing from our incredible MDS Fest in May. Thousands of you showed up, and the conversations haven't stopped. That's why we're launching this newsletter: Community Voices.
For our inaugural issue, we caught up with four MDS Fest community members about AI. Not the hype - the practical stuff that they’re actually experimenting with.
Our findings? Dylan Watt (Founder, Kadre.io), Martin Brummerstedt (Head of BI, Miinto), Rakhi Reddy (Analytics Engineer, GitLab), and Thais Cooke (Healthcare Data Analyst) have all landed on the same insight: small, systematic AI changes create massive multiplier effects.
We're talking 6x faster deployment cycles, eliminating cognitive bottlenecks, and learning curves compressed from months to weeks.
Coffee/Tea/Celsius ready? Let's get into it.
COMMUNITY PERSPECTIVES
THE MULTIPLIER EFFECT: SMALL AI CHANGES, BIG IMPACT
We don't manually document tables anymore
Martin (Head of BI, Miinto) solved documentation hell: "We launched an automated AI documentation writer through Secoda AI. When it finds a new table that hasn't been documented yet, it does research about what the table does and how it fits into the whole system, then writes up documentation automatically."
The results speak for themselves: "It's reached thousands of tables and columns. Our team no longer manually documents tables - that has completely gone away. Soon we'll expand it to do all of the semantic models in PowerBI and then later extend it to reports.”
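The pattern Martin describes is simple to picture: scan the catalog for tables with no description, have an AI draft one, and write it back. Here's a minimal sketch of that loop in plain Python - the `catalog` dict and `llm_describe` function are stand-ins we invented for illustration, not Secoda's actual API.

```python
# Hypothetical sketch of an auto-documentation loop.
# `catalog` and `llm_describe` are stand-ins, not Secoda's real interface.

catalog = {
    "orders":    {"columns": ["id", "amount", "created_at"], "description": None},
    "customers": {"columns": ["id", "name"], "description": "Customer master data."},
}

def llm_describe(table: str, columns: list) -> str:
    """Stand-in for the AI call that researches a table and drafts its docs."""
    return f"Auto-drafted: `{table}` holds columns {', '.join(columns)}."

def document_new_tables(catalog: dict) -> list:
    """Find tables with no description yet and write a draft back."""
    updated = []
    for name, meta in catalog.items():
        if meta["description"] is None:
            meta["description"] = llm_describe(name, meta["columns"])
            updated.append(name)
    return updated

print(document_new_tables(catalog))  # only 'orders' lacked docs
```

The key design choice is the same one Martin's team made: the automation only touches tables that are missing documentation, so human-written descriptions are never overwritten.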
Martin also shared how Miinto experiments with AI: "Usually we do a small project and see success with that, then we continue on the path. We also had 5 or 6 other projects that didn't work well that we just killed off quite quickly. Some of them we revisit later again when there's new underlying models."
Building skills in weeks instead of months
Thais (Healthcare Data Analyst) needed to learn Python and DuckDB fast for her evolving role: "This would literally take me months to learn if not a year. But AI has helped me - things that would take a while for me to understand made sense in a few weeks."
Her approach: "I use a mix of Claude, ChatGPT, Copilot and compare the answers. When I hit a wall, I ask 'Can you explain this to me in a way that I can understand?' and it gives me that answer that clicks."
Thais isn’t just learning faster - she’s building a repeatable system for rapid skill acquisition that she can point at any new technology that comes her way.
Redesigning workflows, not just adding AI
Dylan (Founder, Kadre.io) was getting frustrated with a recurring client bottleneck: "Setting up dbt sources used to be either by hand or using a dbt code gen macro. This process was fiddly, and required a lot of cycles checking what's in the new source, mapping columns, adding descriptions."
Instead of just throwing AI at the problem, Dylan redesigned his approach: "Now, with a tooled dbt codebase for AI, you can just prompt 'We have a new source at DATABASE.SCHEMA, add it as a source, leave out Fivetran/Airbyte specific columns, but map an ingest timestamp to _ingested_at.' What used to be a couple of hours becomes 10-20 minutes of review."
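For readers who haven't set up dbt sources by hand, the output of a prompt like Dylan's would be a `sources.yml` entry along these lines. This is a hypothetical sketch - the database, schema, and table names are invented, and in practice the `_ingested_at` rename itself would land in a staging model:

```yaml
# Hypothetical sketch of a generated dbt source entry; names are invented.
version: 2

sources:
  - name: shop_raw
    database: ANALYTICS          # the DATABASE from the prompt
    schema: RAW_SHOP             # the SCHEMA from the prompt
    tables:
      - name: orders
        description: "Raw order events from the shop platform."
        loaded_at_field: _ingested_at   # ingest timestamp, per the prompt
        columns:
          - name: order_id
            description: "Primary key of the order."
          # loader-specific columns (e.g. _fivetran_synced) intentionally omitted
```

The reviewable artifact is the point: instead of writing this file column by column, Dylan's team reads and corrects it.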
The result? Dylan can iterate on client infrastructure 6x faster. Small system change, massive time savings.
Eliminating analysis paralysis
Rakhi (Analytics Engineer, GitLab) noticed AI was making her faster at work - using GitLab Duo Chat and Claude for documentation, analyzing issues, and refactoring code at scale. "For example if you have multiple if statements in a huge query, just being able to modify that in seconds to use a function instead - that is way faster," she explained.
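The flavor of refactor Rakhi is describing - collapsing a copy-pasted conditional into a single function - looks like this in plain Python (our own illustration, not her actual GitLab query, where the same move would use a SQL function or dbt macro):

```python
# Illustration: repeated if-chains collapsed into one function.

def classify_order(amount: float) -> str:
    """Single source of truth for a condition that used to be copy-pasted."""
    if amount >= 1000:
        return "large"
    if amount >= 100:
        return "medium"
    return "small"

# Every call site now shares one definition, so changing a
# threshold happens in exactly one place.
labels = [classify_order(a) for a in (42.0, 150.0, 2500.0)]
print(labels)  # ['small', 'medium', 'large']
```

Doing this by hand across a huge query is tedious; doing it with AI, as Rakhi notes, takes seconds.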
But she spotted something deeper: "I would say it definitely reduces the starting trouble that I used to have before like 'Where do I start? What do I do?' It gives me the structure to just begin, and gives me that critical eye when looking at new business problems."
The speed boost was nice, but eliminating analysis paralysis was the real win. FYI, Rakhi did a whole talk at MDS Fest on her personal AI workflows as an Analytics Engineer, catch up on it here.
AVOIDING THE TRAPS
Don't skip the fundamentals
Martin: "Don't give AI too big a task. It's similar to giving a task to a person - you want to give one specific task they can complete, not an unclear goal like 'go optimize everything at once.'"
Thais: "Don't just copy and paste the results and hope for the best. I had my SQL skills and fundamentals really well in place when I started learning Python with AI. I can feel the difference - with SQL I can read the output and know if it's right or completely wrong. With Python, since I'm still learning, it takes me longer to verify."
Dylan: "It’s common for teams to give up on AI after one bad experience. You need to provide proper context and set things up correctly rather than dismissing AI as useless. I optimize against failure rather than success - preventing critical failures is more important than individual success rates."
WHAT TO TRY NEXT
Start here, then build up
Thais: "If you don't feel secure to really experiment too much in your work, start with points where you can delegate to AI. For instance, documentation. Use AI to help - 'Hey, here is my code, this is what I'm accomplishing, do the documentation for me.' It's a safe place to start."
Martin: "I think everybody needs to play around with MCP servers for connecting different tools. I have access to Asana, Jira and my calendar - it can see what's in my calendar and add things to it. It's like having it take over parts of your workflow."
Dylan: "Currently Claude Code is far and away the best agent. If you try Cursor and it doesn’t fit the bill, give Claude Code a shot. Just ask 'Explain this project and one thing I could do to improve it' and watch it think. Working with AI is about getting the reps in to learn a new way of working, not just a new tool.”
Rakhi: "Focus on your own specific workflows rather than getting overwhelmed by what everyone else is doing. There's so much noise on LinkedIn - every second person is doing something cool with AI. Focus on your own journey and take the first step."
WORTH YOUR TIME
🔧 Tools to try: Claude Code, Dylan's top pick among AI agents; Secoda AI, Martin’s documentation weapon
👀 Dylan wants you to watch: Andrej Karpathy’s talk on “Software 3.0,” in which natural language becomes the new programming interface and models do the rest
⚡ Martin’s custom AI podcasts: NotebookLM creates podcasts on any topic you desire from articles and videos - Martin made a personalized podcast on Bayesian statistics
📖 Thais’s beach read: Fundamentals of Data Engineering by Joe Reis & Matt Housley
🎧 Rakhi’s favorite industry podcast: The Analytics Engineering Podcast by dbt founder Tristan Handy
... and that's all for this first issue of Community Voices! Catch us next month where we'll be covering Agentic AI. See you then!