dbt Coalesce 2023 Day 2 & 3 Sessions Recap

A deeper dive into the session content from days 2 and 3 of dbt Coalesce 2023.
Last updated: May 2, 2024
Wednesday (Day 2) Recap

After grabbing some breakfast on the rooftop terrace, I, like many others (the room was packed once again), started off day 2 of Coalesce at the main stage for the “second keynote”, dbt Labs on dbt: An executive perspective. This was a really interesting session, where different folks from dbt broke down how they use the tool internally to solve various data challenges. Daniel Le, CFO at dbt Labs, walked through how data, finance, and operations teams can collaborate more effectively, with examples from their own team. He emphasized how many businesses are now feeling the pressure of operating in a post-ZIRP economy, where more focus is being placed on finding efficiencies, achieving sustainable growth, and being profitable, rather than the “growth at all costs” world many businesses were living in for the past decade or so. With these changes, Daniel shared some recommendations for how data teams can thrive, rather than just survive, in these new macroeconomic conditions:

  1. Help businesses drive profitability to be seen as value drivers rather than cost centres (by getting closer to your stakeholders and helping solve their problems)
  2. Drive more visibility for data and reduce friction to insight (with things like the Semantic Layer)
  3. Operate as efficiently as possible by eliminating low-value work and boosting delivery velocity (with better DataOps)

Next, Daniel compared a typical Finance Ops ecosystem (complex and fragmented, with many manual processes) with a dbt-centric Finance Ops ecosystem:

While he didn’t dive into the details of the dbt-centric ecosystem, or how this works specifically, it appears to suggest that users from Finance and Operations teams would be working in dbt as well (*which still feels…scary*).

Next up was Brandon Thomson, Analytics Lead at dbt Labs, who shared how his team built a Campaign 360 data product that enabled their marketing team to self-serve reporting and reduced the number of ad-hoc questions the data team was getting (to, seemingly, zero?). Brandon shared some of the shortcuts they took, including Fivetran transformation packages, to quickly build this data product. This was a nice overview, but admittedly felt a little bit like table stakes for most data teams who have been using dbt for a while…

The next portion of the talk was focused on how dbt reduced their Snowflake bill without refactoring any code. This was a pretty high-level part of the presentation, so unfortunately, I didn’t feel like too many useful insights were shared.

Finally, they closed off the second keynote by diving deeper into how the Finance and Ops teams have actually built the dbt-centric ecosystem that was mentioned earlier (it kinda felt like this part should have followed Daniel’s presentation…), to help automate workforce planning and revenue recognition processes. Sarah Riley, VP of Finance and Strategy, walked through how they improved both processes with dbt, and confirmed that, yes, folks from both Finance and Ops were working in dbt to achieve this. Again, the details here were a little handwavy, but it sounded nice in theory…

After the second keynote, I hopped into the session by our very own Etai Mizrahi (CEO at Secoda) titled A spoonful of metadata helps the data sprawl go down. While I’m probably a bit biased, I think more data teams should be focused on the value that metadata can provide. In my experience, so many data teams are ‘flying blind’ when it comes to understanding critical aspects of how they’re running their teams and infrastructure, like how many total data assets they have, which ones aren’t being used and should be deprecated, how asset growth is changing over time, the health and cost of specific pipelines, and many more details. We as data teams spend so much time thinking about optimizing other functions of the business, but we rarely stop to look at how we can leverage data (specifically, metadata) to do this for ourselves. I encourage you to check out Etai’s talk, where he gets into more specifics about the types of metadata available, and how Secoda is thinking about helping data teams solve these blind spots.

After grabbing some lunch, I headed down to the 3rd floor to get ready for my own talk–a panel discussion titled Is AI the new AE? I was grateful to be welcomed onto this panel with Kate Schiffelbein (Head of Business Intelligence at Northbeam) and Patrick Ross (Solution Architect at Data Clymer), where we dove into some discussion and questions about whether generative AI tools like ChatGPT and others could feasibly replace the role of Analytics Engineers. *Spoiler alert*–we don’t think any AEs will be replaced by AI anytime soon, as the role is increasingly complex and ever-changing, but AEs should be aware of the tooling available to them to boost their efficiency and productivity (as with any modern business role). The discussion also focused on how AEs may need to think differently about their work (for example, modelling data in different ways to get the most out of generative-AI-powered text-to-SQL tooling), and how the arrival of these tools may change the role over time. Emphasis was put on the fact that if you’re doing a lot of repetitive, low-value work (hopefully no one is…), generative AI tooling will probably replace some of those skills, but if AEs focus on sharpening their skills and taking on more advanced, higher-value work, then generative AI tooling presents more opportunities than threats.

Overall, this was a really fun panel to be a part of, and I was happy for the opportunity to be a speaker at the conference again this year (although maybe I should consider wearing a blazer next year 😆).

Sporting the Secoda uniform

After the panel, I hung around the 3rd floor for a bit longer to catch Hex’s session, AI Dashboard Karaoke–except it was a bit of a trick, in that they changed the session at the last minute to actually be titled “The Magic Behind the Magic”. This hoodwink was quickly forgiven, because Izzy and Matt took us through a really cool “behind the scenes” look at how Hex has been working to optimize their generative AI tooling. This was a really entertaining presentation, and I also felt like I learned a lot about prompt engineering. I highly recommend checking it out!

Aaaaand by that time, the puppies were back at our Secoda booth, so I spent the rest of the day fawning over them. I managed to make it to three live sessions again on day 2, but here are a few more I’m hoping to catch up on soon:

Needless to say, I have a few hours of catching up to do 😅 🧠

Thursday (Day 3) Recap

I headed home to Toronto pretty early on Thursday morning ✈️, so unfortunately, I didn’t get a chance to hit any sessions live on Day 3 😞

I have since, however, had the opportunity to watch the highly entertaining session titled It’s petty, let’s fight: A data-driven look at our most divisive and least consequential debates by Benn Stancil (co-founder of Mode, and now Field CTO at ThoughtSpot). As Benn himself says at the very beginning of the talk, you may not get a ton of immediate value or ROI from it, but you will be pretty entertained by all the silly things data people choose to argue over–like leading or trailing commas (BTW: trailing commas are better 👍).
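For anyone who has somehow avoided this particular debate, here is a quick, purely illustrative look at the two styles (the table and column names below are made up for the example):

```sql
-- Leading commas: each new column line starts with the comma,
-- which makes it easy to comment out any line except the first.
select
    order_id
    , customer_id
    , order_total
from orders;

-- Trailing commas: each column line ends with the comma
-- (the style endorsed above 👍).
select
    order_id,
    customer_id,
    order_total
from orders;
```

Both versions compile to exactly the same query, which is, of course, what makes this one of our least consequential debates.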

Some other sessions I hope to catch up on soon from day 3 are:

And that was a wrap on dbt Coalesce 2023. I was definitely a bit skeptical going into this year that the sessions would remain as valuable as in previous years, but I must say, even with more vendor-sponsored content taking the stage, the topics remained high-value and most of them didn’t feel like hard sales pitches. I hope we see that trend continue into Coalesce 2024, though I would love to see more community talks hosted next year 🤞.
