Times are a changin'
Over the last few months, the elephant in the room has become the unprecedented economic instability across public and private markets. As market multiples contract, private tech darlings that were once hiring across the board have cut their budgets and even begun laying off a significant number of employees. Unfortunately, this instability has no clear end in sight, meaning that teams are now reckoning with a new reality of tighter budgets, slower growth rates and fewer job openings across the board.
Data teams are in a unique position entering the upcoming year. While many smaller, fast-growing companies have adopted data as a core part of their strategy, some data teams are still early in their maturity. As one data lead told me, “it feels like with the number of questions that come up daily, I hardly scratch the surface of any proactive analysis.”
That is largely a function of the size of the data team and the inefficiency that surrounds it, but this post is not about that inefficiency. It is written to help data teams that are now reckoning with their new reality. Tighter budgets, slower hiring, bigger workloads and a potential shift in priorities could affect many data teams over the upcoming year. This post is about how data teams can adapt and make the most of that reality.
Even in a tough economy, there are ways that data teams can stay productive. In this post, we'll go over some common approaches for running your data team in a downturn: budgeting, managing spending, developing documentation, leveraging open-source tools and efficiently scaling your data needs.
Budgeting with uncertainty in mind
The first step in managing your data team in uncertain times is to understand how uncertain and tight your budget has become. Whether you are new to budgeting or have been involved with it for years, there are some key concepts to keep in mind when deciding how much to spend on data and how to cope with a smaller budget.
The next step is to think about how to get the most out of those funds and how data best fits your organization's goals. All stakeholders should know which projects require funding this quarter (or next year), so there is no sense that decisions are being made behind closed doors without input from the parties involved. Maintaining this level of transparency prevents wasted effort and surfaces problems before they derail a project mid-execution.
Reducing spending across data tools
Once you've prioritized the right things, it's time to reduce your data team's spend. Here are some guidelines that will help you do so:
- Focus on what matters most. Make sure that you're investing in the right things. This means not cutting out documentation, training, hiring and important tools—you still need those things! It also means not cutting out communication or data quality or governance—these are critical components as well and they should be maintained as best practices during a downturn.
- Focus on the low-hanging fruit. Find the unused tables, dashboards, queries and pipelines that are driving up your costs. Identify who is using your data resources and where you can start to trim the fat. Because tools like Fivetran and Snowflake price based on data volume, you can reduce spending by analyzing how unused assets in these tools contribute to your total bill.
- Ask for discounts. Vendors know that this is a tough time for many of their customers. Instead of just cancelling a subscription to a tool that you find useful, ask the vendor for a discount on your spending. This can help reduce the data team burn for the upcoming year.
- Find duplicate tools and consolidate. Have one too many BI tools? Simplify your stack by removing any part of it that isn't adding value, or whose value is already provided elsewhere.
- Optimize costly dashboards and tables. If your dashboards or tables are set up in a way that drives up costs, small changes, such as refreshing less frequently or materializing commonly used queries, can improve cost and efficiency significantly.
- Look for open-source alternatives. Although we're firm believers in buying over building, certain open-source tools can significantly reduce spending without introducing more complexity. Airbyte, Great Expectations, Lightdash and dbt are great examples of open-source tools that don't require much configuration to use.
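As a concrete illustration of the "low-hanging fruit" audit above, here is a minimal sketch in Python. It assumes you have already exported a list of assets with last-accessed dates and rough monthly costs (for example, from your warehouse's query history); the field names, example assets and the 90-day staleness threshold are all assumptions for illustration.

```python
from datetime import date, timedelta

def find_stale_assets(assets, today, max_age_days=90):
    """Return assets not accessed within max_age_days, most expensive first.

    `assets` is a list of dicts with hypothetical keys: name,
    last_accessed (date) and monthly_cost (float). Adapt the schema to
    whatever your warehouse's access logs actually export.
    """
    cutoff = today - timedelta(days=max_age_days)
    stale = [a for a in assets if a["last_accessed"] < cutoff]
    # Most expensive unused assets first: the best candidates to trim.
    return sorted(stale, key=lambda a: a["monthly_cost"], reverse=True)

# Invented example data for illustration only.
assets = [
    {"name": "raw.events_backup", "last_accessed": date(2022, 1, 5), "monthly_cost": 420.0},
    {"name": "mart.daily_revenue", "last_accessed": date(2022, 6, 28), "monthly_cost": 95.0},
    {"name": "stg.legacy_users", "last_accessed": date(2021, 11, 2), "monthly_cost": 60.0},
]

for a in find_stale_assets(assets, today=date(2022, 7, 1)):
    print(f"{a['name']}: ${a['monthly_cost']:.0f}/month, last used {a['last_accessed']}")
```

Even a rough report like this gives you a ranked list of candidates to archive or delete, which is usually enough to start a pricing conversation with your vendors.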
Do more with less
To do more with less, your team needs to become more efficient and self-sufficient. With budgets unstable, documenting your data strategy and processes in a way that makes sense to future team members will go a long way, especially if your team has to downsize or scale up quickly. A single place to look for information about your data also gives other team members clarity about where to find data and how to use it.
This isn't a new concept; many teams already document their work in an internal wiki or an online document manager like Secoda. What you'll want to make sure is that you're documenting things like your data lineage, any custom queries or scripts in use, and step-by-step instructions for using them (including screenshots!).
This resource will help new employees get up to speed quickly on what's been done so far. It also serves as great reference material later on, when more complex tasks require additional training time (or when there aren't enough people around).
Lastly, during a downturn, teams need to think about doing more with less around data. Tools that allow you to keep a repository of common questions or documentation around data can ensure that your team doesn’t have to answer the same question multiple times and can do more with what you currently have on hand.
Reducing complexity and scaling platforms
The fourth principle is to reduce the complexity of your data architecture. As you work through prioritizing projects, keep this in mind. One way to do so is by focusing on what matters most:
- Focus on goals
- Understand your needs
- Get rid of things that don't matter
It's also important to find ways to scale platforms while reducing complexity, so you don't spend too much time making things work together when bigger problems are being solved elsewhere in the organization. Wherever possible, suggest the solutions that deliver business value in the least time, to show the potential a data team can have across the company.
We’ve all heard the excitement about the modern data stack (MDS) at this point, and it has been great for providing easy access to data. But the short-term benefit of getting a data stack set up quickly can be outweighed by usage-based pricing models that don’t scale well as the company grows.
Ben Rogojan suggests using Amazon's famous Invent and Simplify principle to keep costs low and overhead under control. As you grow your team, reward team members for continually looking for ways to simplify your system. By simplifying your data infrastructure, you and your team reduce the future maintenance and costs incurred by unintended complexity.
Leaning on open source alternatives
When you can’t find exactly what you need with your current stack or with cheap alternatives, it may be a good time to look for open-source solutions that accomplish what you need. As mentioned above, tools like Airbyte, Great Expectations, Lightdash and dbt can significantly reduce spending without introducing more complexity or requiring much configuration.
At first glance, building off an open-source tool appears to be a good option because it lets you create a tool that perfectly fits your specific business model. But teams that choose to build on open-source products can run into unique challenges that end up requiring even more data engineering effort, and the result may be just as expensive as buying a solution. Teams should also consider that it’s highly unlikely they will get the resources to build this vision of the perfect tool internally.
Below are some of the pros and cons of building data tools from an open-source library or from scratch:

Pros:
- Free (in theory)
- Your team has complete control of the product and where you want to take it in the future
- Your team benefits from contributions made by other members of the open-source community
- You can fit the tool to your exact use case

Cons:
- It takes longer to see value
- Building the initial proof-of-concept version is relatively easy, but building deeper features and making sure they are accurate gets increasingly complex and challenging
- Any additional functionality and integrations need to be built by your team on a custom basis
Over time, the volume, complexity and scope of the tools might change as the needs of your business and technical requirements change. When you’re planning your product, you need to think about how the tool may change as things become more complex and need to be prepared to build support for that future iteration of the product.
Creating value for business users
As a startup, we are constantly talking to our customers about the ways that our product can better support their needs. As a data team, you should be doing the same with your business users. Trying to understand where are their roadblocks to adopting data and helping to drive business value for your core stakeholders can help reinforce the impact your team has on the company.
Airbnb’s data team ran an in-depth survey of employees and their problems. One constant emerged: difficulty finding the information that collaborators need in order to work. Tribal knowledge kept by a certain group of people is both counter-productive and unreliable.
The result: employees had to raise questions with colleagues, distrusted the information (unsure of the data’s validity or whether it was up to date) and consequently created new but duplicate data, which astronomically increased the already existing quantity. To respond to these challenges, Airbnb created the Data Portal in 2017.
Follow the money
Making sure that your business's financial models are correct can be one of the biggest value-adds you can provide in a downturn. Although financial modeling can seem complicated and outside the scope of the data team, your team can provide a lot of clarity about what is driving your revenue and costs.
Work with finance to understand how much cash runway you have and where the major spending areas are. You and your team can’t afford to miss any trends in leads, user engagement, sales pipeline, ROAS and so on. As the data team, think about ways to make your executives more aware of cash and how it impacts your business, and plan with them what actions the data team can take to help guide the ship in the right direction. These financial models immediately start impacting larger decisions that touch multiple teams, so you can’t delay these projects if you want a clear view of your business.
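To make the runway conversation concrete, here is a minimal sketch of the underlying arithmetic in Python. The function name and the figures are invented for illustration; replace them with real numbers from your finance team.

```python
def runway_months(cash_on_hand, monthly_revenue, monthly_costs):
    """Months of runway at the current net burn rate.

    Net burn is monthly costs minus monthly revenue. If the business is
    cash-flow positive (net burn <= 0), runway is effectively unlimited.
    """
    net_burn = monthly_costs - monthly_revenue
    if net_burn <= 0:
        return float("inf")
    return cash_on_hand / net_burn

# Hypothetical figures -- swap in your own.
cash = 4_000_000      # cash on hand
revenue = 250_000     # monthly revenue
costs = 450_000       # monthly costs

print(f"Runway: {runway_months(cash, revenue, costs):.1f} months")  # Runway: 20.0 months
```

The point isn't the formula itself, which is simple, but that the data team can keep the inputs (revenue, costs, pipeline trends) accurate and refreshed, so the runway number executives act on is one they can trust.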
You can help your team be productive during a slowdown
Sometimes it's difficult to be productive, especially when you're working in a data team that's been hit by a downturn. Fortunately, there are ways to help your team stay on track and focused on the important things.
Document as much as possible. You should document everything you do during the day, from simple tasks like updating your status or checking email to more complex ones like working on reports or creating dashboards. When you document these activities and make the documentation available to others, other people can start doing them too. Lastly, make sure that everyone knows what is most important right now so they don't waste time doing things that aren't necessary.

Your job as a data team is to provide maps that different leaders can use to make the best decisions as quickly and accurately as possible. By showing value and focusing on driving results, you and your data team can help guide the company into a winning position as you navigate the troubled waters ahead.
Try Secoda for Free
Efficiency is crucial for data teams, and Secoda plays a pivotal role in achieving this. Secoda streamlines data workflows, enabling data teams to do more with fewer resources. Its user-friendly interface simplifies pipeline development, reducing development time and costs. The platform also facilitates seamless collaboration among team members, even in remote work scenarios, fostering productivity and cost-effective operations. Additionally, Secoda's data governance and security features ensure data compliance, mitigating risks associated with data breaches or regulatory violations that could result in costly fines.