Sometimes you need to base decisions on more than gut feel. Make that most of the time. Especially in a world where data is all around. Growing exponentially. Being diligently collected, scraped, stored, organised and analysed. Being used to track and measure pretty much anything you can think of.
With data everywhere, not using it to drive your product development process means risking losing ground to competitors who do. That said, it’s not all plain sailing by any stretch. Harnessing data – and, crucially, asking the right questions of it – is key to a winning product development strategy.

So where exactly is the data?
There are a couple of ways to look at the data available to you in product development. One side is the data that tells you about the platform your product sits on: everything from performance and error rates to traffic volumes and patterns – all the information you need to ensure you’ve got a platform that’s scalable, performant and meets users’ needs.
The other side is product-related data. How users are using it. What journeys they’re taking. Where they’re dropping out. How much time they’re spending in which places. In short, this tells you whether your product is delivering the desired experience. And, if not, where users are getting stuck (we’ll talk more about the ‘why’ later). Both of these sets of data are related of course, and both impact the success (or not) of the product.
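To make ‘where they’re dropping out’ concrete, here’s a minimal Python sketch of a journey funnel: count how many users reach each step and see where the biggest fall-off is. The user IDs, step names and numbers are entirely made up for illustration.

```python
from collections import Counter

# Hypothetical journey events, one (user_id, step) pair per row, as you might
# export them from an analytics tool. Step names are purely illustrative.
events = [
    ("u1", "landing"), ("u1", "product_page"), ("u1", "checkout"), ("u1", "purchase"),
    ("u2", "landing"), ("u2", "product_page"),
    ("u3", "landing"), ("u3", "product_page"), ("u3", "checkout"),
]

funnel = ["landing", "product_page", "checkout", "purchase"]
users_at_step = Counter()
for step in funnel:
    users_at_step[step] = len({user for user, s in events if s == step})

# Report how many users reach each step and the conversion from the previous one.
previous = None
for step in funnel:
    count = users_at_step[step]
    note = "" if previous is None else f" ({count / previous:.0%} of previous step)"
    print(f"{step:14s}{count}{note}")
    previous = count
```

Any decent analytics tool will draw this funnel for you; the point is that the question – which step loses the most users? – is a simple one, and the data to answer it is usually already being collected.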
You can also look at it as qualitative and quantitative data. So while data at the research stage of your product development process is vital – informing the product you’re developing through understanding what users are after – that’s only part of the story. You need to check in post-deployment to make sure it’s working. Refine it. Make it better and more relevant.
There are lots of ways to collect and use data across the product lifecycle. One method of testing new features is A/B or multi-variate testing. These let you deploy two or more ways of achieving the same outcome. So you split test: direct some users to the old functionality and some to the new, then wait and see which delivers the best results. Whether that’s which one increases conversion rates or speed, which is adopted more readily, or whatever other benchmark you’re using.
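To illustrate the mechanics, here’s a minimal Python sketch of a split test with the two pieces most teams need: deterministic bucketing (so a user always sees the same variant) and a simple two-proportion z-test to check whether a difference in conversion rate is likely to be real. The user IDs and conversion numbers are purely illustrative.

```python
import hashlib
import math

def assign_variant(user_id: str, variants=("old", "new")) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z(conversions_a: int, visitors_a: int,
                     conversions_b: int, visitors_b: int):
    """Two-proportion z-test: is the difference in conversion rate likely real?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# Route a user (hypothetical ID), then compare illustrative results:
# old variant converted 120 of 2,400 visitors, new variant 156 of 2,400.
print(assign_variant("user-42"))
z, p = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the new variant genuinely converts better
```

In practice most teams lean on an experimentation platform or analytics tool rather than hand-rolling the statistics, but the underlying logic is exactly this.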
Data-driven product development in action
Can data really help the product delivery process? The answer is yes. And at all stages. It’s critical from the start of the process, where you decide what product and features to develop, through to checking in one month or one year down the line to see if everything’s working as it should.
Data not only helps you validate your decisions – and build a business case that stands up in front of investors or stakeholders – but it also enables you to create a culture of experimentation. It helps you make the significant shift from predicting what users want (guesswork) to using data to know what they want (informed decision-making). The data tells you whether a feature is working. Whether to persevere down a particular path or ditch an idea altogether. It’s incredibly important to continue this cycle of experimentation and testing throughout the product’s life to keep it fresh.
This is the secret of success behind a large entertainment organisation we work with. It continually looks for ways to engage more effectively with users, and it does this by understanding which experiences hit the mark. It employs the split testing method I mentioned earlier – and all its decisions about which features to roll out, update or scrap come from this data. Essentially, it has become adept at pivoting quickly to give customers what they want, based on how they interact with the platform.
Democratising data and leveraging experience
A major challenge in adopting a data-driven product development approach is, very basically, access to data. Often, the teams that need the data simply can’t get at it – whether that’s because the organisational structure doesn’t allow it, or because it’s sitting in silos all over the business.
You also have to know what questions to ask to get the right answers. This is easier when you have a clear, user-friendly dashboard through which you can interrogate the data in different ways. Less so if you need to raise a ticket with the data team, which might take days or even weeks to churn out a response. At this stage it’s quite possible you realise you’ve asked the wrong question, so you have to start the whole process again, and you’re still weeks away from being able to take any action.
Knowing what to ask also impacts what testing you do. A retailer we consulted with had used A/B testing with a focus on statistical significance. But it hadn’t factored in that the platform’s low traffic meant tests ran for months before reaching that statistical significance. So for months the platform could be underperforming, yet no decision could be made about what to change to make it better. While the intention of using data was good, the process did nothing to enable fast decision-making. It achieved the exact opposite.
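A quick back-of-the-envelope calculation shows why. Using the standard normal-approximation sample size formula for comparing two conversion rates (the baseline, uplift and traffic figures below are illustrative, not the retailer’s actual numbers), a low-traffic site can easily need the best part of a year to reach significance.

```python
import math

def sample_size_per_variant(baseline: float, relative_uplift: float,
                            z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect a relative uplift in
    conversion at 95% confidence with 80% power (normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_power) ** 2) * variance / (p2 - p1) ** 2)

# Illustrative figures: a 2% baseline conversion rate, looking for a 10% relative
# uplift, with around 250 visitors a day reaching each variant.
n = sample_size_per_variant(baseline=0.02, relative_uplift=0.10)
days = n / 250
print(f"{n:,} visitors per variant ≈ {days:.0f} days of testing")  # roughly 322 days
```

A larger minimum detectable effect, more traffic, or a looser significance threshold all bring that number down – which is exactly the kind of trade-off worth agreeing before a test starts.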
Related to this is the issue of using data appropriately and at the right level. Just because there’s an abundance of it (a lot of the time, too much) doesn’t mean you need to use it all. Most organisations don’t have the mature systems and decision-making processes in place to effectively use every unit of data. Nor do they need to. For most, checking every data point will slow down delivery and impede product development. The lesson: be clear about what decisions you need to make, and how often you need to make them.
A final bit of advice. Data is invaluable and can vastly improve product development, but it’s not infallible. It’s only as good as the people who steer the testing and interpret the results. Human judgement and experience are essential, so use both when deciding which metrics to measure and how to execute your data-driven product development strategy.
Please do get in touch if there’s anything contained in this blog which you’d like to discuss further.