We’re all moving fast. Features are shipped, tests are run, outcomes are shared (briefly), and then we move on. But what happens when the same question comes up again next quarter? Or when a new team member asks why something was built that way? Or worse - when a brilliant idea is put into motion only to discover halfway through: “Wait, didn’t we already try this?”
This blog is about storing what matters - so your product brain doesn’t reset every time the roadmap does.
Think about how many times you’ve heard this in a meeting: “Didn’t we already test that?”
But the more dangerous and less obvious situation is when you’ve run a test in the past, and the results are actually relevant to something you're currently doing - and no one realises.
For example: we once ran a test comparing a slider vs dropdown for an input field on mobile. The dropdown resulted in more submissions overall, but we later discovered that users who used the slider were more likely to book. Why? Because the slider was harder to use - only more committed users completed the form. That learning applied far beyond just that one form. Any future decisions about mobile inputs could've benefited from it - if we had remembered it existed.
In this case, I did remember the test. But I’m sure there are countless others that I haven’t remembered. And that’s exactly the point.
Storing test and feature history is like creating a second brain for your product team. And not just for product - designers, engineers, analysts, and marketers all benefit from it. Everyone involved in shipping features makes faster, better-informed decisions when historical context is available.
For A/B tests: you stop re-running experiments you’ve already tried, and past results can feed directly into new hypotheses.
For features: a new team member can see why something was built the way it was, instead of reverse-engineering old decisions.
You don’t need to store everything - but you do need to capture the right pieces of information in a structured way.
For every A/B test, include: the hypothesis, what each variant looked like, the primary metric, the outcome (including surprises, like the slider learning above), and the decision you made - plus links to dashboards or designs.
When you ship a feature - even without a test - store: what changed, why you built it that way, the impact you expected, and the impact you actually observed.
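If it helps to see that as a concrete shape, here’s a minimal sketch in Python. The class and field names are illustrative assumptions, not a prescribed schema - adapt them to your team’s vocabulary:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a per-test / per-launch record.
# Field names are assumptions - rename to fit your team.

@dataclass
class ExperimentRecord:
    name: str                # e.g. "Mobile input: slider vs dropdown"
    area: str                # e.g. "Input Changes", "Checkout Funnel"
    hypothesis: str          # what you believed, and why
    variants: list[str]      # what users actually saw
    primary_metric: str      # the metric the decision hinged on
    result: str              # what happened, including surprises
    decision: str            # ship / kill / iterate, and why
    links: list[str] = field(default_factory=list)  # dashboards, designs, threads

@dataclass
class FeatureRecord:
    name: str
    area: str
    rationale: str           # why it was built, and built this way
    expected_impact: str     # what you predicted before launch
    observed_impact: str     # what actually happened after launch
    links: list[str] = field(default_factory=list)
```

The point isn’t the code - it’s that every record answers the same questions, so future-you can actually query them.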
Done right, your test and feature history becomes a treasure trove of insight - not a dusty archive.
It helps you: avoid repeating tests that already failed, revive ideas that were ahead of their time, and give new teammates the context behind past decisions.
Real-world example:
A few years ago, we considered a design upgrade for a form that was historically underperforming. The plan required visual finesse, but we didn’t have that skillset in-house at the time - so the idea went dormant. A year later, after hiring a strong UI designer, we approached this area again to ideate on A/B tests.
Once more, the discussion led to the same A/B test hypothesis as it had a year prior. Only at the end did someone take a step back and ask, “Huh, didn’t we already come up with this a while ago?” We had - but with no structured way of tracking our hypotheses and their outcomes, it had been buried in an archived backlog, out of sight and out of mind.
We finally ran the test. The result? A 20% increase in lead submissions - and the realisation that we had semi-validated this idea a year earlier, yet had no way to surface it when the missing capability arrived. We missed out on a full year of performance gains.
You can start small. The tool matters less than the structure, consistency, and ability to retrieve what you need later.
Notion / Confluence / Google Docs
Great for startups or solo PMs. Set up templates for tests or launches and use tags, folders, or status fields to make them searchable.
Pros: cheap (or already paid for), flexible, and quick to set up - your team likely lives in these tools already.
Cons: structure depends entirely on discipline, and retrieval gets harder as the archive grows.
💡 One huge pitfall to avoid: defaulting to date-based folder structures like `/experiments/YYYY-MM/`. This might feel intuitive, but as your archive grows, it becomes harder to surface relevant experiments later.
Better structure: Group by test area or test type.
Examples: “Input Changes”, “Checkout Funnel”, “Pricing Page”, “Homepage Hero”
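To make the payoff concrete, here’s a hedged sketch reusing the illustrative ExperimentRecord from earlier: with an area tag on every record, surfacing prior work before an ideation session is a one-liner.

```python
# Reuses the illustrative ExperimentRecord sketch from earlier.
records = [
    ExperimentRecord(
        name="Mobile input: slider vs dropdown",
        area="Input Changes",
        hypothesis="A slider is faster than a dropdown on mobile",
        variants=["slider", "dropdown"],
        primary_metric="form submissions",
        result="Dropdown won on submissions, but slider users booked more",
        decision="Ship dropdown; revisit slider for high-intent flows",
    ),
]

def related_history(records, area):
    # Everything previously tried in the area you're about to touch.
    return [r for r in records if r.area == area]

# Before brainstorming new input tests, check what's been tried:
for r in related_history(records, "Input Changes"):
    print(f"{r.name}: {r.decision}")
```

With date-based folders, the same lookup requires remembering *when* a test ran - which is exactly the memory this whole system exists to replace.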
Airtable / Coda / Asana (with custom fields)
Allow for better filtering, tagging, and status-tracking across experiments.
Pros: structured fields and saved views make filtering by area, status, or outcome far easier than free-form docs.
Cons: more setup and upkeep, and one more tool the team has to remember to update.
Statsig, Eppo, Optimizely, Amplitude, PostHog
Purpose-built to connect data pipelines to test setups and outcomes. Good for companies running frequent or complex experiments.
Effective Experiments, LaunchNotes, Productboard
Built for operationalising tests and feature updates across teams. Focus on visibility, sharing, and governance.
Pros: purpose-built workflows, integration with your data stack, and cross-team visibility out of the box.
Cons: cost and setup overhead - usually overkill unless you’re experimenting constantly.
Startups often skip experiment and feature documentation entirely. Why?
Because they believe:
“We’ll remember.”
“We don’t have time to write this up.”
“We’ll come back to it later.”
But the reality is:
Teams grow quickly. People leave. Memory fades.
You’ll spend more time rediscovering old learnings than documenting them.
Without a system, good ideas disappear into Slack threads or someone’s laptop.
This isn’t bureaucracy - it’s what turns your learning into a compounding asset instead of a disposable one.
Every experiment and feature launch is a decision made. A choice. A moment of insight.
But without a system to capture it, that knowledge disappears as soon as the sprint ends or the person who ran it leaves the company.
The good news? You don’t need fancy tools. Just a consistent habit.
Create a space where your team can learn from itself - and make smarter, faster decisions next time around.