The speed at which a startup grows is directly related to the speed at which it learns. This applies to the growth team as well.

On the growth team, we wanted a strategy that would keep us quick on our feet and focused on the right projects. One of the first things we decided to do was create a framework, which we use on a weekly basis, to define which experiments to invest in, allowing us to prioritise tasks and, in the end, analyse them according to their impact.


The following lines detail Pedro and the growth team's experience, told in the first person, including how we created this framework and how we use Airtable.


About Pedro

Let's start with Pedro Costa. A big fan of the daily workout, he counts the gym among his favourite out-of-work places; he loves to study and read up on various topics, to network with his peers, and is an inveterate foodie. He loves discovering new restaurants and plans trips around gastronomic experiences. He lives in Porto but never forgets Braga (his hometown), and keeps alive the desire to return home one day.


Pedro Costa started working at Coverflex in January 2021. He joined the company to work in the Marketing and Growth team with Luís Rocha (CMO) and Maria Furtado (Content Manager), mainly in Coverflex’s go-to-market strategy. During the first few months, Pedro tested different strategies, different channels and different approaches to the market and, after three months, he started looking at the results of the strategies that were implemented. That was when our first clients started emerging.


Why build an experimentation framework?

The foundations for developing this framework are directly related to the expected impact that the growth team will have on the company's growth.


In the first three months, we used an archaic experimentation process where the goal was simply to test which channels might work and measure the "low-hanging" results of these experiments. However, after identifying the first channels, we decided to deepen our model and create an experimentation process that would bring us more visibility, prioritisation, forecasting, and insight into what resources are needed to implement these initiatives.


Visibility: The framework should allow the entire team to have a clear understanding of what we are working on in each project and which initiatives we are developing per surface (product, segment, channel...);


Prioritisation: The framework should allow the team to filter the initiatives that generate more revenue per working day (more on this in the next topic);


Forecasting: The framework should allow people to understand what impact each idea might have after sizing each experiment;


Resources: The framework should help us understand which initiatives we have available as well as the necessary resources.


Based on these foundations, we created the framework we use today and review weekly at our growth experimentation meeting. At a glance, each team member can see which initiatives they worked on, the expected revenue from each initiative and, finally, the results that initiatives from previous weeks produced.


1. Prioritise ideas based on revenue generated per working day

In our team, we define the priority for ideas based on their monetary impact per working day — this means that the ideas we implement first are the ones that will bring in the most money at the end of the day.

For this, we use a model we call "growth velocity", which analyses the potential of the idea, the effort (in days of work) it requires from the various departments (design, engineering, content, product and growth) and the time it takes us to carry out the initiative, and relates these factors to the metric of the internal forecast we do for each product.

A practical example of this growth velocity metric: imagine we want to test a new channel. We define the metric that this experiment will move (e.g. new visitors). Then, for each visitor, we internally establish a forecast of the visitor-to-customer conversion rate (how many visitors we need to convert one customer) and the value that each customer adds, on average, to each product in ARR. The result is the multiplication of this value by the confidence we have in the specific initiative, divided by the number of days needed to develop the idea.


The formula would be as follows:

Growth velocity = Count of impact metric (e.g. 7 new visitors) * Value of impact metric (e.g. 1 visitor = €10) * Success confidence (e.g. 0.9) / Time (e.g. 1 day)

In the end, the result is the revenue that we will generate, per day, for each of the experiments. This way, we can review each of the initiatives and identify those that will generate the most money in the fewest possible days.
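To make the calculation concrete, here is a minimal sketch in Python using the hypothetical numbers from the example above (7 new visitors, €10 of forecasted value per visitor, 0.9 confidence, 1 day of work); the real inputs would come from our internal forecasts.

```python
def growth_velocity(impact_count: float,
                    value_per_unit: float,
                    success_confidence: float,
                    days_of_work: float) -> float:
    """Expected revenue generated per working day for one experiment."""
    return impact_count * value_per_unit * success_confidence / days_of_work


# Hypothetical example: 7 new visitors, each worth €10 in forecasted ARR,
# 90% confidence in the initiative, 1 day of work.
print(growth_velocity(7, 10, 0.9, 1))  # 63.0 (€ per working day)
```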

This was one of the ways we found to align priorities within the Marketing team and to align this particular department’s work with the company's objectives, so that each initiative can bring value to the ARR. That way, the entire company understands the impact that growth is generating.


2. Build a user-based hypothesis

Each idea must have a user-based hypothesis in order to be approved. After each sprint, when deciding which ideas to launch, and after looking at the value in € per day of work, this is what we discuss internally.

After a few iterations, this is the current version of the user-based hypothesis we use to discuss each of our experiments:


Because we saw this {insert qualitative/quantitative data}... I believe this will happen... and will increase/decrease {insert metric name}... for this group {segment}... because... {insert user logic/emotion}

If this is true... we will need {insert number} days to build it, we need to wait {insert number} days to do the analysis, and then we will see an increase of {insert number} in the metric (abs. numbers)
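As an illustration only (not part of our actual tooling), the same template can be captured as a structured record, which makes it easier to store each hypothesis in a table or post it to Slack; all field names below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """Hypothetical record mirroring the fill-in-the-blank template above."""
    observed_data: str             # qualitative/quantitative data we saw
    expected_outcome: str          # what we believe will happen
    metric_name: str               # the metric that will increase/decrease
    segment: str                   # the group of users affected
    user_logic: str                # the user logic/emotion behind it
    build_days: int                # days needed to build the experiment
    analysis_wait_days: int        # days to wait before analysing
    expected_metric_change: float  # expected change, in absolute numbers
```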


The purpose of the hypothesis is to allow us to understand what, in fact, will happen and why. At the end of each experiment we look at the hypothesis again and try to understand whether it was correct, why, and what the next steps are.

It is also important to add qualitative and quantitative data to the hypothesis; this is what makes it more realistic.

3. What to do if the hypothesis is confirmed

After defining the hypothesis, we focus on working out what we can do if the hypothesis is correct. Most of the time, we try to understand which of the next initiatives may be related to automating processes, expanding to other channels (vertical expansion) or even optimising the way we do things within the channel itself (horizontal expansion).

The goal is to understand, even before launching the test, what we are going to do next and strategically think about the scalability of these initiatives.


4. Leading & Lagging Metrics and reporting

After we decide exactly what to do after the experiment, it's time to define the leading and lagging indicators for each of the experiments we run.

Using the above example - of increasing the volume of new visitors - the leading indicator is, in fact, "new visitors". However, we try to use a metric that allows us to understand whether or not these visits are qualified: for example, knowing how many of these new visitors go on to the “Schedule a demo” page. This allows us to understand the "quality" of the leading indicators and if the traffic we are acquiring is, in fact, qualified traffic.
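As a small sketch of this quality check, with hypothetical counts standing in for what would normally come from an analytics dashboard:

```python
# Hypothetical counts; in practice these come from a tool such as
# Amplitude or Google Analytics.
new_visitors = 1200
demo_page_visits = 84

qualified_share = demo_page_visits / new_visitors
print(f"{qualified_share:.1%} of new visitors reached 'Schedule a demo'")  # 7.0%
```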

After defining the experiment, we decide how and where we are going to measure this task. Ideally, we create a dashboard, but if we are unable to do so or if we want to test faster, we create small reports in tools such as Amplitude, Google Analytics, Hubspot, Data Studio and Convertize, among others.

This is how we ensure that we know what we are going to measure and how the initiative translates into success; at the same time, anyone can track each experiment's results.


5. Speed of Execution

The speed with which we are able to launch each initiative is very important to us.

We initially tried to work on too many projects at once and realised that this was counterproductive. Right now, we focus on two or three initiatives per week, per person, and we try to launch each initiative within two weeks of the beginning of the process. 

The trick is to define the initial part well, using a good hypothesis and having a good sizing plan — this process guarantees a much faster execution.

Another important element in managing these initiatives is what we call the "portfolio of investment". We try to reach a balance between riskier initiatives that can bring great results and less risky ones that bring smaller results.


6. Analysing results

After executing the task, we define a start date and the date on which we should analyse the task (which is the same date as in our hypothesis) on Airtable. We mark the task as "To Be Analysed" and, every week, we look at the tasks and review the ones reaching the analysis deadline.

When a task is launched, there is a notification on our Slack channel dedicated to "growth_launches", which includes information on who was responsible for launching the task, which metric it will affect, the name of the initiative, the user-based hypothesis and what value we believe the initiative will generate for each working day.
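For illustration, one way to post such a launch notification is through a Slack incoming webhook; the webhook URL and message fields below are placeholders, and our actual automation may be wired up differently.

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

launch_message = {
    "text": (
        ":rocket: New experiment launched\n"
        "Owner: <person responsible>\n"
        "Metric affected: <metric>\n"
        "Initiative: <name of the initiative>\n"
        "User-based hypothesis: <one-line summary>\n"
        "Expected value per working day: <€ amount>"
    )
}

# Slack incoming webhooks accept a simple JSON payload with a "text" field.
requests.post(SLACK_WEBHOOK_URL, json=launch_message, timeout=10)
```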

When the task reaches the analysis phase, there is a column on Airtable with three questions:


1 - Why is the hypothesis/prediction wrong or right?


2 - Why did the experiment turn out the way it did? (for failure/success)


3 - What are the next experiments to continue testing and to scale this hypothesis?

The goal is for each person to look at the task again, see if the results are identical to the forecast and hypothesis that we defined, and understand why it worked or why it didn’t. Based on these conclusions, we can understand how to scale the hypothesis (if it has been successful).

After each experiment and analysis, the person responsible for executing the task is also responsible for classifying it as "Inconclusive", "Failure - valuable to tweak", "Failure - not valuable to tweak", "Success - continue exploring" or "Success - scale".
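For teams tracking these outcomes programmatically, the labels could be represented as an enum; this is a hypothetical sketch, not our actual Airtable configuration.

```python
from enum import Enum


class ExperimentOutcome(Enum):
    """Hypothetical enum mirroring the outcome labels used in Airtable."""
    INCONCLUSIVE = "Inconclusive"
    FAILURE_VALUABLE_TO_TWEAK = "Failure - valuable to tweak"
    FAILURE_NOT_VALUABLE_TO_TWEAK = "Failure - not valuable to tweak"
    SUCCESS_CONTINUE_EXPLORING = "Success - continue exploring"
    SUCCESS_SCALE = "Success - scale"
```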

As soon as the task is analysed, regardless of the result, a new notification is posted in another Slack channel called "growth_learnings" with the conclusions of the initiative launched, including who was responsible for executing the task, which metric the task influenced, the user-based hypothesis, the experiment result, the expected revenue contribution, the real revenue contribution and the experiment post mortem.

This is how we guarantee that the entire team has immediate access to the initiatives that have been analysed and that we can open discussion groups about the successes, learnings and failures of each experiment.



Completed experiments

After analysing each task, we have a tab on Airtable that allows us to see which tasks we launched, along with all their results, hypotheses, expected revenue per day of work and many other fields.

This is how we can view the tasks launched every month, the main results, the greatest learnings and even identify opportunities within each channel.


Review per quarter

The ultimate goal every quarter is to take a look back at the initiatives we've launched and see if we're making progress.


By progress, what we want to understand is:


  • In terms of output, how many experiments we launched during the last quarter and whether we are increasing the volume of experiments.
  • The accuracy of our hypotheses: whether they are becoming more correct over time as a consequence of the results. With more data and more experiments, we expect to learn more, adjusting new strategies and tactics.
  • Lastly, the average number of successes per failure or inconclusive experiment we launch, compared with the previous quarter (a small sketch of how these checks could be computed follows below).
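As referenced above, here is a small sketch of how these quarterly checks could be computed from exported experiment records; the record fields are hypothetical and would normally come from our Airtable views.

```python
from typing import Dict, List

# Hypothetical experiment records exported from the completed-experiments view.
experiments: List[Dict] = [
    {"quarter": "Q2", "outcome": "Success - scale", "hypothesis_correct": True},
    {"quarter": "Q2", "outcome": "Failure - valuable to tweak", "hypothesis_correct": False},
    {"quarter": "Q2", "outcome": "Inconclusive", "hypothesis_correct": False},
    {"quarter": "Q1", "outcome": "Success - continue exploring", "hypothesis_correct": True},
]


def quarterly_review(records: List[Dict], quarter: str) -> Dict:
    """Output volume, hypothesis accuracy and success ratio for one quarter."""
    q = [r for r in records if r["quarter"] == quarter]
    successes = sum(r["outcome"].startswith("Success") for r in q)
    non_successes = len(q) - successes
    return {
        "experiments_launched": len(q),
        "hypothesis_accuracy": sum(r["hypothesis_correct"] for r in q) / len(q) if q else 0.0,
        "successes_per_failure_or_inconclusive": successes / non_successes if non_successes else float("inf"),
    }


print(quarterly_review(experiments, "Q2"))
```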

What we don't include in this growth experimentation framework

Because this framework is very focused on measurable experiments, we decided not to include some brand-awareness tasks, such as events or organic publications on social media, among others. We use Clickup to manage those kinds of initiatives.

It is important to reinforce that this framework should not be used by companies that have not yet found product-market fit.


Additional Tools


Although we use Airtable to manage the experiments, we also use other tools that cut across not only the team but the entire company and help us document this process.

We use Clickup to align the entire team and company around OKRs, Miro to outline the processes we are going to implement or develop and, finally, Notion to document these processes, serving as a history of the various initiatives.