Bisq Growth Experiments

Introduction

This repository exists to capture ideas and experiments designed to grow Bisq, monitor their progress, and offer feedback.

Below are principles and processes we find helpful for continuously exploring ways to grow, inspired by the work of Brian Balfour, Sean Ellis, and many others.

Principles

Learn continuously

Constantly find new ways to learn about our users, product, and channels for communication. Feed that learning into this growth process to improve it.

Establish and maintain momentum

Momentum is powerful. Establish a cadence for experimentation and feedback in order to fight through failures and get to successes. Find a rhythm that works.

Organize for autonomy

Individual contributors decide what they work on to achieve their OKRs (Objectives and Key Results).

Take ownership and be accountable

With autonomy comes accountability. You don’t have to be right all the time, but you are expected to improve in order to be eligible for compensation. In other words, mistakes are not a problem, but failing to learn from them is.

Process

The process shown below can be broken into two major parts:

  1. High-level planning, shown at the top, and

  2. Continuous daily/weekly actions, shown in the circle at the bottom.

[Growth Model diagram]

High-level Planning Process

This planning process should happen roughly every 2–3 months. The purpose is to "zoom out" and reconsider what’s most important now. How have the last few months gone? What has changed during that time, and how should our goals change to adapt?

Find Levers

The goal is to find the highest impact area that we can focus on right now given limited resources.

To evaluate each goal, think about the following key points (a rough scoring sketch follows the list):

  1. Baseline

  2. Ceiling

  3. Sensitivity over time
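
For illustration only, these three points could be combined into a rough scoring sketch like the one below. The metric names, baselines, ceilings, and sensitivity weights are invented placeholders, not real Bisq figures.

```python
# Hypothetical scoring of candidate growth levers against the three points above.
# All metric names and numbers are illustrative placeholders, not real Bisq data.

levers = [
    # (name, baseline, ceiling, sensitivity); sensitivity is a rough 0..1 guess
    # at how quickly effort would move the metric right now
    ("weekly active traders",    1_000, 10_000, 0.5),
    ("new-user onboarding rate",  0.35,   0.80, 0.7),
    ("listed markets",              20,     60, 0.3),
]

def headroom(baseline: float, ceiling: float) -> float:
    """Share of the ceiling that is still unrealized."""
    return (ceiling - baseline) / ceiling

# Rank by remaining headroom, weighted by how sensitive the metric is to effort.
for name, baseline, ceiling, sensitivity in sorted(
        levers, key=lambda l: headroom(l[1], l[2]) * l[3], reverse=True):
    print(f"{name}: headroom={headroom(baseline, ceiling):.0%}, sensitivity={sensitivity:.0%}")
```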

Set Goals (OKRs)

In setting goals, focus on OKRs (Objectives and Key Results), and keep to the following principles:

  • Focus on inputs, not outputs. OKRs should always be set on inputs, never on outputs.

  • Keep OKRs in your face. Display OKRs in a place where they cannot be forgotten by those who are accountable for them. They should be visible to anyone and everyone, and should be reviewed daily and weekly.

  • Keep OKRs concrete. Objectives are expressed as a qualitative goal, e.g. "Make virality a meaningful channel". Key Results are expressed as quantitative outcomes, e.g. "Grow viral/weekly active users by 1.5%". The timeframe for OKRs should be kept between 30 and 90 days.

Note
Goals should change mid-OKR only very rarely.
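
As a concrete illustration of the structure above (a qualitative objective, quantitative key results, a 30–90 day timeframe), an OKR could be captured roughly as below. The field names and progress calculation are assumptions for the sketch; the example objective and key result come from the bullet above.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class KeyResult:
    description: str          # quantitative outcome
    target: float             # e.g. percentage growth to reach
    current: float = 0.0

    @property
    def progress(self) -> float:
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class OKR:
    objective: str                                 # qualitative goal
    key_results: list = field(default_factory=list)
    start: date = field(default_factory=date.today)
    days: int = 60                                 # keep within the 30-90 day window

    @property
    def end(self) -> date:
        return self.start + timedelta(days=self.days)

# Example taken from the bullet above.
okr = OKR(
    objective="Make virality a meaningful channel",
    key_results=[KeyResult("Grow viral / weekly active users", target=1.5)],
)
print(okr.objective, okr.end, [f"{kr.progress:.0%}" for kr in okr.key_results])
```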

Explore Data

  • Qualitative. Example sources of qualitative data include user surveys, one-on-one conversations, user videos, forum discussions, arbitration tickets and more.

  • Quantitative. Example sources of quantitative data include Google Analytics, our own markets API, and other data sources we can access (see the sketch below). The goal is to answer open questions about our users, products, and channels. Break the data down into manageable pieces.

Important
We should always be on the lookout for new ways to understand and better serve our current and potential users, but the prime directive is that individual user privacy must never be compromised by our efforts.
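
As a sketch of the quantitative side, aggregate trade data could be pulled from the markets API roughly as below. The base URL, endpoint name, and parameters are assumptions about the public markets API and should be checked against its documentation; only aggregate market data is touched, in line with the privacy note above.

```python
# Sketch: count recent trades per market from the public Bisq markets API.
# The base URL, endpoint name, and parameters are assumptions; verify against the API docs.
import requests

BASE_URL = "https://bisq.markets/api"   # assumed public endpoint

def recent_trade_count(market: str = "btc_usd", limit: int = 500) -> int:
    """Return the number of recent trades reported for one market (aggregate data only)."""
    resp = requests.get(f"{BASE_URL}/trades",
                        params={"market": market, "limit": limit},
                        timeout=30)
    resp.raise_for_status()
    return len(resp.json())

if __name__ == "__main__":
    print("recent btc_usd trades:", recent_trade_count())
```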

Continuous Daily/Weekly Process

1. Brainstorm

  • Capture. Never stop putting new ideas into the growth backlog

  • Focus. Focus on input parameters, not on output parameters.

  • Observe. How are others doing it? Look outside of your immediate product space. Walk through it together.

  • Question. Brainstorm and ask why, e.g.: What is… What if… What about… How do we do more of…

  • Associate. Connect the dots between unrelated things, e.g.: What if our activation process was like closing a deal?

2. Prioritize

Prioritize by considering the following key parameters:

  • Probability. Low: 20%, Medium: 50%, High: 80%

  • Impact. This comes from your prediction. Take into account long-lasting effects vs. one-hit wonders (see the scoring sketch below).


Create a hypothesis, e.g.:

If successful, [VARIABLE] will increase by [IMPACT], because [ASSUMPTIONS].
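
A minimal sketch of how the probability buckets, predicted impact, and the hypothesis template could come together when ranking the backlog. The example ideas, lifts, and assumptions are placeholders, not real experiments.

```python
# Rank backlog ideas by expected impact = predicted impact x probability of success.
# Probability buckets follow the rough values above; all ideas and numbers are placeholders.

PROBABILITY = {"low": 0.20, "medium": 0.50, "high": 0.80}

def hypothesis(variable: str, impact: float, assumptions: str) -> str:
    return f"If successful, {variable} will increase by {impact:.0%}, because {assumptions}."

ideas = [
    # (variable, predicted impact, probability bucket, assumptions)
    ("onboarding completion",    0.10, "medium", "a shorter first-trade flow reduces drop-off"),
    ("returning weekly traders", 0.03, "high",   "trade reminders bring users back"),
]

for variable, impact, bucket, assumptions in sorted(
        ideas, key=lambda i: i[1] * PROBABILITY[i[2]], reverse=True):
    score = impact * PROBABILITY[bucket]
    print(f"{score:.3f}  {hypothesis(variable, impact, assumptions)}")
```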

Look at:

  • Quantitative data. Previous experiments, surrounding data, funnel data

  • Qualitative data. Surveys, forum, arbitration tickets, user testing recordings

  • Secondary sources. Networking, blogs, competitor observation, case studies

Create an experiment issue:

See the experiment issue template and other experiment issues for guidance and inspiration.

3. Test

What do we really need to do to test our assumption?

Setting up a Minimally Viable Test

  • Efficiency. What is the least resource-intensive way to gather data about the hypothesis?

  • Validity. The experiment must be designed to yield a valid result, e.g. by including a control group and determining the required amount of data up front (see the sample-size sketch below).
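
One way to make "required amount of data" concrete is a standard sample-size estimate for comparing a control group against a variant. The baseline rate and minimum detectable lift below are assumptions; the z-values correspond to 95% confidence and 80% power.

```python
# Rough sample size per group for detecting a lift in a conversion-style metric,
# using the standard two-proportion formula. Baseline and target rates are placeholders.
from math import ceil, sqrt

def sample_size_per_group(p_control: float, p_variant: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    p_bar = (p_control + p_variant) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p_control * (1 - p_control) + p_variant * (1 - p_variant))) ** 2
         / (p_variant - p_control) ** 2)
    return ceil(n)

# e.g. baseline 30% completion, aiming to detect an absolute lift to 35%
print(sample_size_per_group(0.30, 0.35))   # roughly 1375 users in each group
```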

4. Implement

Get shit done.

5. Analyze

  • Results. Was the experiment a success or a failure? Be prepared for a lot of failures in order to get to the successes. (A quick significance check is sketched after this list.)

  • Impact. How close were you to your prediction(s)? Whether or not the experiment was a success, what results or effects did it produce?

  • Cause. The most important question you can ask is: why did you see the result that you did?
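
For experiments with a control group and a conversion-style metric, the "was it a success" and "how close was it to the prediction" questions can be checked with a simple two-proportion z-test. All counts and the predicted lift below are placeholders.

```python
# Compare variant vs. control and report the observed lift, the predicted lift,
# and a two-sided p-value. All counts and the prediction are placeholder numbers.
from math import erfc, sqrt

def two_proportion_test(conversions_a: int, n_a: int, conversions_b: int, n_b: int):
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, erfc(abs(z) / sqrt(2))   # (observed lift, two-sided p-value)

observed_lift, p_value = two_proportion_test(410, 1375, 480, 1375)
predicted_lift = 0.05                          # from the experiment's hypothesis
print(f"lift={observed_lift:.1%} (predicted {predicted_lift:.0%}), p={p_value:.3f}")
```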

Update and close the GitHub issue as soon as you’ve finished analyzing.

6. Systematize

This is all about ways to automate and systematize our approach to growth.

  • Productize. Productize as much as you can with technology and engineering.

  • Create Playbooks. For the things you can’t productize, build them into step-by-step playbooks to make them repeatable.