
Testing Split leads to variant failing by nearly 20% #520

Open
getabetterpic opened this issue Apr 13, 2018 · 2 comments

@getabetterpic

Hi, we've been running a few experiments using Split and noticed nearly all of our variants were losing to the control by pretty hefty margins. So we set up an A/A test to see how Split performed when splitting an 'experiment' where the control and the variant were the exact same experience. That test found the variant losing to the control by nearly 20%, reported at 95% confidence. Any suggestions as to what might be causing this, or how we can improve the accuracy of our results? This was using the default splitting algorithm. Would switching to the block randomization algorithm help us out at all?
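On the block-randomization question: switching algorithms is a one-line config change. A sketch, assuming the `Split.configure` / `config.algorithm` options and the `Split::Algorithms::BlockRandomization` class documented in the gem's README (verify the class name against the version you are running). Note that block randomization evens out participation counts per alternative, but would not by itself explain a conversion-rate gap in an A/A test:

```ruby
# Sketch, per the Split README: select block randomization instead of the
# default weighted-sample assignment algorithm.
Split.configure do |config|
  config.algorithm = Split::Algorithms::BlockRandomization
end
```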

[screenshot: experiment results, 2018-04-13]
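An easy way to sanity-check this, independent of Split itself, is to simulate an A/A experiment: both arms share the same true conversion rate, so every observed gap is pure sampling noise. A minimal sketch in plain Ruby (the participant count, conversion rate, and trial count below are made up for illustration):

```ruby
# Simulate many A/A experiments: both arms convert at the same true rate,
# so any gap between them is sampling noise. Returns the absolute gap in
# conversion rate for each simulated experiment.
def simulate_aa(participants:, rate:, trials:, rng: Random.new(42))
  per_arm = participants / 2
  trials.times.map do
    a = per_arm.times.count { rng.rand < rate }  # conversions in arm A
    b = per_arm.times.count { rng.rand < rate }  # conversions in arm B
    (a - b).abs / per_arm.to_f
  end
end

gaps = simulate_aa(participants: 2_000, rate: 0.10, trials: 500)
puts format("mean gap: %.4f, max gap: %.4f", gaps.sum / gaps.size, gaps.max)
```

The gaps shrink roughly with 1/sqrt(n) per arm, so running this at your actual sample size and conversion rate shows whether an observed gap is plausible as noise alone.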

@petebytes

Ever figure this out?

@StefanFPF

Similar problem here: with control and T1 being the exact same behavior, there is a significant difference in conversion rate (> 1%) after about 10k participants at ~75% conversion. The other issue we are seeing is that there is also more than a 1% difference in participation, even though the split is set to 50:50.
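For scale, a quick back-of-envelope on these numbers (assuming the ~10k participants split into roughly 5,000 per arm — that per-arm count is an assumption — and using the standard error of the difference between two independent proportions, sqrt(2·p·(1−p)/n)):

```ruby
# Standard error of the conversion-rate difference for an A/A test with
# p ≈ 0.75 conversion and ~5,000 participants per arm (the per-arm count
# is assumed from the ~10k total reported above).
p_conv = 0.75
n_per_arm = 5_000
se = Math.sqrt(2 * p_conv * (1 - p_conv) / n_per_arm)
puts format("SE of conversion-rate difference: %.4f (%.2f pp)", se, se * 100)
# => about 0.87 percentage points
```

So a >1 pp conversion gap is only about 1.2 standard errors, and a >1 pp participation gap on 10k 50:50 assignments is about one standard deviation (SD of the gap = 2·sqrt(0.25/10000) = 1 pp); at this sample size, neither is clearly distinguishable from chance on its own.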
