
Token preemptively completing experiments... #160

Open
danieljwilson opened this issue Mar 21, 2022 · 12 comments

Comments

@danieljwilson

Version of Experiment Factory:

Latest

Expected behavior

Following the guide for Usage > Use Tokens I attempted to provide a pre-generated URL using the suggested format:

https://<your-server>/login?token=<token>

I expected to start the battery of experiments with the results saved to the folder associated with the provided token.
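For concreteness, the documented URL format can be built like this. This is only an illustration; the server name and token below are made-up placeholders, not values from this report:

```python
# Build the pre-generated headless login URL described in Usage > Use Tokens.
# Server name and token are hypothetical placeholders.
server = "expfactory.example.org"
token = "9e4d55e2-0a02-4f4f-b4f1-3f9f4f2a1c77"
url = "https://%s/login?token=%s" % (server, token)
print(url)
```

Visiting that URL should log the participant in and start the battery.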

Actual behavior

What happened instead was that a banner appeared saying that I had completed all the experiments:

[screenshot: banner stating all experiments are complete]

The user folder on the server was also marked as _finished (appended to the folder's token name).

Steps to reproduce behavior

You can run from my public repo. I have tried this with various combinations of experiments, and the behavior has occurred each time.

@vsoch
Member

vsoch commented Mar 21, 2022

So to clarify, this only happens when adding the token to the URL, and not when you provide it directly in the interface?

@danieljwilson
Author

Yes, this happens when I add the token to the URL. Sorry, I'm not sure what you mean by providing it directly in the interface? Do you mean in interactive mode?

@vsoch
Member

vsoch commented Mar 21, 2022

Yep! So normally there is a box you can enter it in (not in the browser URL), and I would call that some kind of interactive mode. Does everything work okay with that method?

@danieljwilson
Author

I will try that and report back!

@vsoch
Member

vsoch commented Mar 21, 2022

I'm in the middle of the work day so I can't debug for you, but I can tell you what I think is going on (and how to debug!) So this function:

@app.route("/next", methods=["POST", "GET"])
def next():

    # Headless mode requires logged in user with token
    if app.headless and "token" not in session:
        return headless_denied()

    # To generate redirect to experiment
    experiment = app.get_next(session)
    if experiment is not None:
        app.logger.debug("Next experiment is %s" % experiment)
        template = "/experiments/%s" % experiment

        # Do we have runtime variables?
        token = session.get("token")
        if app.vars is not None:
            variables = get_runtime_vars(
                token=token, varset=app.vars, experiment=experiment
            )
            template = "%s?%s" % (template, variables)
        return perform_checks(template=template, do_redirect=True, next=experiment)
    return redirect("/finish")

It looks like the last block is being skipped over (no experiment found), and then it's going directly to finish. So what we are going to do is put a ton of print statements around there, and install from your local version to build the container. E.g., from your repo:

git clone https://github.com/expfactory/expfactory

And then in your Dockerfile instead of:

WORKDIR /opt 
RUN git clone -b master https://github.com/expfactory/expfactory
WORKDIR expfactory 

Do:

WORKDIR /opt 
COPY ./expfactory /opt/expfactory
WORKDIR expfactory 

And then in that function, try something like this:

@app.route("/next", methods=["POST", "GET"])
def next():

    # Headless mode requires logged in user with token
    if app.headless and "token" not in session:
        return headless_denied()

    # To generate redirect to experiment
    experiment = app.get_next(session)

    # I suspect you are going to see None here!
    print(experiment)
    print(experiment is not None)
    if experiment is not None:
        app.logger.debug("Next experiment is %s" % experiment)
        template = "/experiments/%s" % experiment

        # Do we have runtime variables?
        token = session.get("token")
        if app.vars is not None:
            variables = get_runtime_vars(
                token=token, varset=app.vars, experiment=experiment
            )
            template = "%s?%s" % (template, variables)

        print("We got to performing checks, next experiment is %s" % experiment)
        return perform_checks(template=template, do_redirect=True, next=experiment)

    # But I think you are skipping the above and hitting down here
    return redirect("/finish")

So then the question is: why is app.get_next() reporting that there are no experiments? That bit is here:

def get_next(self, session):
    """return the name of the next experiment, depending on the user's
    choice to randomize. We don't remove any experiments here, that is
    done on finish, in the case the user doesn't submit data (and
    thus finish). A return of None means the user has completed the
    battery of experiments.
    """
    next = None
    experiments = session.get("experiments", [])
    if len(experiments) > 0:
        if app.randomize is True:
            next = random.choice(range(0, len(experiments)))
            next = experiments[next]
        else:
            next = experiments[0]
    return next
I suspect it's because there are no experiments in the session, so this will be empty:

    experiments = session.get("experiments", [])

For that, I would then check this function:
def setup(self):
    """obtain database and filesystem preferences from defaults,
    and compare with selection in container.
    """
    self.selection = EXPFACTORY_EXPERIMENTS
    self.ordered = len(EXPFACTORY_EXPERIMENTS) > 0
    self.data_base = EXPFACTORY_DATA
    self.study_id = EXPFACTORY_SUBID
    self.base = EXPFACTORY_BASE
    self.randomize = EXPFACTORY_RANDOMIZE
    self.headless = EXPFACTORY_HEADLESS

    # Generate variables, if they exist
    self.vars = generate_runtime_vars() or None

    available = get_experiments("%s" % self.base)
    self.experiments = get_selection(available, self.selection)
    self.logger.debug(self.experiments)
    self.lookup = make_lookup(self.experiments)
    final = "\n".join(list(self.lookup.keys()))

    bot.log("Headless mode: %s" % self.headless)
    bot.log("User has selected: %s" % self.selection)
    bot.log("Experiments Available: %s" % "\n".join(available))
    bot.log("Randomize: %s" % self.randomize)
    bot.log("Final Set \n%s" % final)
Note that since we pipe the log into a file (and then /dev/null to keep the container running), you will likely want to tweak startscript.sh so it just prints to the console and you can use docker logs to see it. I've been wanting to make it easier to see logs in that respect, so if you find a change that works, please open a PR! Keep me updated!
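The suspected failure mode can be demonstrated in isolation with a small sketch of the get_next selection logic above, where a plain dict stands in for the Flask session (an assumption made for testing; the real code lives on the server object):

```python
import random

def get_next(session, randomize=False):
    """Standalone sketch of the selection logic: return the next
    experiment from the session, or None when none remain."""
    experiments = session.get("experiments", [])
    if not experiments:
        # Nothing stored under "experiments": the battery looks finished.
        return None
    if randomize:
        return random.choice(experiments)
    return experiments[0]

# If the login flow never populated session["experiments"], the very
# first call returns None and the /next route redirects to /finish.
print(get_next({}))                                    # None
print(get_next({"experiments": ["stroop", "nback"]}))  # stroop
```

This matches the symptom reported: an empty experiment list in the session makes a fresh token look like a completed battery.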

@danieljwilson
Author

This is pretty great "middle-of-workday" feedback!

I am in France right now so I'm heading to bed, but I will take a look at this in the morning!

@vsoch
Member

vsoch commented Oct 4, 2022

Any progress or updates here?

@danieljwilson
Author

I haven't tried this in a while; I have been giving people tokens to enter instead. I have this semi-automated using Airtable automations (in terms of emailing each participant their token), so this has been my workaround.

@vsoch
Member

vsoch commented Oct 5, 2022

Safe to close then until someone reproduces?

@danieljwilson
Author

I think that is fine! Thanks for the follow up :)

@vsoch vsoch closed this as completed Oct 5, 2022
@AlvaroAguilera

I can reproduce this issue in the current version.

@vsoch
Member

vsoch commented Dec 16, 2022

Oh, that's good! Can you use my previous instructions in #160 (comment) to debug? Re-opening too.

@vsoch vsoch reopened this Dec 16, 2022