Feedback from multiple references #124

Open
marijanbeg opened this issue Nov 16, 2021 · 2 comments

Comments

@marijanbeg (Contributor)

Let us say we have N reference implementations for a particular problem. The student works on the problem and submits their solution. Using PyBryt, we compare the student's implementation against the N reference implementations, which results in N feedback reports (which annotations are or are not satisfied for each reference). The question is: what feedback do we give back to the student?

  • Giving the student all N feedback reports can be very confusing: they would not know which feedback to follow.
  • Could the solution be to derive a metric that specifies how close the student is to a particular reference implementation? That way, we would return only the feedback report of the reference the student is most likely implementing (a minimal sketch of this idea follows the list).
  • Should there be more sophisticated logic behind the scenes? For instance, if the student imported NumPy (or created an array of zeros), they are most likely following a particular reference.
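
A minimal sketch of the "closeness" metric from the second bullet, assuming each of the N feedback reports has already been reduced to a list of per-annotation booleans (satisfied or not). The function names and the `reports` dictionary are hypothetical illustrations, not PyBryt API:

```python
from typing import Dict, List


def closeness(satisfied: List[bool]) -> float:
    """Fraction of a reference's annotations that the submission satisfied."""
    return sum(satisfied) / len(satisfied) if satisfied else 0.0


def pick_reference(reports: Dict[str, List[bool]]) -> str:
    """Name of the reference the student is most likely following."""
    return max(reports, key=lambda name: closeness(reports[name]))


# Hypothetical example: three references reduced to satisfaction flags.
reports = {
    "numpy_ref": [True, True, True, False],
    "loop_ref": [True, False, False, False],
    "recursive_ref": [False, False, True, False],
}

best = pick_reference(reports)  # -> "numpy_ref"
print(f"Most likely reference: {best} (closeness {closeness(reports[best]):.0%})")
```

Only the report for `best` would then be shown to the student; what to do when all closeness values are low (or tied) remains an open question.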

This is a summary of some of the open questions we started brainstorming in one of the previous tech meetings, posted here to encourage discussion. All ideas are welcome :)

@rolotumazi

rolotumazi commented Nov 17, 2021

I have a conceptual plan for how to deal with N references, and I am adding it here to further the discussion:
Perhaps, instead of applying N reference solutions directly, it might be better to annotate values, invariants, collections, etc.
After that, create a "fit matrix" that can be used to derive a score based on weighted combinations of the values, invariants, collections, etc. that appear in the student submission (a short sketch follows this comment).
Vetoes can also be used. The report corresponding to the best score can then be given as feedback to the student.
I feel this might require a bit of tinkering to work, and it would be very problem dependent. It acts as an abstraction layer over the annotations, which itself needs annotation.
Where I can see it failing is when all scores are low, or when a few are quite high with values close together.
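
To make the weighted-score and veto mechanics above concrete, here is a rough sketch under stated assumptions: each reference contributes a row of category weights to the "fit matrix", the student submission is reduced to per-category satisfaction fractions, and a veto zeroes a reference out entirely. All names and numbers are hypothetical, not PyBryt code:

```python
from typing import Dict

CATEGORIES = ["values", "invariants", "collections"]


def fit_score(weights: Dict[str, float],
              satisfied: Dict[str, float],
              vetoed: bool = False) -> float:
    """Weighted combination of per-category satisfaction; zero if vetoed."""
    if vetoed:
        return 0.0
    return sum(weights.get(c, 0.0) * satisfied.get(c, 0.0) for c in CATEGORIES)


# Fit matrix: one row of (hypothetical) weights per reference.
fit_matrix = {
    "ref_vectorised": {"values": 0.5, "invariants": 0.2, "collections": 0.3},
    "ref_iterative":  {"values": 0.3, "invariants": 0.5, "collections": 0.2},
}

# Per-category satisfaction fractions extracted from the student submission.
student = {"values": 0.9, "invariants": 0.4, "collections": 0.7}

scores = {name: fit_score(w, student) for name, w in fit_matrix.items()}
best, best_score = max(scores.items(), key=lambda kv: kv[1])
print(f"Best fit: {best} (score {best_score:.2f})")  # ref_vectorised, 0.74
```

As the last paragraph notes, a fallback would still be needed when all scores are low or when the top scores are too close together to pick a single reference confidently.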

@rolotumazi

After further discussion with @marijanbeg, and some much-needed simplifying to be in line with what I understood @chrispyles to be saying in our previous meeting, I've written up my thoughts on this issue [attached].
pybryt_multi_refs.pdf
