checking calculated rates for the manual test #17

Open
orbeckst opened this issue Jun 2, 2016 · 6 comments

orbeckst commented Jun 2, 2016

The current manual testing protocol contains a short (0.5 ns) trajectory of I-FABP. The rates that come out of running it through hop are not very believable; see the discussion following #11 (comment).

We want to figure out if these rates are correctly calculated but just not converged or if there's a deeper problem.

(This is also important if we want to use a short version of the test trajectory for proper tests #2.)


orbeckst commented Jun 2, 2016

@iwelland suggested the following approach (#11 (comment)):

I have a 1 microsecond IFABP trajectory; let me compare rates on 500 ps windows and also compare them to windows of 1 ns and longer.

Without experimental data or a direct MD method to compute the rate, we can compare multiple windows and look at convergence; the variance of the rate distribution over several 0.5 ns windows might be informative.
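For reference, a minimal sketch of this window-comparison idea in plain numpy (it does not use hop's own machinery): estimate a rate in each window as the inverse of the mean dwell time and look at the spread across windows. The function name and the synthetic dwell-time data are purely illustrative.

```python
# Sketch only: per-window rate estimates from dwell events, not hop's API.
import numpy as np

def rates_per_window(event_times, dwell_times, window_length, total_time):
    """Estimate a rate (1/mean dwell time) in consecutive time windows.

    event_times  -- times (ns) at which each dwell event started
    dwell_times  -- corresponding dwell durations (ns)
    window_length, total_time -- window size and trajectory length (ns)
    """
    edges = np.arange(0.0, total_time + window_length, window_length)
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_window = (event_times >= lo) & (event_times < hi)
        if in_window.sum() < 2:          # too few events to estimate anything
            rates.append(np.nan)
            continue
        rates.append(1.0 / dwell_times[in_window].mean())
    return np.asarray(rates)

# Example with synthetic data: true rate 10/ns, 0.5 ns windows over 10 ns.
rng = np.random.default_rng(42)
dwell = rng.exponential(scale=0.1, size=1000)        # mean dwell time 0.1 ns
start = np.sort(rng.uniform(0.0, 10.0, size=1000))   # event start times
k = rates_per_window(start, dwell, window_length=0.5, total_time=10.0)
print("rate per window: mean = %.2f /ns, std = %.2f /ns"
      % (np.nanmean(k), np.nanstd(k)))
```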


orbeckst commented Jun 2, 2016

I replied (#11 (comment)):
Good ideas! Can you

  • compute the rates from the full 1 µs trajectory (0 - 1000 ns)
  • from 5 short 1 ns windows (such as 0 - 1 ns, 100 - 101 ns, 200 - 201 ns, ...)
  • from 5 very short 0.5 ns windows (0 - 0.5 ns, 100 - 100.5 ns, ...)

Then let's look at the raw rate.txt files. We can then decide how to continue, e.g., making histograms of rates, computing the rates over all blocks, etc. But I don't want to do anything exhaustive until we have a rough idea of what to expect.
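As a starting point for the histogram step, something along these lines could collect the rates from the per-window output files. The directory layout (`window_*/rate.txt`) and the assumption that the last column holds the rate are guesses and would need to be adapted to hop's actual output format.

```python
# Sketch: gather rates from per-window rate.txt files and histogram them.
# File layout and column choice are assumptions, not hop's documented format.
import glob
import numpy as np
import matplotlib.pyplot as plt

rates = []
for fname in sorted(glob.glob("window_*/rate.txt")):   # hypothetical layout
    data = np.loadtxt(fname)
    rates.append(np.atleast_2d(data)[:, -1])            # assumed rate column
rates = np.concatenate(rates)

plt.hist(rates, bins=30)
plt.xlabel("rate (1/ns)")
plt.ylabel("count")
plt.title("site-to-site rates across windows")
plt.savefig("rate_histogram.png")
```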


orbeckst commented Sep 1, 2016

@iwelland, you put forward the hypothesis that the weird numbers are due to the exponential fits being generated from very short trajectories. Did you look into how good an estimator the exponential fit is for rates with few/short events?


iwelland commented Sep 1, 2016

Not yet; what do you think the best method is?

Some possibilities:

  1. Build a very simple Hamiltonian with two states and fit exponentials under different conditions (a minimal sketch of this idea follows after the list)
  2. Look for statistical tests that measure the goodness of fit of this model and compute it for each rate
  3. Look in the literature for a paper on the subject
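A minimal sketch of option 1, assuming nothing about hop itself: draw exponential dwell times from a two-state model with a known escape rate, fit the empirical survival function with a single exponential, and see how the fitted rate behaves as the number of events shrinks. All rates and sample sizes below are illustrative, not taken from the IFABP data.

```python
# Sketch: bias/variance of an exponential-fit rate estimator with few events.
import numpy as np
from scipy.optimize import curve_fit

k_true = 2.0                        # true escape rate from state 1 (1/ns)
rng = np.random.default_rng(0)

def fitted_rate(n_events):
    """Draw n_events exponential dwell times and fit S(t) = exp(-k t)."""
    dwell = rng.exponential(scale=1.0 / k_true, size=n_events)
    t = np.sort(dwell)
    survival = 1.0 - np.arange(1, n_events + 1) / n_events
    popt, _ = curve_fit(lambda t, k: np.exp(-k * t), t, survival, p0=[1.0])
    return popt[0]

for n in (5, 10, 50, 500):
    fits = [fitted_rate(n) for _ in range(200)]
    print(f"n = {n:4d}: fitted k = {np.mean(fits):.2f} "
          f"+/- {np.std(fits):.2f} (true {k_true})")
```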


orbeckst commented Sep 2, 2016

Simulating a simple two-state model and then fitting it seems simple enough, so why not try it?

If you find that you can accurately fit rates even with few events, then this indicates that the assumption of a simple two-state model in hop is flawed (at least for some sites).

(2) and (3) might be quicker if you can find an answer quickly… what does Google or the ASU library search say?
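If we go with (2), a simple first pass could be a Kolmogorov-Smirnov test of the observed dwell times against the fitted single exponential, applied per rate. This is only a sketch with synthetic data; note that estimating the scale from the same data biases the KS p-value towards acceptance, so a Lilliefors-type correction or a parametric bootstrap would be more rigorous.

```python
# Sketch: goodness-of-fit check of a single-exponential dwell-time model.
import numpy as np
from scipy import stats

def exponential_gof(dwell_times):
    """Return the fitted rate and the KS p-value for a single exponential."""
    dwell_times = np.asarray(dwell_times)
    scale = dwell_times.mean()                      # MLE of the mean dwell time
    ks = stats.kstest(dwell_times, "expon", args=(0.0, scale))
    return 1.0 / scale, ks.pvalue

# Example with synthetic dwell times (mean 0.5 ns, only 20 events):
rng = np.random.default_rng(1)
rate, p = exponential_gof(rng.exponential(scale=0.5, size=20))
print(f"fitted rate = {rate:.2f} /ns, KS p-value = {p:.2f}")
```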


iwelland commented Sep 2, 2016

I will try 1, but I actually think the easiest thing to do is to consider several other plausible distribution functions and fit them to the 1 µs Aqp1 data. Provided the bi-exponential fits are good in the first place, deviations between the simulation results and the walker simulation will give us a sense of how much these things matter. We can also try various hand-waving heuristics to discard edges/nodes and sanitize the model as a starting point before applying anything more rigorous.
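For the distribution comparison, a sketch along these lines could contrast single- and bi-exponential fits of a survival curve; the synthetic two-component dwell times below merely stand in for the 1 µs Aqp1 data.

```python
# Sketch: compare single- vs bi-exponential fits of an empirical survival curve.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
# Synthetic two-component dwell times: 70% fast (0.1 ns), 30% slow (1.0 ns).
dwell = np.where(rng.random(2000) < 0.7,
                 rng.exponential(0.1, 2000), rng.exponential(1.0, 2000))
t = np.sort(dwell)
S = 1.0 - np.arange(1, t.size + 1) / t.size      # empirical survival function

def single(t, k):
    return np.exp(-k * t)

def double(t, a, k1, k2):
    return a * np.exp(-k1 * t) + (1.0 - a) * np.exp(-k2 * t)

p1, _ = curve_fit(single, t, S, p0=[1.0])
p2, _ = curve_fit(double, t, S, p0=[0.5, 5.0, 1.0], bounds=(0, [1, 100, 100]))

for name, model, p in (("single", single, p1), ("double", double, p2)):
    rss = np.sum((S - model(t, *p)) ** 2)
    print(f"{name:6s} exponential: params = {np.round(p, 3)}, RSS = {rss:.4f}")
```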
