Criteria for pull requests of new models? #37

Open
jkfindeisen opened this issue Nov 29, 2017 · 7 comments
Comments

@jkfindeisen
Collaborator

Example: in #35, Michele Scipioni suggests merging a model for PET pharmacokinetics into this repository (gpufit/Gpufit). It might be useful for some, although it is already quite specific and would require maintenance in the long run.

What should our general guidance/policies be in that regard?

@jkfindeisen
Collaborator Author

The alternatives to inclusion would be that such models could be included later, or that others could maintain their own forks of Gpufit containing their own special models.

@mscipio

mscipio commented Nov 29, 2017

Regarding the specific issue in #35, I am the first to suggest that you not proceed with the merge.
The pull request is there just to show someone who had asked earlier what kind of models I am working with.

I plan to develop a more generic implementation in the near future (as you can read in the model's "help", that one is specific to one combination of the number of compartments and the chosen input-function model, so there is nothing "generic" about it).

The first step toward making it more generic would be to also implement fitting of the input function in Gpufit (not very useful on its own, since it would be a fit of a single time curve). Once we figure out a way to implement numerical convolution, though, we would be able to make it truly generic.

My 2 cents on this issue: try to think of a way to make the library extensible via some sort of plugin mechanism.
Let me explain. At the moment, implementing a new model requires writing a *.cuh file, which is fine, but it also requires changes to constants.h and models.cuh. If you find a way to let contributors focus only on the *.cuh file, it would be easier to have multiple forks that differ only in the *.cuh files they use.
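To illustrate what I mean, a model file today looks roughly like the sketch below (the device-function signature is approximated from the existing *.cuh model files and the names are made up, so details may differ):

```cuda
// linear_1d.cuh -- hypothetical example of a model kept in a single file.
// The signature is modeled on the existing Gpufit *.cuh model files and
// may not match the current code base exactly.

#include <cstddef>

__device__ void calculate_linear_1d(
    float const * parameters,        // p[0] = offset, p[1] = slope
    int const n_fits,
    int const n_points,
    float * value,                   // model values, one per data point
    float * derivative,              // Jacobian entries, n_points per parameter
    int const point_index,
    int const fit_index,
    int const chunk_index,
    char * user_info,                // optional extra data, e.g. x coordinates
    std::size_t const user_info_size)
{
    float const x = static_cast<float>(point_index);

    // model value at this data point
    value[point_index] = parameters[0] + parameters[1] * x;

    // partial derivatives at this data point
    float * current_derivative = derivative + point_index;
    current_derivative[0 * n_points] = 1.f;  // d/d(offset)
    current_derivative[1 * n_points] = x;    // d/d(slope)
}
```

Even with such a file, one still has to register the model in constants.h and models.cuh before it can be selected; that registration step is what a plugin mechanism would remove.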

What do you think about this?

@jkfindeisen
Collaborator Author

What do you think about this?

I agree with you. Gpufit should be easier to extend, and it should allow the model to specify additional custom steps, such as a convolution, all in one file. That's definitely the way to go.

@superchromix
Collaborator

We are discussing options for the simplification of Gpufit customization. For the time being, we will not be merging new models into the master branch.

@Superzorg

I might have missed this in the comments, but has anybody considered passing a function pointer into Gpufit that defines the custom model? The upshot of this approach is that you don't need to modify Gpufit itself to implement and use a custom model. A good example of this technique is the qsort function in the C standard library, where the comparator is defined by the user and passed into qsort. See the 'compar' argument in the qsort man page.

There are other advantages to this approach as well; for instance, there would no longer be a need for the user_info options, as these would be handled directly by the user.
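To make the analogy concrete, here is the qsort pattern in plain host code (nothing Gpufit-specific, just the shape of a library call that takes user code as a parameter):

```cuda
#include <cstdio>
#include <cstdlib>

// User-defined comparator: qsort knows nothing about the data and only
// calls back into this function through its 'compar' parameter.
static int compare_floats(void const * a, void const * b)
{
    float const fa = *static_cast<float const *>(a);
    float const fb = *static_cast<float const *>(b);
    return (fa > fb) - (fa < fb);
}

int main()
{
    float data[] = { 3.f, 1.f, 2.f };
    std::qsort(data, 3, sizeof data[0], compare_floats);

    for (int i = 0; i < 3; ++i)
        std::printf("%g\n", data[i]);
    return 0;
}
```

The Gpufit equivalent would be passing a model callback instead of a comparator.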

@superchromix
Collaborator

Fully agree with this. We have tried using function pointers in CUDA to pass references to the model functions, but there was a huge cost in processing speed that made the solution unworkable. There may be another solution, which we are exploring, but for now the model function needs to be compiled into the code.
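For readers who have not seen it, the general CUDA pattern for passing a device function pointer into a kernel looks roughly like the sketch below (made-up names, not the code we actually benchmarked); the indirect call inside the kernel is what typically prevents inlining and costs the speed:

```cuda
#include <cuda_runtime.h>

// Hypothetical signature for a pluggable model function.
typedef float (*model_fn)(float x, float const * p);

__device__ float gaussian_model(float x, float const * p)
{
    float const arg = (x - p[1]) / p[2];
    return p[0] * expf(-0.5f * arg * arg);
}

// Device-side symbol holding the pointer; it has to be copied back to the
// host with cudaMemcpyFromSymbol before it can be passed to a kernel.
__device__ model_fn d_gaussian_ptr = gaussian_model;

__global__ void evaluate(model_fn f, float const * p, float * out, int n)
{
    int const i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = f(static_cast<float>(i), p);  // indirect call, usually not inlined
}

int main()
{
    model_fn h_fn = nullptr;
    cudaMemcpyFromSymbol(&h_fn, d_gaussian_ptr, sizeof h_fn);
    // ...allocate p and out on the device, then:
    // evaluate<<<blocks, threads>>>(h_fn, d_p, d_out, n);
    return 0;
}
```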

@Superzorg

Great - sounds like you guys are on the right track! Just out of curiosity, what ended up being the performance issue?

Also, if there is ever a need for a C++ interface, the Thrust library uses functors to pass customized kernels in this way. Functionally, it's equivalent to function pointers, but syntactically it's achieved by overloading operator() in a class (hence the name functor, i.e. a function object).
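A minimal sketch of that Thrust pattern (the model and numbers are made up, just to show the shape):

```cuda
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/transform.h>

// The "model" is a class whose operator() runs on the device. Because the
// type is known at compile time, the call can be inlined, unlike a call
// through a function pointer.
struct gaussian_functor
{
    float amplitude, center, width;

    __host__ __device__
    float operator()(float x) const
    {
        float const arg = (x - center) / width;
        return amplitude * expf(-0.5f * arg * arg);
    }
};

int main()
{
    thrust::device_vector<float> x(100), y(100);
    thrust::sequence(x.begin(), x.end());                  // x = 0, 1, 2, ...
    thrust::transform(x.begin(), x.end(), y.begin(),
                      gaussian_functor{1.f, 50.f, 10.f});  // y[i] = model(x[i])
    return 0;
}
```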
