Here are some pointers that I think might help with the implementation!
The goal is to generate an extra, consolidated CSV file when several datasets are uploaded and an "Advanced > Consolidation" option is ticked. If you uploaded three datasets, enabled consolidation, then ran PGFinder, it should download four files. Three of those will be the three it returns now (one CSV file per dataset), and the fourth will be the consolidation of those other three.
Add a field to the `pyio` import from the JS side that stores the bool for whether you should consolidate or not
Then check that it's true and that you have more than one file in `msData`
Then call the consolidation function on that list of individual CSV outputs
You could also change `analyze` to return a tuple of the original DataFrame (`matched`) as well as the CSV; then the consolidation function just takes DataFrames and doesn't need to do any more CSV parsing
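The control flow above might look roughly like the following sketch. Everything here is an assumption for illustration, not the real shim.py API: `PyioStub`, its `ms_data`/`consolidate` fields, the `run_analysis` wrapper, and the stand-in `analyze` and `consolidate` functions are all hypothetical names.

```python
from dataclasses import dataclass, field

import pandas as pd


# Hypothetical stand-in for the pyio object shuttled over from the JS side.
@dataclass
class PyioStub:
    ms_data: list = field(default_factory=list)
    consolidate: bool = False


def analyze(dataset) -> tuple[pd.DataFrame, str]:
    """Stand-in for the real analysis: returns (matched_df, csv_text)."""
    matched = pd.DataFrame(dataset)
    return matched, matched.to_csv(index=False)


def consolidate(matched: list[pd.DataFrame]) -> pd.DataFrame:
    """Stand-in for the real consolidation step (just stacks here)."""
    return pd.concat(matched, ignore_index=True)


def run_analysis(pyio: PyioStub) -> list[str]:
    matched_dfs: list[pd.DataFrame] = []
    csvs: list[str] = []
    for dataset in pyio.ms_data:
        # analyze() changed to return the DataFrame as well as the CSV,
        # so no CSV re-parsing is needed later.
        matched, csv_text = analyze(dataset)
        matched_dfs.append(matched)
        csvs.append(csv_text)

    # Only emit the extra consolidated CSV when the option is ticked
    # and there is more than one dataset to combine.
    if pyio.consolidate and len(matched_dfs) > 1:
        csvs.append(consolidate(matched_dfs).to_csv(index=False))
    return csvs
```

With three datasets and consolidation enabled, `run_analysis` returns four CSV strings; with it disabled (or a single dataset), it returns one per dataset.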
Writing the actual consolidation function should be relatively straightforward:
Stack (vertically concatenate) the DataFrames from each dataset
Group by structure
Compute the mean and standard deviation for each group (for columns like intensity and retention time)
Write the result into a final table
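Those four steps could be sketched with pandas as below. The column names (`structure`, `intensity`, `rt`) are placeholders, not the actual PGFinder output headers, so they'd need to be swapped for the real ones.

```python
import pandas as pd


def consolidate(matched: list[pd.DataFrame]) -> pd.DataFrame:
    """Consolidate per-dataset results into one summary table.

    Column names here are placeholders for the real PGFinder headers.
    """
    # 1. Stack the per-dataset results into one long table.
    stacked = pd.concat(matched, ignore_index=True)
    # 2./3. Group by structure, then take mean and stddev of the
    # numeric columns of interest.
    summary = stacked.groupby("structure")[["intensity", "rt"]].agg(["mean", "std"])
    # 4. Flatten the ("intensity", "mean")-style MultiIndex columns
    # so the result writes out as a plain CSV.
    summary.columns = ["_".join(col) for col in summary.columns]
    return summary.reset_index()
```

The flattened result can then be written with `to_csv` alongside the per-dataset files.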
@smesnage should be able to supply another reference file against which to check this process!
Finally, on the web side, you'll need to add that option to the "Advanced" options for toggling this feature.
Add the new toggle alongside the existing `<input>` elements in `AdvancedOptions.svelte`.
The place to hook into this is here:

pgfinder/lib/pgfinder/gui/shim.py, lines 17 to 36 (at commit a334000)
The relevant web-side code locations (all at commit a334000):

pgfinder/web/src/routes/AdvancedOptions.svelte, lines 19 to 54 — the column of `<input>` elements for the advanced options
pgfinder/web/src/routes/AdvancedOptions.svelte, lines 5 to 10 — the `<input>` state object
pgfinder/web/src/app.d.ts, lines 3 to 10 — the `Pyio` type, used to shuttle data from the web UI to PGFinder
pgfinder/web/src/routes/+page.svelte, lines 117 to 124 — add a `pyio` field to the button state in the `<AdvancedOptions>`
That should be more than enough to get started with, and I'm happy to guide whoever takes this on through the implementation!