Addition of residual block support for pyg_to_hls #645
Open
green-cabbage wants to merge 42 commits into fastmachinelearning:main from cms-p2l1trigger-tau3mu:MiaLiu
Conversation
Commits:
Add files via upload
Update hls_layers.py
Update hls_layers.py
Update hls_model.py
Update hls_model.py
Update vivado_writer.py
Add files via upload
Update vivado_template.py
Update vivado_writer.py
added vec_to_mat, mat_to_vec
Delete nnet_graph.h
Add files via upload
handling for 1D-vector inputs and outputs
Update vivado_writer.py
Update pyg_to_hls.py
Update hls_layers.py
Update vivado_template.py
Update pyg_to_hls.py
Update hls_layers.py
Update nnet_activation.h
Update nnet_dense_resource.h
wip
Add files via upload
Add files via upload
added testbench handling for concatenation
added weights tranpose, deleted hard-coded testbench
add nnet::matrix_config
cleaning up
add HLSModel_GNN._get_top_function(), HLSModel_GNN.predict(..)
top-level conversion function (starting ground)
update naming conventions: 'Rn' --> 'node_attr', 'Re' --> 'edge_attr'
update order of inputs: 1.node_attr, 2.edge_attr, 3.edge_index
update naming conventions, edge_index shape (now, edge_index.shape=[N_EDGE, 2])
add max, mean aggregation. update naming conventions
generalize all the aggregation methods in a single function
cleaning up
added precision and testbenching
added precision handle
added handling for different flow direction
cleaning up from 'save intermediates'
slight improvement on max aggregation
fixed max-aggregation
minor updates
generalizing pyg_to_hls()
tidying up
improved handling for user-input precision
re-included extra #pragma HLS UNROLL, don't know if this is correct yet
implemented LUT-division
missed a semicolon
added handling for initial aggregation layer
added Aggregate layer, self._check_inputs()
added Aggregate layer
use existing test bench code
update naming conventions
tidying up, added #pragma HLS UNROLL to nnet_array::vec_to_mat/mat_to_vec
ditched single-edge aggregation functions
re-added single-edge-aggregation functions
re-added #pragma HLS ARRAY_PARTITION
speciy model inputs; parition merge
cleaning up, update naming conventions
ditched single-edge-aggregation functions, improved sender-index/receiver-index handling
changed 'else if{' to 'else{ if{'
split different aggregation methods into separate functions
max not fully functional yet, just committing changes before switching branch
got max-aggregation LOGIC working, still testing build_prj.tcl
fixed up max-aggregation again
…ing. Also may need to move the residual block to nnet_merge.h because it's more of a merge block than a graph block
…y gives bad conversion, so I commented it out for now. Still getting bad MSE for our own data, but getting good MSE values with the data given in the walkthrough
… of MSE compared to the model with only the encoder on it
…oder class, but using nnet::dense_resource for the nodeblock class
…caler for tau3mu dataset is unnecessary if integer bit width is sufficient
…agnitude of e-6, though now a good chunk of them are on the order of magnitude of e-5
… problem due to cpp names in parameters.h; really have to fix that later
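For context on the commit notes above: a residual block is, at its core, an element-wise add of a block's output with its skip-connection input, which is why it arguably belongs next to the other element-wise merge functions in nnet_merge.h rather than in a graph-specific header. The following is only a minimal, self-contained sketch of what such a merge-style residual add could look like; the names residual_config and residual_add, and the config layout, are hypothetical and not taken from this PR's code.

// Minimal sketch (hypothetical names; not the PR's actual code): a residual
// connection reduced to an element-wise add over a flat feature array.
#include <cstddef>

namespace nnet {

struct residual_config {
    static const unsigned n_elem = 64; // features per node (example value)
};

template <class data_T, class res_T, typename CONFIG_T>
void residual_add(const data_T block_out[CONFIG_T::n_elem],
                  const data_T skip_in[CONFIG_T::n_elem],
                  res_T res[CONFIG_T::n_elem]) {
    for (std::size_t i = 0; i < CONFIG_T::n_elem; i++) {
        // In an HLS build this loop would typically carry #pragma HLS UNROLL,
        // mirroring the other element-wise merge functions.
        res[i] = block_out[i] + skip_in[i];
    }
}

} // namespace nnet

Called as nnet::residual_add<data_T, res_T, residual_config>(block_out, skip_in, res) after the block it wraps, this treats the skip connection as a plain merge rather than a graph-specific layer.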
jmitrevs reviewed on Jan 18, 2023
@@ -331,6 +333,66 @@ void softmax(hls::stream<data_T> &data, hls::stream<res_T> &res){
    }
}

// template <class data_T, class res_T, typename CONFIG_T>
Commented-out code, printouts, etc. need to be removed. New printouts should also be removed.
This is my first time submitting a pull request, let alone for work, so corrections are appreciated.
Description
Type of change