In `/examples/inference/dlrm_client.py`, the sparse feature values are serialized as `Int64`:

```python
id_list_features = predictor_pb2.SparseFeatures(
    num_features=args.num_id_list_features,
    values=to_bytes(batch.sparse_features.values()),
    lengths=to_bytes(batch.sparse_features.lengths()),
)
```
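The width mismatch is already visible from the payload size alone. A minimal sketch (using numpy with made-up values; the actual `to_bytes` helper may serialize differently, but the per-value width is the point):

```python
import numpy as np

# Hypothetical sparse feature values, serialized as int64 like the client does.
values = np.array([5012, 9017], dtype=np.int64)
payload = values.tobytes()

# 8 bytes per value on the wire, but the server will only consume 4 per feature.
print(len(payload))  # 16
```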
However, in `torchrec/inference/src/Batching.cpp` (line 171 and line 208), the `combineSparse` function treats the sparse feature values as `int32`:

```cpp
auto values = at::empty({totalLength}, options.dtype(at::kInt));
...
len = featureLengths[j][i] * sizeof(int32_t);
valuesCursor[j].pull(valuesRange.data(), len);
valuesRange.advance(len);
```
This reinterprets the `int64` buffer as `int32`. If we print the values here, we get:

```
... 5012 0 9017 0 72546 0 63898 0 61197 0 31162 0 2567 0 89318 0 79668 0 ...
```

Each 8-byte `int64` value is split into its low 32-bit word followed by a zero high word, and because only 4 bytes are read per feature, only half of the input buffer is consumed.
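The observed output can be reproduced outside the server; a short sketch (using numpy, with values taken from the printed output above, assuming a little-endian platform):

```python
import numpy as np

# Values matching the issue's example output, serialized as int64 by the client.
values_int64 = np.array([5012, 9017, 72546], dtype=np.int64)

# Reinterpreting the same buffer as int32 -- effectively what Batching.cpp does --
# yields each value's low 32-bit word followed by its zero high word
# (little-endian, values < 2**31).
misread = values_int64.view(np.int32)
print(misread.tolist())  # [5012, 0, 9017, 0, 72546, 0]
```

This matches the interleaved zeros in the printed values, confirming a plain int64/int32 width mismatch rather than corrupted input.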