
Upgraded to Sentis v1.4.0-pre.3 #6097

Open

miguelalonsojr wants to merge 4 commits into develop from develop-upgrade-sentis-1-4

Conversation

@miguelalonsojr (Collaborator) commented Apr 18, 2024

Proposed change(s)

Sentis v1.4 upgrade.

Useful links (Github issues, JIRA tickets, ML-Agents forum threads etc.)

Types of change(s)

  • Bug fix
  • New feature
  • Code refactor
  • Breaking change
  • Documentation update
  • Other (please describe)

Dependency update.

Checklist

  • Added tests that prove my fix is effective or that my feature works
  • Updated the changelog (if applicable)
  • Updated the documentation (if applicable)
  • Updated the migration guide (if applicable)

Other comments

@miguelalonsojr force-pushed the develop-upgrade-sentis-1-4 branch 6 times, most recently from 5ccc26b to 0cd7f73, on April 19, 2024 at 17:51
@miguelalonsojr marked this pull request as ready for review on April 19, 2024 at 17:53
@mplantady (Contributor) left a comment:

That looks good to me!

@@ -5,7 +5,7 @@
"unity": "2023.2",
Contributor:

Do you need to bump the version? And update the changelog?

Collaborator Author (@miguelalonsojr):

Changelog yes, version should be ok I think. Did 2023.3 come out already?
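For reference, a Sentis upgrade like this typically touches the dependency block of the package's package.json. The fragment below is illustrative only; the "unity": "2023.2" line and the 1.4.0-pre.3 version come from this page, the rest of the layout is assumed.

    "unity": "2023.2",
    "dependencies": {
        "com.unity.sentis": "1.4.0-pre.3"
    }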

@@ -165,10 +170,6 @@ internal void ComputeCdf(TensorProxy logProbs, int batch, int channelOffset, int
{
// Find the class maximum
var maxProb = float.NegativeInfinity;
if (logProbs.Device == DeviceType.GPU)
{
logProbs.data.MakeReadable();
Contributor:

Should you add a CompleteOperationsAndDownload() in this method?

Collaborator Author (@miguelalonsojr):

Yes, good catch. This is for legacy support, but yeah, it should be CompleteOperationsAndDownload for non-CPU backends.
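A minimal sketch of what that guard could look like, assuming TensorProxy exposes the underlying Sentis tensor through its data field as in the diff above (the exact placement inside ComputeCdf is an assumption):

    // Only a tensor on a non-CPU backend needs an explicit blocking readback
    // before its values can be read on the CPU.
    if (logProbs.data.dataOnBackend.backendType != BackendType.CPU)
    {
        logProbs.data.CompleteOperationsAndDownload();
    }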

{
if (data.dataOnBackend.backendType != BackendType.CPU)
{
data?.Dispose();
Contributor:

Why is it not needed to dispose when using CPU?

Collaborator Author (@miguelalonsojr):

There's no need to copy to CPU memory; it's already there and accessible. This really only applies, at least for us, when using GPU backends.
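To illustrate the point, a hypothetical helper (not the actual ML-Agents code; the function name is made up, and the indexing assumes a one-dimensional tensor) showing why the backend check matters: a GPU-resident tensor needs an explicit readback, and only the GPU-side copy ever needs releasing, while a CPU-backend tensor is readable in place.

    // Hypothetical illustration; TensorFloat, BackendType and
    // CompleteOperationsAndDownload are Sentis APIs, the helper itself is not.
    static float FirstValue(TensorFloat tensor)
    {
        if (tensor.dataOnBackend.backendType != BackendType.CPU)
        {
            // Complete pending GPU work and download the data to the CPU.
            tensor.CompleteOperationsAndDownload();
        }
        // A CPU-backend tensor is already readable; no copy or dispose needed.
        return tensor[0];
    }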

@@ -76,9 +74,10 @@ internal class ModelRunner
// D.logEnabled = m_Verbose;

sentisModel = ModelLoader.Load(model);
sentisModelInfo = new SentisModelInfo(sentisModel, deterministicInference);
Contributor:

You could do

    using var sentisModelInfo = new SentisModelInfo(sentisModel, deterministicInference);

to avoid having to call Dispose manually at the end (which might not happen if an exception is thrown).

Collaborator Author (@miguelalonsojr):

Ah, good point. Will change.
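A sketch of the suggested pattern, with the surrounding ModelRunner constructor abbreviated; the using declaration ties disposal to the end of the enclosing scope, so it runs even if an exception is thrown:

    sentisModel = ModelLoader.Load(model);
    // `using var` disposes sentisModelInfo automatically when the enclosing
    // block exits, so no explicit sentisModelInfo.Dispose() call is needed
    // later, and disposal still happens on an exception.
    using var sentisModelInfo = new SentisModelInfo(sentisModel, deterministicInference);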
