
Retry MPU chunk upload on timeout error #11969

Open

harupy wants to merge 6 commits into master

Conversation

@harupy (Member) commented May 10, 2024

pip install git+https://github.com/mlflow/mlflow.git@refs/pull/11969/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Retry each multipart upload (MPU) chunk upload when it fails with a timeout error.
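
A minimal sketch of the retry behavior described above, not the exact code in this PR; upload_fn, timeout, max_retries, and executor are illustrative names:

import logging
from concurrent.futures import TimeoutError as FutureTimeoutError

_logger = logging.getLogger(__name__)

def upload_chunk_with_retries(executor, upload_fn, index, timeout, max_retries):
    # Submit one MPU chunk upload and retry it if waiting on the future times out.
    for attempt in range(max_retries + 1):
        future = executor.submit(upload_fn, index)
        try:
            return future.result(timeout=timeout)
        except FutureTimeoutError:
            # Cancel so a never-started future does not leak a queued task.
            future.cancel()
            _logger.debug("Chunk %s timed out on attempt %s, retrying", index, attempt + 1)
    raise FutureTimeoutError(f"Chunk {index} timed out after {max_retries + 1} attempts")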

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)


github-actions bot commented May 10, 2024

Documentation preview for 1eca080 will be available when this CircleCI job
completes successfully.


@harupy harupy changed the title Timeout and retry MPU chunk upload [WIP] Timeout and retry MPU chunk upload May 10, 2024
@smurching (Collaborator) left a comment:

@harupy looks pretty good (once we have tests, etc.), thanks for this PR! Let me tag ML dev folks too for review.

@smurching smurching requested a review from liangz1 May 10, 2024 16:23

def job(index):
    start_byte = index * chunk_size
    self._azure_upload_chunk(
        self._azure_upload_chunk,
Member

this recursive function reference needs to be removed, right?

Member Author

Thanks for the catch! It must be removed.
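
For illustration, a hedged sketch of what the corrected helper might look like once the stray self-reference is dropped; the keyword arguments are placeholders, not the actual signature of _azure_upload_chunk:

def job(index):
    start_byte = index * chunk_size
    # Pass only the chunk parameters; the accidental extra
    # `self._azure_upload_chunk` argument from the diff above is gone.
    self._azure_upload_chunk(
        local_file=local_file,   # placeholder argument names
        start_byte=start_byte,
        size=chunk_size,
    )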

@harupy harupy changed the title [WIP] Timeout and retry MPU chunk upload Timeout and retry MPU chunk upload May 13, 2024
@github-actions github-actions bot added the rn/none List under Small Changes in Changelogs. label May 13, 2024
Comment on lines 74 to 77
# Cancel timed out futures to avoid leaking threads
for index, f in timed_out_futures.items():
    _logger.debug("Cancelling future for chunk %s", index)
    f.cancel()
Member Author

Futures in `futures` should have already finished at this point.
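
For context, a minimal sketch of the wait-and-cancel pattern being discussed, assuming hypothetical names upload_fn, num_chunks, and timeout_seconds (this is not the exact PR code):

from concurrent.futures import ThreadPoolExecutor, wait

pool = ThreadPoolExecutor()
futures = {index: pool.submit(upload_fn, index) for index in range(num_chunks)}
# wait() splits the futures into those that completed and those that did not
# within the timeout.
done, not_done = wait(futures.values(), timeout=timeout_seconds)
timed_out_futures = {i: f for i, f in futures.items() if f in not_done}
# Cancel timed out futures to avoid leaking threads; cancel() only succeeds
# for futures that have not started running yet.
for index, f in timed_out_futures.items():
    f.cancel()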

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
@harupy harupy changed the title Timeout and retry MPU chunk upload Retyr MPU chunk upload on timeout errror May 13, 2024
@harupy harupy changed the title Retyr MPU chunk upload on timeout errror Retry MPU chunk upload on timeout errror May 13, 2024
future = self.chunk_thread_pool.submit(

def upload_fn(index):
    start_byte = index * chunk_size.get()
Member

Do we need the 2nd .get() on this scalar?

Member Author

No, thanks for the catch!

@BenWilson2 (Member) left a comment:

This is great! Thanks for the elegant refactoring here @harupy :D

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
@harupy harupy changed the title Retry MPU chunk upload on timeout errror Retry MPU chunk upload on timeout error May 14, 2024
Comment on lines +319 to +331
#: Specifies the timeout in seconds for chunk upload future tasks for multipart upload.
#: Fractional values are accepted, e.g. 1.5 seconds.
#: (default: ``None``)
MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS = _EnvironmentVariable(
    "MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS", float, None
)

#: Specifies the maximum number of attempts to retry a failed chunk upload for multipart upload.
#: (default: ``0``)
MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES = _EnvironmentVariable(
    "MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES", int, 0
)
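
A quick usage note: under the approach in this diff, the two variables above would presumably be configured by the user, e.g. (values are illustrative, not recommended defaults):

import os

os.environ["MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS"] = "900"  # 15-minute chunk timeout
os.environ["MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES"] = "2"        # retry each chunk up to twice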

Collaborator

@harupy We need some sort of default timeout before we retry; otherwise, customers will encounter hanging uploads when they reach storage account ingress limits.

However, I'm concerned about setting too low a default timeout for the overall operation - e.g., if the upload is making progress slowly, it should be allowed to continue.

I'd rather set a connection timeout than a timeout on the overall request, e.g. behavior similar to https://realpython.com/python-requests/#timeouts:~:text=You%20can%20also%20pass%20a%20tuple%20to%20timeout%20with%20the%20following%20two%20elements%3A with the read timeout set to None.

cc @smurching for a sanity check based on your findings during the investigation.
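
A minimal sketch of that suggestion, assuming the chunk is uploaded with the requests library (signed_chunk_url and chunk_bytes are hypothetical names):

import requests

CONNECT_TIMEOUT_SECONDS = 10  # illustrative value, not a proposed default

response = requests.put(
    signed_chunk_url,
    data=chunk_bytes,
    # (connect, read) tuple: bound connection establishment only; a read
    # timeout of None lets a slow but progressing upload continue.
    timeout=(CONNECT_TIMEOUT_SECONDS, None),
)
response.raise_for_status()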

Member Author

Makes sense, I'll file a separate PR for that.


@harupy harupy mentioned this pull request May 16, 2024
@harupy (Member Author) commented May 16, 2024

@dbczumar @BenWilson2 Do you think a future timeout for chunk uploads (with a long timeout such as 30 minutes, as a safety net) still makes sense, just so we time out when a hang occurs? The original code (as_completed without a timeout) looks scary.
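
For reference, a minimal sketch of the pattern in question, assuming `futures` is an iterable of chunk-upload futures; with no timeout, as_completed can block forever, while a long safety timeout makes it raise instead:

from concurrent.futures import as_completed

SAFETY_TIMEOUT_SECONDS = 30 * 60  # the "long timeout" floated in the comment above

# Raises concurrent.futures.TimeoutError if any future is still pending
# after the timeout, instead of hanging indefinitely.
for future in as_completed(futures, timeout=SAFETY_TIMEOUT_SECONDS):
    future.result()  # re-raise any chunk upload error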

Labels
rn/none List under Small Changes in Changelogs.
4 participants