Retry MPU chunk upload on timeout error #11969
base: master
Conversation
Documentation preview for 1eca080 will be available when this CircleCI job completes. More info
@harupy Looks pretty good (once we have tests, etc.), thanks for this PR! Let me tag ML dev folks too for review.
```python
def job(index):
    start_byte = index * chunk_size
    self._azure_upload_chunk(
        self._azure_upload_chunk,
```
this recursive function reference needs to be removed, right?
Thanks for the catch! It must be removed.
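For illustration, this is roughly what the helper looks like once the stray recursive argument is dropped. `make_job`, `upload_chunk`, and the chunk-slicing shown here are stand-ins for sketching purposes, not the PR's exact code:

```python
def make_job(data, chunk_size, upload_chunk):
    # upload_chunk stands in for the storage-specific callable
    # (e.g. self._azure_upload_chunk in the real code).
    def job(index):
        start_byte = index * chunk_size
        # The erroneous extra argument (the function passed to itself)
        # is gone; only the chunk index and bytes go to the uploader.
        upload_chunk(index, data[start_byte : start_byte + chunk_size])

    return job
```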
`mlflow/store/artifact/utils/mpu.py` (Outdated)
```python
# Cancel timed out futures to avoid leaking threads
for index, f in timed_out_futures.items():
    _logger.debug("Cancelling future for chunk %s", index)
    f.cancel()
```
Futures in `futures` should have already finished at this point.
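The retry-on-timeout flow under discussion can be sketched roughly as below. `upload_fn`, `timeout`, and `max_retries` are illustrative assumptions rather than the PR's actual helper names, and note the caveat that `Future.cancel()` cannot stop a chunk upload that is already running:

```python
import concurrent.futures as cf


def upload_chunks_with_retries(executor, indices, upload_fn, timeout, max_retries):
    remaining = list(indices)
    for _attempt in range(max_retries + 1):
        futures = {i: executor.submit(upload_fn, i) for i in remaining}
        timed_out_futures = {}
        for index, f in futures.items():
            try:
                f.result(timeout=timeout)
            except cf.TimeoutError:
                timed_out_futures[index] = f
        # Cancel timed out futures to avoid leaking threads
        # (cancel() is best-effort: already-running uploads keep running).
        for index, f in timed_out_futures.items():
            f.cancel()
        remaining = list(timed_out_futures)
        if not remaining:
            return
    raise TimeoutError(f"Chunks {remaining} still timed out after {max_retries} retries")
```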
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
```python
future = self.chunk_thread_pool.submit(

def upload_fn(index):
    start_byte = index * chunk_size.get()
```
Do we need the 2nd `.get()` on this scalar?
No, thanks for the catch!
This is great! Thanks for the elegant refactoring here @harupy :D
```python
#: Specifies the timeout in seconds for chunk upload future tasks for multipart upload.
#: Fractional values are accepted, e.g. 1.5 seconds.
#: (default: ``None``)
MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS = _EnvironmentVariable(
    "MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS", float, None
)

#: Specifies the maximum number of attempts to retry a failed chunk upload for multipart upload.
#: (default: ``0``)
MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES = _EnvironmentVariable(
    "MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES", int, 0
)
```
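For illustration, these variables resolve to typed values roughly as follows; `read_env` is a simplified stand-in for MLflow's internal `_EnvironmentVariable.get()`, not its real implementation:

```python
import os


def read_env(name, type_, default):
    # Simplified stand-in: parse the raw string with the declared type,
    # falling back to the default when the variable is unset.
    raw = os.environ.get(name)
    return default if raw is None else type_(raw)


os.environ["MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS"] = "1.5"
os.environ["MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES"] = "3"

timeout = read_env("MLFLOW_MULTIPART_CHUNK_UPLOAD_TIMEOUT_SECONDS", float, None)
retries = read_env("MLFLOW_MULTIPART_CHUNK_UPLOAD_MAX_RETRIES", int, 0)
```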
@harupy We need some sort of default timeout before we retry, otherwise customers will encounter hanging uploads when they reach storage account ingress limits.
However, I'm concerned about setting too low of a default timeout for the overall operation - e.g. if the upload is making progress slowly, it should be allowed to continue.
I'd rather set a connection timeout than a timeout on the overall request, i.e. behavior similar to the (connect, read) timeout tuple described at https://realpython.com/python-requests/#timeouts, with a read timeout of `None`.
cc @smurching For a sanity check based on your findings during the investigation
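The connect-timeout-only idea can be sketched with a raw socket: bound only connection establishment, then clear the timeout so a slow-but-progressing transfer is never killed. The helper name and the default value here are illustrative, not from the PR:

```python
import socket


def open_upload_connection(host, port, connect_timeout=10.0):
    # Fail fast if a TCP connection cannot be established within
    # connect_timeout seconds (the storage endpoint may be throttling)...
    sock = socket.create_connection((host, port), timeout=connect_timeout)
    # ...but clear the timeout afterwards, mirroring requests'
    # timeout=(connect_timeout, None): slow progress is still allowed.
    sock.settimeout(None)
    return sock
```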
makes sense, I'll file a separate PR for that.
@dbczumar @BenWilson2 Do you think a future timeout (with a long timeout length like 30 minutes, for safety) for chunk uploads still makes sense, to just time out when a hang occurs? The original code (
Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Retry each MPU chunk upload when it fails with a timeout error.

How is this PR tested?

Does this PR require documentation update?

Release Notes

Is this a user-facing change?

What component(s), interfaces, languages, and integrations does this PR affect?
Components

- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface

- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language

- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations

- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?

- Bug fixes, doc updates and new features usually go into minor releases.
- Bug fixes and doc updates usually go into patch releases.