
Check for chunk size minimum before using multipart upload with S3 #11975

Merged
Merged 1 commit into mlflow:master on May 17, 2024

Conversation

@ian-ack-db (Collaborator) commented May 11, 2024

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

S3 requires chunk sizes to be at least 5 MiB for multipart upload (see the S3 multipart upload documentation). This change asserts that the chunk size environment variable meets that minimum before commencing uploads.
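For concreteness, here is a minimal sketch of the kind of check described above. The function and constant names come from the diff discussed below; the exception type and message wording are assumptions, not necessarily what the PR ships.

    from mlflow.exceptions import MlflowException

    _AWS_MIN_CHUNK_SIZE = 5 * 1024**2  # 5 MiB, S3's minimum multipart part size

    def _validate_chunk_size_aws(chunk_size: int) -> None:
        # Fail fast if MLFLOW_MULTIPART_UPLOAD_CHUNK_SIZE is configured below
        # the S3 minimum, instead of failing midway through an upload.
        if chunk_size < _AWS_MIN_CHUNK_SIZE:
            raise MlflowException(
                f"Multipart chunk size {chunk_size} must be at least "
                f"{_AWS_MIN_CHUNK_SIZE} bytes (5 MiB) for S3 uploads."
            )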

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

"Yes" should be selected for bug fixes, documentation updates, and other small changes. "No" should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)


github-actions bot commented May 11, 2024

Documentation preview for 6c18920 will be available when this CircleCI job completes successfully.

@ian-ack-db marked this pull request as ready for review May 13, 2024 17:19
@ian-ack-db added the rn/bug-fix (Mention under Bug Fixes in Changelogs) label May 13, 2024

    def _validate_chunk_size_aws() -> None:
        chunk_size = MLFLOW_MULTIPART_UPLOAD_CHUNK_SIZE.get()
        if chunk_size <= _AWS_MIN_CHUNK_SIZE:
Member

Suggested change

    -        if chunk_size <= _AWS_MIN_CHUNK_SIZE:
    +        if chunk_size < _AWS_MIN_CHUNK_SIZE:

to allow 5 * 1024**2.
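Concretely, the strict comparison lets the documented minimum itself pass. A quick illustrative check (the constant's value is assumed to be the 5 * 1024**2 mentioned above):

    _AWS_MIN_CHUNK_SIZE = 5 * 1024**2                # 5,242,880 bytes (5 MiB)
    assert not (5 * 1024**2 < _AWS_MIN_CHUNK_SIZE)   # exactly 5 MiB is allowed
    assert (5 * 1024**2 - 1) < _AWS_MIN_CHUNK_SIZE   # one byte less is rejected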

Collaborator Author (@ian-ack-db)

Changed


    def _validate_chunk_size_aws() -> None:
        chunk_size = MLFLOW_MULTIPART_UPLOAD_CHUNK_SIZE.get()
        if chunk_size <= _AWS_MIN_CHUNK_SIZE:
Member

Can we also validate that the chunk size doesn't exceed the upper limit?

https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html

Collaborator Author (@ian-ack-db)

Makes sense, added
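For illustration, the combined bounds check might look like the sketch below. The name _AWS_MAX_CHUNK_SIZE is hypothetical; the 5 GiB value is the maximum part size from the linked S3 quotas page.

    from mlflow.exceptions import MlflowException

    _AWS_MIN_CHUNK_SIZE = 5 * 1024**2  # 5 MiB minimum part size (S3 quota)
    _AWS_MAX_CHUNK_SIZE = 5 * 1024**3  # 5 GiB maximum part size (S3 quota)

    def _validate_chunk_size_aws(chunk_size: int) -> None:
        # Reject chunk sizes outside S3's allowed part-size range up front.
        if not (_AWS_MIN_CHUNK_SIZE <= chunk_size <= _AWS_MAX_CHUNK_SIZE):
            raise MlflowException(
                f"Multipart chunk size {chunk_size} must be between "
                f"{_AWS_MIN_CHUNK_SIZE} and {_AWS_MAX_CHUNK_SIZE} bytes."
            )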

Comment on lines 35 to 36
    def _validate_chunk_size_aws() -> None:
        chunk_size = MLFLOW_MULTIPART_UPLOAD_CHUNK_SIZE.get()
Member

Suggested change

    -def _validate_chunk_size_aws() -> None:
    -    chunk_size = MLFLOW_MULTIPART_UPLOAD_CHUNK_SIZE.get()
    +def _validate_chunk_size_aws(chunk_size: int) -> None:

Can we pass chunk size to this function so the code would look like this?

    chunk_size = ...
    _validate_chunk_size_aws(chunk_size)

Member

Suggested change

    -def _validate_chunk_size_aws() -> None:
    -    chunk_size = MLFLOW_MULTIPART_UPLOAD_CHUNK_SIZE.get()
    +def _validate_chunk_size_aws() -> int:

We can return the passed-in chunk if that's convenient.

Collaborator Author (@ian-ack-db)

Changed to take the chunk size as a parameter. Kept the None return type for simpler testing/mocking setups.
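To illustrate the testing point: because the validator returns None, tests can patch it away without stubbing a return value. A hypothetical sketch (the module path is an assumption, not shown in this PR):

    from unittest import mock

    # Hypothetical module path; the real location is not shown on this page.
    with mock.patch(
        "mlflow.store.artifact.s3_artifact_repo._validate_chunk_size_aws"
    ) as validator:
        ...  # exercise the multipart upload path here
        validator.assert_called_once()  # nothing to stub: the function returns None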

S3 requires chunk sizes to be at least 5 MiB for multipart upload. This change asserts on the chunk size environment variable before commencing uploads.

Add max chunk size validation, a chunk_size argument, and improved tests. Fix the regex match in the test to match the shorter error message.

Signed-off-by: Ian Ackerman <ian.ackerman@databricks.com>
@harupy (Member) left a comment

LGTM!

@ian-ack-db merged commit 5c6e55b into mlflow:master May 17, 2024
41 checks passed
Labels
patch-2.12.3, rn/bug-fix (Mention under Bug Fixes in Changelogs)