
GCSFuse performance on Vertex AI custom training job #1830

Open
miguelalba96 opened this issue Apr 7, 2024 · 6 comments
Labels
p2 P2 · question (Customer Issue: question about how to use tool)

Comments

miguelalba96 commented Apr 7, 2024

Description

I am encountering data loading throughput issues while training a large model on Google Cloud Platform (GCP). Here's some context:

I am utilizing Vertex AI pipelines for my training process. According to GCP documentation, Vertex AI custom training jobs automatically mount GCS (Google Cloud Storage) buckets using GCSFuse. Upon debugging my training setup, I've identified that the bottleneck in data loading seems to be related to GCSFuse, leading to data starvation and subsequent drops in GPU utilization.

I've come across performance tips that discuss caching as a potential solution. However, since Vertex AI configures GCSFuse automatically, it's unclear how to enable caching.

Should I configure caching at runtime when running the training job?
When building the Docker image that contains my code to run as a custom job, should I mount the bucket manually and specify a cache-dir? And won't that be reconfigured by Vertex AI when the job is submitted?

Additional context

I am running distributed training on a 4-node setup within Vertex AI pipelines. Each worker node is an n1-highmem-16 machine equipped with 2 GPUs.

I am using google_cloud_pipeline_components.v1.custom_job.create_custom_training_job_from_component to create the custom training job.

In my code, I'm simply replacing gs:// with /gcs/ as per the GCP documentation for Vertex AI.
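For reference, that translation is a simple prefix substitution: Vertex AI mounts each bucket under `/gcs/<bucket-name>/`. A minimal sketch of the helper (the bucket and file names below are made-up examples, not from the original post):

```python
def gcs_to_fuse_path(gs_uri: str) -> str:
    """Translate a gs:// URI into the /gcs/ path where Vertex AI
    mounts the bucket via GCSFuse."""
    prefix = "gs://"
    if not gs_uri.startswith(prefix):
        raise ValueError(f"not a GCS URI: {gs_uri!r}")
    return "/gcs/" + gs_uri[len(prefix):]

# e.g. gs://my-bucket/data/shard-00000.tfrecord
#   -> /gcs/my-bucket/data/shard-00000.tfrecord
print(gcs_to_fuse_path("gs://my-bucket/data/shard-00000.tfrecord"))
```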

Type of issue

Information - request

@miguelalba96 miguelalba96 added p1 P1 question Customer Issue: question about how to use tool labels Apr 7, 2024
@gargnitingoogle gargnitingoogle self-assigned this Apr 8, 2024
@gargnitingoogle
Collaborator

Hi @miguelalba96, thanks for asking the question and for providing the full context of the problem.

> it's unclear how to enable caching

File caching in GCSFuse is a new feature that was added in v2.0.0, about 3 weeks ago. Unfortunately, Vertex AI pipelines still use an older version of GCSFuse, which doesn't support the file-cache feature, so as of now there is no way to enable or configure file caching in GCSFuse through Vertex AI pipelines.

> Should I configure caching at runtime when running the training job?

As I said above, this is not possible through the Vertex AI job-creation interface right now.

> should I mount manually the bucket and specify cache-dir, won't that be reconfigured by vertex AI when submitting the job?

If you can take control of which GCSFuse version is installed in your container and how it is used to mount buckets, then this might be possible. In that case, you would install GCSFuse v2.0.0 (instructions) in your container and mount your buckets using a config file with the desired file-cache parameters (doc).
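As a rough sketch of that approach (the bucket name, mount point, and cache limits below are placeholders; verify the exact config keys against the GCSFuse docs for the version you install):

```shell
# Inside the training container: write a GCSFuse config file that enables
# the file cache (requires GCSFuse v2.0.0+), then mount the bucket with it.
cat > /etc/gcsfuse-config.yaml <<'EOF'
# cache-dir should point at fast local storage (e.g. local SSD)
cache-dir: /tmp/gcsfuse-cache
file-cache:
  # -1 means no size limit (bounded only by space under cache-dir)
  max-size-mb: -1
  cache-file-for-range-read: true
EOF

mkdir -p /tmp/gcsfuse-cache /gcs/my-training-bucket
gcsfuse --config-file /etc/gcsfuse-config.yaml \
        my-training-bucket /gcs/my-training-bucket
```

Mounting at `/gcs/<bucket-name>` keeps the path convention the training code already uses, though, as noted below, Vertex AI may override mounts at that location.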

> won't that be reconfigured by vertex AI when submitting the job?

This is outside my area of expertise, but I can try to guess. If you install GCSFuse in your training container and mount your bucket at /gcs/<bucket-name>/, Vertex AI might override that mount, but I am not certain.

@marcoa6
Collaborator

marcoa6 commented Apr 8, 2024

Hi @miguelalba96, Vertex AI currently ships GCSFuse v1.x, which has stat and type caching enabled by default. The new file-cache feature is only available in GCSFuse v2, which has not been rolled out by Vertex AI yet. We can let you know once it's available, but I also suggest opening a ticket/feature request with the Vertex AI team directly so they can track this.

@gargnitingoogle
Collaborator

Lowering priority to P2, as the upgrade of Vertex AI to GCSFuse v2 is already planned and is outside the scope of the GCSFuse team.

@gargnitingoogle gargnitingoogle added p2 P2 and removed p1 P1 labels Apr 10, 2024
@gargnitingoogle
Collaborator

Wanted to update here that the Vertex AI training upgrade to GCSFuse v2.0.0 is now complete. The Vertex AI training team is now working on enabling the GCSFuse file-cache feature in training jobs. Once that completes, the original problem that @miguelalba96 faced might be fixed. Though, AFAIK, Vertex AI won't be giving users controls to configure file-cache parameters as Miguel asked.

@tiagovrtr

> Wanted to update here that VertexAI training upgrade to GCSFuse v2.0.0 is now complete. The VertexAI training team is now working on enabling GCSFuse file-cache feature in training jobs. Once that completes, the original problem that @miguelalba96 faced might get fixed. Though, AFAIK, VertexAI won't be providing users controls to configure file-cache feature parameters as Miguel asked.

I'm keen to be able to use the file cache on the /gcs/ mount point. Can we get updates on this somewhere? Thanks!

@marcoa6
Collaborator

marcoa6 commented May 9, 2024

@tiagovrtr, the Vertex AI team is working on the integration but doesn't have a timeline yet. Will report back.
