
[DistDialect] Add pir nd_mesh reshard function #64223

Merged
merged 9 commits into PaddlePaddle:develop on May 14, 2024

Conversation

Contributor

@pkuzyc pkuzyc commented May 11, 2024

PR Category

Auto Parallel

PR Types

New features

Description

Pcard-67164
Add the nd_mesh reshard function in PIR, e.g. [Shard(0), Shard(1)] --> [Shard(1), Shard(0)] (a sketch of the decomposition follows the steps below):

  1. Find the tensor dimensions where the dims_mapping values differ between src_dist_attr and dst_dist_attr.

  2. From the highest tensor dimension to the lowest, convert the input tensor's non-replicated dimensions found in step 1 to replicated ([Shard(0), Shard(1)] --> [Shard(0), Replicate()] --> [Replicate(), Replicate()]):

    • Generate the 1-D sub mesh and dims_mapping of the tensor dim to be converted.
    • Call the corresponding 1-D reshard function to convert the non-replicated dimension to replicated.
  3. Convert the replicated dimensions from step 2 to the status specified in dst_dist_attr ([Replicate(), Replicate()] --> [Replicate(), Shard(0)] --> [Shard(1), Shard(0)]):

    • Generate the 1-D sub mesh and dims_mapping of the tensor dim to be converted according to the status in dst_dist_attr.
    • Call the corresponding 1-D reshard function to convert the replicated dimension to non-replicated.
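
The same decomposition can be written as a small self-contained sketch. Everything below is illustrative only and not Paddle's internal API: the function name nd_mesh_reshard_plan and the (func, tensor_dim, mesh_dim) step tuples are hypothetical, and dims_mapping follows the usual convention that dims_mapping[i] is the mesh dimension sharding tensor dimension i, or -1 if that dimension is replicated.

def nd_mesh_reshard_plan(src_dims_mapping, dst_dims_mapping):
    """Illustrative plan of 1-D reshard steps for an nd_mesh reshard."""
    # Step 1: tensor dims whose mapping differs between src and dst.
    diff_dims = [
        i
        for i, (s, d) in enumerate(zip(src_dims_mapping, dst_dims_mapping))
        if s != d
    ]

    steps, cur = [], list(src_dims_mapping)

    # Step 2: from the highest differing tensor dim to the lowest, convert
    # every sharded dim to replicated with a 1-D s_to_r on its sub mesh dim.
    for i in sorted(diff_dims, reverse=True):
        if cur[i] != -1:
            steps.append(("s_to_r", i, cur[i]))  # (1-D func, tensor dim, mesh dim)
            cur[i] = -1

    # Step 3: convert the replicated dims to the dst status with 1-D r_to_s.
    for i in diff_dims:
        if dst_dims_mapping[i] != -1:
            steps.append(("r_to_s", i, dst_dims_mapping[i]))
            cur[i] = dst_dims_mapping[i]

    assert cur == list(dst_dims_mapping)
    return steps

# [Shard(0), Shard(1)] --> [Shard(1), Shard(0)] on a 2-D mesh corresponds to
# dims_mapping [0, 1] --> [1, 0]:
print(nd_mesh_reshard_plan([0, 1], [1, 0]))
# [('s_to_r', 1, 1), ('s_to_r', 0, 0), ('r_to_s', 0, 1), ('r_to_s', 1, 0)]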


paddle-bot bot commented May 11, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

Contributor

@JZ-LIANG JZ-LIANG left a comment


LGTM

self.run_pr_to_rs_case()
self.run_pr_to_ss_case()
self.run_ss_to_ss_case()

Contributor


Add tests for ss_to_rr and rs_to_sr (shard on the same mesh dim and on a different mesh dim) in the future.
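
For reference, the case names use the usual p/r/s shorthand (Partial/Replicate/Shard, one letter per mesh dimension). A rough, hypothetical reading of the existing and suggested cases follows; the exact tensor dims passed to Shard(...) are illustrative, not taken from the test:

from paddle.distributed import Partial, Replicate, Shard

pr_to_rs = ([Partial(), Replicate()], [Replicate(), Shard(0)])   # existing case
ss_to_ss = ([Shard(0), Shard(1)], [Shard(1), Shard(0)])          # existing case
ss_to_rr = ([Shard(0), Shard(1)], [Replicate(), Replicate()])    # suggested
rs_to_sr = ([Replicate(), Shard(0)], [Shard(1), Replicate()])    # suggested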

dist_input, self._mesh1, output_placements
)
dist_program = main_program.clone()
apply_reshard_pass(dist_program)
Contributor


The cross-mesh scenario should call apply_partition_pass first to remove operations that do not belong to the current rank. (Will be fixed in the next PR.)

# Calculate the local shape. In nd_mesh_reshard, multiple tensor axes
# may be sharded, and this 1-D s_to_r function is called on each such
# axis. In this case, we should recompute the local and global shape.
out_local_shape = list(in_value.shape)
Contributor


It might be better to regard the intermediate results within the reshard transformation as local tensors, so there is no need to assign a dist_attr to them.
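
As a concrete illustration of the shape recomputation described in the code comment quoted above, here is a minimal sketch; the function name and signature are hypothetical, assuming the 1-D s_to_r is an all-gather along the split axis:

def s_to_r_local_shape(in_local_shape, split_axis, mesh_dim_size):
    # After all-gathering along `split_axis` over a 1-D sub mesh of size
    # `mesh_dim_size`, the local size on that axis grows by that factor.
    out_local_shape = list(in_local_shape)
    out_local_shape[split_axis] = in_local_shape[split_axis] * mesh_dim_size
    return out_local_shape

# A tensor with global shape [8, 8] placed as [Shard(0), Shard(1)] on a 2x2 mesh
# has local shape [4, 4]; converting axis 1 to Replicate gives local shape [4, 8].
print(s_to_r_local_shape([4, 4], split_axis=1, mesh_dim_size=2))  # [4, 8]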

test_pir_reshard_nd_mesh_func MODULES test_pir_reshard_nd_mesh_func ENVS
"http_proxy=;https_proxy=;PYTHONPATH=../..:${PADDLE_BINARY_DIR}/python")
set_tests_properties(test_pir_reshard_nd_mesh_func
PROPERTIES TIMEOUT "35" LABELS "RUN_TYPE=HYBRID")
Contributor


A timeout of 35 might be too short; it is better to set the timeout to 90 for 4-GPU unit tests, taking into account the overhead of communicator initialization.

Contributor Author


In a local test it takes about 10s; the timeout can be increased when more cases are added in the future.

@zhiqiu zhiqiu merged commit 2fa77ad into PaddlePaddle:develop May 14, 2024
30 of 31 checks passed