diff --git a/google/cloud/dataproc_v1/proto/autoscaling_policies.proto b/google/cloud/dataproc_v1/proto/autoscaling_policies.proto
index 8d10a86f..bec577e4 100644
--- a/google/cloud/dataproc_v1/proto/autoscaling_policies.proto
+++ b/google/cloud/dataproc_v1/proto/autoscaling_policies.proto
@@ -170,7 +170,7 @@ message BasicYarnAutoscalingConfig {
   // aggressive scaling). A scale-up factor closer to 0 will result in a smaller
   // magnitude of scaling up (less aggressive scaling).
   // See [How autoscaling
-  // works](/dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works)
+  // works](https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works)
   // for more information.
   //
   // Bounds: [0.0, 1.0].
@@ -182,7 +182,7 @@ message BasicYarnAutoscalingConfig {
   // update (more aggressive scaling). A scale-down factor of 0 disables
   // removing workers, which can be beneficial for autoscaling a single job.
   // See [How autoscaling
-  // works](/dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works)
+  // works](https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works)
   // for more information.
   //
   // Bounds: [0.0, 1.0].
diff --git a/google/cloud/dataproc_v1/proto/autoscaling_policies_pb2.py b/google/cloud/dataproc_v1/proto/autoscaling_policies_pb2.py
index 597ba982..22c0273a 100644
--- a/google/cloud/dataproc_v1/proto/autoscaling_policies_pb2.py
+++ b/google/cloud/dataproc_v1/proto/autoscaling_policies_pb2.py
@@ -885,7 +885,8 @@
           memory remaining after the update (more aggressive
           scaling). A scale-up factor closer to 0 will result in a
           smaller magnitude of scaling up (less aggressive scaling).
           See `How autoscaling
-          works </dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works>`__
+          works <https://cloud.google.com/dataproc/docs/concepts/configuring-
+          clusters/autoscaling#how_autoscaling_works>`__
           for more information. Bounds: [0.0, 1.0].
         scale_down_factor:
@@ -895,7 +896,8 @@
           available memory remaining after the update (more
           aggressive scaling). A scale-down factor of 0 disables
           removing workers, which can be beneficial for autoscaling
           a single job. See `How
-          autoscaling works </dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works>`__
+          autoscaling works <https://cloud.google.com/dataproc/docs/concepts/configuring-
+          clusters/autoscaling#how_autoscaling_works>`__
           for more information. Bounds: [0.0, 1.0].
         scale_up_min_worker_fraction:
diff --git a/google/cloud/dataproc_v1/proto/clusters_pb2.py b/google/cloud/dataproc_v1/proto/clusters_pb2.py
index d23317d5..845ea5a5 100644
--- a/google/cloud/dataproc_v1/proto/clusters_pb2.py
+++ b/google/cloud/dataproc_v1/proto/clusters_pb2.py
@@ -3590,11 +3590,11 @@
           completed. By default, executables are run on master and all
           worker nodes. You can test a node’s ``role`` metadata to run
           an executable on a master or worker node, as shown below using
-          ``curl`` (you can also use ``wget``): :: ROLE=$(curl -H
-          Metadata-Flavor:Google http://metadata/computeMetadata/v1/i
-          nstance/attributes/dataproc-role) if [[ "${ROLE}" ==
-          'Master' ]]; then ... master specific actions ... else
-          ... worker specific actions ... fi
+          ``curl`` (you can also use ``wget``): ROLE=\ :math:`(curl -H
+          Metadata-Flavor:Google http://metadata/computeMetadata/v1/ins
+          tance/attributes/dataproc-role) if [[ "`\ {ROLE}" == ‘Master’
+          ]]; then … master specific actions … else … worker specific
+          actions … fi
         encryption_config:
           Optional. Encryption settings for the cluster.
         autoscaling_config:
diff --git a/synth.metadata b/synth.metadata
index 29458e54..6045f07e 100644
--- a/synth.metadata
+++ b/synth.metadata
@@ -4,14 +4,15 @@
       "git": {
         "name": ".",
         "remote": "https://github.com/googleapis/python-dataproc.git",
-        "sha": "29e42dc71aa02e38bf7a5d83cc6a13e8487a48c2"
+        "sha": "69424f2d7735fece95620520fc83ecf88bde7fbb"
       }
     },
     {
       "git": {
-        "name": "synthtool",
-        "remote": "https://github.com/googleapis/synthtool.git",
-        "sha": "5f2f711c91199ba2f609d3f06a2fe22aee4e5be3"
+        "name": "googleapis",
+        "remote": "https://github.com/googleapis/googleapis.git",
+        "sha": "6fd07563a2f1a6785066f5955ad9659a315e4492",
+        "internalRef": "324941614"
       }
     },
     {