fix(storagetransfer): update the API
#### storagetransfer:v1

The following keys were changed:
- schemas.AwsS3Data.properties.roleArn.description
- schemas.ErrorSummary.properties.errorLogEntries.description
- schemas.HttpData.description
- schemas.NotificationConfig.description
- schemas.NotificationConfig.properties.pubsubTopic.description
- schemas.ObjectConditions.description
- schemas.ObjectConditions.properties.lastModifiedBefore.description
- schemas.Schedule.properties.scheduleEndDate.description
- schemas.Schedule.properties.scheduleStartDate.description
- schemas.TransferJob.properties.name.description
- schemas.TransferJob.properties.schedule.description
- schemas.TransferJob.properties.status.enumDescriptions
- schemas.TransferOptions.properties.overwriteObjectsAlreadyExistingInSink.description
yoshi-automation authored and JustinBeckwith committed Jul 8, 2021
1 parent f8c1b32 commit a09f4f3
Showing 2 changed files with 27 additions and 27 deletions.
discovery/storagetransfer-v1.json: 30 changes (15 additions & 15 deletions)
@@ -434,7 +434,7 @@
}
}
},
"revision": "20210624",
"revision": "20210701",
"rootUrl": "https://storagetransfer.googleapis.com/",
"schemas": {
"AwsAccessKey": {
@@ -469,7 +469,7 @@
"type": "string"
},
"roleArn": {
"description": "Input only. Role arn to support temporary credentials via AssumeRoleWithWebIdentity. When role arn is provided, transfer service will fetch temporary credentials for the session using AssumeRoleWithWebIdentity call for the provided role using the [GoogleServiceAccount] for this project.",
"description": "Input only. The Amazon Resource Name (ARN) of the role to support temporary credentials via `AssumeRoleWithWebIdentity`. For more information about ARNs, see [IAM ARNs](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns). When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a `AssumeRoleWithWebIdentity` call for the provided role using the GoogleServiceAccount for this project.",
"type": "string"
}
},
@@ -613,7 +613,7 @@
"type": "string"
},
"errorLogEntries": {
"description": "Error samples. At most 5 error log entries will be recorded for a given error code for a single transfer operation.",
"description": "Error samples. At most 5 error log entries are recorded for a given error code for a single transfer operation.",
"items": {
"$ref": "ErrorLogEntry"
},
@@ -653,7 +653,7 @@
"type": "object"
},
"HttpData": {
"description": "An HttpData resource specifies a list of objects on the web to be transferred over HTTP. The information of the objects to be transferred is contained in a file referenced by a URL. The first line in the file must be `\"TsvHttpData-1.0\"`, which specifies the format of the file. Subsequent lines specify the information of the list of objects, one object per list entry. Each entry has the following tab-delimited fields: * **HTTP URL** — The location of the object. * **Length** — The size of the object in bytes. * **MD5** — The base64-encoded MD5 hash of the object. For an example of a valid TSV file, see [Transferring data from URLs](https://cloud.google.com/storage-transfer/docs/create-url-list). When transferring data based on a URL list, keep the following in mind: * When an object located at `http(s)://hostname:port/` is transferred to a data sink, the name of the object at the data sink is `/`. * If the specified size of an object does not match the actual size of the object fetched, the object will not be transferred. * If the specified MD5 does not match the MD5 computed from the transferred bytes, the object transfer will fail. * Ensure that each URL you specify is publicly accessible. For example, in Cloud Storage you can [share an object publicly] (/storage/docs/cloud-console#_sharingdata) and get a link to it. * Storage Transfer Service obeys `robots.txt` rules and requires the source HTTP server to support `Range` requests and to return a `Content-Length` header in each response. * ObjectConditions have no effect when filtering objects to transfer.",
"description": "An HttpData resource specifies a list of objects on the web to be transferred over HTTP. The information of the objects to be transferred is contained in a file referenced by a URL. The first line in the file must be `\"TsvHttpData-1.0\"`, which specifies the format of the file. Subsequent lines specify the information of the list of objects, one object per list entry. Each entry has the following tab-delimited fields: * **HTTP URL** — The location of the object. * **Length** — The size of the object in bytes. * **MD5** — The base64-encoded MD5 hash of the object. For an example of a valid TSV file, see [Transferring data from URLs](https://cloud.google.com/storage-transfer/docs/create-url-list). When transferring data based on a URL list, keep the following in mind: * When an object located at `http(s)://hostname:port/` is transferred to a data sink, the name of the object at the data sink is `/`. * If the specified size of an object does not match the actual size of the object fetched, the object is not transferred. * If the specified MD5 does not match the MD5 computed from the transferred bytes, the object transfer fails. * Ensure that each URL you specify is publicly accessible. For example, in Cloud Storage you can [share an object publicly] (/storage/docs/cloud-console#_sharingdata) and get a link to it. * Storage Transfer Service obeys `robots.txt` rules and requires the source HTTP server to support `Range` requests and to return a `Content-Length` header in each response. * ObjectConditions have no effect when filtering objects to transfer.",
"id": "HttpData",
"properties": {
"listUrl": {
@@ -700,7 +700,7 @@
"type": "object"
},
"NotificationConfig": {
"description": "Specification to configure notifications published to Cloud Pub/Sub. Notifications will be published to the customer-provided topic using the following `PubsubMessage.attributes`: * `\"eventType\"`: one of the EventType values * `\"payloadFormat\"`: one of the PayloadFormat values * `\"projectId\"`: the project_id of the `TransferOperation` * `\"transferJobName\"`: the transfer_job_name of the `TransferOperation` * `\"transferOperationName\"`: the name of the `TransferOperation` The `PubsubMessage.data` will contain a TransferOperation resource formatted according to the specified `PayloadFormat`.",
"description": "Specification to configure notifications published to Pub/Sub. Notifications are published to the customer-provided topic using the following `PubsubMessage.attributes`: * `\"eventType\"`: one of the EventType values * `\"payloadFormat\"`: one of the PayloadFormat values * `\"projectId\"`: the project_id of the `TransferOperation` * `\"transferJobName\"`: the transfer_job_name of the `TransferOperation` * `\"transferOperationName\"`: the name of the `TransferOperation` The `PubsubMessage.data` contains a TransferOperation resource formatted according to the specified `PayloadFormat`.",
"id": "NotificationConfig",
"properties": {
"eventTypes": {
@@ -737,14 +737,14 @@
"type": "string"
},
"pubsubTopic": {
"description": "Required. The `Topic.name` of the Cloud Pub/Sub topic to which to publish notifications. Must be of the format: `projects/{project}/topics/{topic}`. Not matching this format will result in an INVALID_ARGUMENT error.",
"description": "Required. The `Topic.name` of the Pub/Sub topic to which to publish notifications. Must be of the format: `projects/{project}/topics/{topic}`. Not matching this format results in an INVALID_ARGUMENT error.",
"type": "string"
}
},
"type": "object"
},
"ObjectConditions": {
"description": "Conditions that determine which objects will be transferred. Applies only to Cloud Data Sources such as S3, Azure, and Cloud Storage. The \"last modification time\" refers to the time of the last change to the object's content or metadata — specifically, this is the `updated` property of Cloud Storage objects, the `LastModified` field of S3 objects, and the `Last-Modified` header of Azure blobs.",
"description": "Conditions that determine which objects are transferred. Applies only to Cloud Data Sources such as S3, Azure, and Cloud Storage. The \"last modification time\" refers to the time of the last change to the object's content or metadata — specifically, this is the `updated` property of Cloud Storage objects, the `LastModified` field of S3 objects, and the `Last-Modified` header of Azure blobs.",
"id": "ObjectConditions",
"properties": {
"excludePrefixes": {
@@ -762,7 +762,7 @@
"type": "array"
},
"lastModifiedBefore": {
"description": "If specified, only objects with a \"last modification time\" before this timestamp and objects that don't have a \"last modification time\" will be transferred.",
"description": "If specified, only objects with a \"last modification time\" before this timestamp and objects that don't have a \"last modification time\" are transferred.",
"format": "google-datetime",
"type": "string"
},
@@ -857,11 +857,11 @@
},
"scheduleEndDate": {
"$ref": "Date",
"description": "The last day a transfer runs. Date boundaries are determined relative to UTC time. A job will run once per 24 hours within the following guidelines: * If `schedule_end_date` and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If `schedule_end_date` is later than `schedule_start_date` and `schedule_end_date` is in the future relative to UTC, the job will run each day at start_time_of_day through `schedule_end_date`."
"description": "The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If `schedule_end_date` and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If `schedule_end_date` is later than `schedule_start_date` and `schedule_end_date` is in the future relative to UTC, the job runs each day at start_time_of_day through `schedule_end_date`."
},
"scheduleStartDate": {
"$ref": "Date",
"description": "Required. The start date of a transfer. Date boundaries are determined relative to UTC time. If `schedule_start_date` and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. **Note:** When starting jobs at or near midnight UTC it is possible that a job will start later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it will create a TransferJob with `schedule_start_date` set to June 2 and a `start_time_of_day` set to midnight UTC. The first scheduled TransferOperation will take place on June 3 at midnight UTC."
"description": "Required. The start date of a transfer. Date boundaries are determined relative to UTC time. If `schedule_start_date` and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. **Note:** When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with `schedule_start_date` set to June 2 and a `start_time_of_day` set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC."
},
"startTimeOfDay": {
"$ref": "TimeOfDay",
@@ -1042,7 +1042,7 @@
"type": "string"
},
"name": {
"description": "A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service will assign a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with `\"transferJobs/\"` prefix and end with a letter or a number, and should be no more than 128 characters. This name must not start with 'transferJobs/OPI'. 'transferJobs/OPI' is a reserved prefix. Example: `\"transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$\"` Invalid job names will fail with an INVALID_ARGUMENT error.",
"description": "A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with `\"transferJobs/\"` prefix and end with a letter or a number, and should be no more than 128 characters. This name must not start with 'transferJobs/OPI'. 'transferJobs/OPI' is a reserved prefix. Example: `\"transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$\"` Invalid job names fail with an INVALID_ARGUMENT error.",
"type": "string"
},
"notificationConfig": {
@@ -1055,7 +1055,7 @@
},
"schedule": {
"$ref": "Schedule",
"description": "Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job will never execute a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule."
"description": "Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule."
},
"status": {
"description": "Status of the job. This value MUST be specified for `CreateTransferJobRequests`. **Note:** The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.",
@@ -1067,8 +1067,8 @@
],
"enumDescriptions": [
"Zero is an illegal value.",
"New transfers will be performed based on the schedule.",
"New transfers will not be scheduled.",
"New transfers are performed based on the schedule.",
"New transfers are not scheduled.",
"This is a soft delete state. After a transfer job is set to this state, the job and all the transfer executions are subject to garbage collection. Transfer jobs become eligible for garbage collection 30 days after their status is set to `DELETED`."
],
"type": "string"
@@ -1163,7 +1163,7 @@
"type": "boolean"
},
"overwriteObjectsAlreadyExistingInSink": {
"description": "When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are ovewritten. If true, all objects in the sink whose name matches an object in the source will be overwritten with the source object.",
"description": "When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are ovewritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.",
"type": "boolean"
}
},
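
For readers looking at this diff, here is a minimal sketch of how the fields whose descriptions changed (notably `roleArn`, `pubsubTopic`, the schedule dates, and `overwriteObjectsAlreadyExistingInSink`) are exercised through the Node.js `googleapis` client. It is not part of this commit; the project ID, bucket names, role ARN, and topic are placeholders, and error handling is kept to a single catch.

```ts
import {google} from 'googleapis';

async function createTransferJob() {
  // Application Default Credentials with the cloud-platform scope.
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const storagetransfer = google.storagetransfer({version: 'v1', auth});

  // Placeholder values; substitute your own project, buckets, role, and topic.
  const res = await storagetransfer.transferJobs.create({
    requestBody: {
      projectId: 'my-project',
      status: 'ENABLED',
      transferSpec: {
        awsS3DataSource: {
          bucketName: 'my-aws-bucket',
          // Input only: ARN of the role used via AssumeRoleWithWebIdentity.
          roleArn: 'arn:aws:iam::123456789012:role/my-transfer-role',
        },
        gcsDataSink: {bucketName: 'my-gcs-bucket'},
        // Overwrite sink objects whose names match source objects.
        transferOptions: {overwriteObjectsAlreadyExistingInSink: true},
      },
      // Runs once per 24 hours from the start date through the end date (UTC).
      schedule: {
        scheduleStartDate: {year: 2021, month: 7, day: 9},
        scheduleEndDate: {year: 2021, month: 7, day: 31},
      },
      // Notifications are published to the customer-provided Pub/Sub topic.
      notificationConfig: {
        pubsubTopic: 'projects/my-project/topics/transfer-events',
        payloadFormat: 'JSON',
        eventTypes: ['TRANSFER_OPERATION_SUCCESS', 'TRANSFER_OPERATION_FAILED'],
      },
    },
  });
  // The service assigns a unique name with the "transferJobs/" prefix.
  console.log(res.data.name);
}

createTransferJob().catch(console.error);
```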