feat: add storage.upload(path) #269

Merged
merged 14 commits on Jun 25, 2020
14 changes: 14 additions & 0 deletions google-cloud-storage/clirr-ignored-differences.xml
@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- see http://www.mojohaus.org/clirr-maven-plugin/examples/ignored-differences.html -->
<differences>
<difference>
<differenceType>7012</differenceType>
<className>com/google/cloud/storage/Storage</className>
<method>void upload(*.BlobInfo, java.nio.file.Path, *.Storage$BlobWriteOption[])</method>
</difference>
<difference>
<differenceType>7012</differenceType>
<className>com/google/cloud/storage/Storage</className>
<method>void upload(*.BlobInfo, java.io.InputStream, *.Storage$BlobWriteOption[])</method>
</difference>
</differences>
@@ -37,9 +37,11 @@
import com.google.common.collect.Iterables;
import com.google.common.collect.Lists;
import com.google.common.io.BaseEncoding;
import java.io.IOException;
import java.io.InputStream;
import java.io.Serializable;
import java.net.URL;
import java.nio.file.Path;
import java.security.Key;
import java.util.Arrays;
import java.util.Collections;
@@ -1811,6 +1813,113 @@ Blob create(
@Deprecated
Blob create(BlobInfo blobInfo, InputStream content, BlobWriteOption... options);

/**
* Uploads {@code path} to the blob using {@link #writer}. By default any MD5 and CRC32C values in
* the given {@code blobInfo} are ignored unless requested via the {@link
* BlobWriteOption#md5Match()} and {@link BlobWriteOption#crc32cMatch()} options. Folder upload is
* not supported.
*
* <p>Example of uploading a file:
*
* <pre>{@code
* String bucketName = "my-unique-bucket";
* String fileName = "readme.txt";
* BlobId blobId = BlobId.of(bucketName, fileName);
* BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
* storage.upload(blobInfo, Paths.get(fileName));
* }</pre>
*
* @param blobInfo blob to create
* @param path file to upload
* @param options blob write options
* @throws IOException on I/O error
* @throws StorageException on failure
* @see #upload(BlobInfo, Path, int, BlobWriteOption...)
*/
void upload(BlobInfo blobInfo, Path path, BlobWriteOption... options) throws IOException;
Member
Blob should be returned from upload() methods similar to create() methods.

Contributor Author
Returning a Blob requires an extra RPC request to be issued, which is not always needed.
I can add an uploadFrom() method to the Blob class (symmetric to downloadTo()) to return an updated Blob.

Contributor Author
A month ago I compared upload/download implemented in various ways for several types of files. This is the summary:

Member
Thanks @dmitry-fa,

I was thinking about the issue you filed: #149

When the upload is complete GCS responds with object metadata so we don't need to perform an additional get request. flushBuffer doesn't return a value so we'd diverge from the spec a bit to get the metadata.

Contributor Author (@dmitry-fa, Apr 18, 2020)

Do you mean we can update the StorageRpc interface to make void write(uploadId, ...) return a StorageObject?

Contributor Author

> 1 nit so far, added @JesseLovelace for surface review as well.
> Blob should be returned from upload() methods similar to create() methods.

It's not a nit at all :) It's a significant change. I agree with you: upload() should return a Blob.
I see the following ways to implement it:

  1. Update the StorageRpc interface and make write() return a StorageObject instance, or null if the write is not the final request for the given upload. Currently StorageRpc.write() returns null.

  2. Update HttpStorageRpc.write() to remember the StorageObject and return this object from a separate method.
    In this case BlobWriteChannel.flushBuffer() will look like:

public void run() {
    StorageRpc rpc = getOptions().getStorageRpcV1();
    rpc.write(getUploadId(), getBuffer(), 0, getPosition(), length, last);
    if (last && rpc instanceof HttpStorageRpc) {
        this.uploadedBlob = ((HttpStorageRpc) rpc).getUploadedBlob();
    } else {
        this.uploadedBlob = null; // let storage.upload() resolve it
    }
}
  3. Issue an extra RPC per upload to return a Blob.

I believe 2 is not the Google way, so we should go with 1. Option 3 could also be considered as a temporary solution, since it can later be evolved into 1.

WDYT?

Contributor Author

Hi @frankyn,

> When the upload is complete GCS responds with object metadata so we don't need to perform an additional get request.

I managed to implement upload based on your hint: I modified StorageRpc.write() to return a StorageObject instead of void.
Two issues were found:

  • GCS does not respond with object metadata if the writer was created by storage.writer(signedUrl).
    Not sure if it's a bug or a feature; it looks like a bug, since the spec says nothing about it. I worked around it.

  • Changing StorageRpc affects storage-nio; the Kokoro - Test: Linkage Monitor check fails:
    (com.google.cloud:google-cloud-nio:0.120.0-alpha) com.google.cloud.storage.contrib.nio.testing.FakeStorageRpc's method write(String arg1, byte[] arg2, int arg3, long arg4, int arg6, boolean arg7) is not implemented in the class referenced from com.google.cloud.storage.spi.v1.StorageRpc (com.google.cloud:google-cloud-storage:1.107.1-SNAPSHOT)

cc: @JesseLovelace


/**
* Uploads {@code path} to the blob using {@link #writer} and {@code bufferSize}. By default any
* MD5 and CRC32C values in the given {@code blobInfo} are ignored unless requested via the {@link
* BlobWriteOption#md5Match()} and {@link BlobWriteOption#crc32cMatch()} options. Folder upload is
* not supported.
*
* <p>{@link #upload(BlobInfo, Path, BlobWriteOption...)} invokes this method with a buffer size
* of 15 MiB. Users can pass alternative values. Larger buffer sizes might improve the upload
* performance but require more memory. This can cause an OutOfMemoryError or add significant
* garbage collection overhead. Smaller buffer sizes reduce memory consumption, which is noticeable
* when uploading many objects in parallel. Buffer sizes less than 256 KiB are treated as 256 KiB.
*
* <p>Example of uploading a humongous file:
*
* <pre>{@code
* BlobId blobId = BlobId.of(bucketName, blobName);
* BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("video/webm").build();
*
* int largeBufferSize = 150 * 1024 * 1024;
* Path file = Paths.get("humongous.file");
* storage.upload(blobInfo, file, largeBufferSize);
* }</pre>
*
* @param blobInfo blob to create
* @param path file to upload
* @param bufferSize size of the buffer for I/O operations
* @param options blob write options
* @throws IOException on I/O error
* @throws StorageException on failure
*/
void upload(BlobInfo blobInfo, Path path, int bufferSize, BlobWriteOption... options)
throws IOException;
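The buffer-size rules described in the Javadoc above (15 MiB default, 256 KiB floor) can be illustrated with a small standalone sketch. The constant names mirror `DEFAULT_BUFFER_SIZE` and `MIN_BUFFER_SIZE` from `StorageImpl` later in this diff; the class itself is hypothetical:

```java
public class BufferSizeDemo {
  // Mirrors the constants declared in StorageImpl in this PR.
  static final int DEFAULT_BUFFER_SIZE = 15 * 1024 * 1024; // 15 MiB
  static final int MIN_BUFFER_SIZE = 256 * 1024; // 256 KiB

  // Requested sizes below the floor are clamped up, as the Javadoc promises;
  // larger sizes are honored as-is.
  static int effectiveBufferSize(int requested) {
    return Math.max(requested, MIN_BUFFER_SIZE);
  }

  public static void main(String[] args) {
    System.out.println(effectiveBufferSize(1024)); // prints 262144 (clamped)
    System.out.println(effectiveBufferSize(150 * 1024 * 1024)); // prints 157286400
  }
}
```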

/**
* Reads bytes from an input stream and uploads those bytes to the blob using {@link #writer}. By
* default any MD5 and CRC32C values in the given {@code blobInfo} are ignored unless requested
* via the {@link BlobWriteOption#md5Match()} and {@link BlobWriteOption#crc32cMatch()} options.
*
* <p>Example of uploading data with CRC32C checksum:
*
* <pre>{@code
* BlobId blobId = BlobId.of(bucketName, blobName);
* byte[] content = "Hello, world".getBytes(StandardCharsets.UTF_8);
* Hasher hasher = Hashing.crc32c().newHasher().putBytes(content);
* String crc32c = BaseEncoding.base64().encode(Ints.toByteArray(hasher.hash().asInt()));
* BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setCrc32c(crc32c).build();
* storage.upload(blobInfo, new ByteArrayInputStream(content), Storage.BlobWriteOption.crc32cMatch());
* }</pre>
*
* @param blobInfo blob to create
* @param content input stream to read from
* @param options blob write options
* @throws IOException on I/O error
* @throws StorageException on failure
* @see #upload(BlobInfo, InputStream, int, BlobWriteOption...)
*/
void upload(BlobInfo blobInfo, InputStream content, BlobWriteOption... options)
throws IOException;
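The Javadoc example above computes the checksum with Guava. As a sketch of the same computation using only the JDK (assuming Java 9+ for `java.util.zip.CRC32C`; the class name `Crc32cDemo` is made up for illustration), GCS expects the base64 encoding of the CRC32C value in big-endian byte order, which should match what `Ints.toByteArray(hasher.hash().asInt())` produces in the Guava version:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.CRC32C;

public class Crc32cDemo {
  // Base64-encodes the CRC32C value in big-endian byte order, the format
  // BlobInfo.setCrc32c() expects.
  static String gcsCrc32c(byte[] content) {
    CRC32C crc = new CRC32C(); // JDK 9+
    crc.update(content, 0, content.length);
    int value = (int) crc.getValue();
    byte[] bigEndian = {
      (byte) (value >>> 24), (byte) (value >>> 16), (byte) (value >>> 8), (byte) value
    };
    return Base64.getEncoder().encodeToString(bigEndian);
  }

  public static void main(String[] args) {
    byte[] content = "Hello, world".getBytes(StandardCharsets.UTF_8);
    System.out.println(gcsCrc32c(content)); // an 8-character base64 string
  }
}
```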

/**
* Reads bytes from an input stream and uploads those bytes to the blob using {@link #writer} and
* {@code bufferSize}. By default any MD5 and CRC32C values in the given {@code blobInfo} are
* ignored unless requested via the {@link BlobWriteOption#md5Match()} and {@link
* BlobWriteOption#crc32cMatch()} options.
*
* <p>{@link #upload(BlobInfo, InputStream, BlobWriteOption...)} invokes this method with a
* buffer size of 15 MiB. Users can pass alternative values. Larger buffer sizes might improve the
* upload performance but require more memory. This can cause an OutOfMemoryError or add
* significant garbage collection overhead. Smaller buffer sizes reduce memory consumption, which
* is noticeable when uploading many objects in parallel. Buffer sizes less than 256 KiB are
* treated as 256 KiB.
*
* @param blobInfo blob to create
* @param content input stream to read from
* @param bufferSize size of the buffer for I/O operations
* @param options blob write options
* @throws IOException on I/O error
* @throws StorageException on failure
*/
void upload(BlobInfo blobInfo, InputStream content, int bufferSize, BlobWriteOption... options)
throws IOException;

/**
* Returns the requested bucket or {@code null} if not found.
*
@@ -48,6 +48,7 @@
import com.google.cloud.ReadChannel;
import com.google.cloud.RetryHelper.RetryHelperException;
import com.google.cloud.Tuple;
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.Acl.Entity;
import com.google.cloud.storage.HmacKey.HmacKeyMetadata;
import com.google.cloud.storage.spi.v1.StorageRpc;
@@ -66,12 +67,18 @@
import com.google.common.io.BaseEncoding;
import com.google.common.primitives.Ints;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Collections;
import java.util.EnumMap;
@@ -92,6 +99,9 @@ final class StorageImpl extends BaseService<StorageOptions> implements Storage {

private static final String STORAGE_XML_URI_HOST_NAME = "storage.googleapis.com";

private static final int DEFAULT_BUFFER_SIZE = 15 * 1024 * 1024;
private static final int MIN_BUFFER_SIZE = 256 * 1024;

private static final Function<Tuple<Storage, Boolean>, Boolean> DELETE_FUNCTION =
new Function<Tuple<Storage, Boolean>, Boolean>() {
@Override
@@ -204,6 +214,54 @@ public StorageObject call() {
}
}

@Override
public void upload(BlobInfo blobInfo, Path path, BlobWriteOption... options) throws IOException {
upload(blobInfo, path, DEFAULT_BUFFER_SIZE, options);
}

@Override
public void upload(BlobInfo blobInfo, Path path, int bufferSize, BlobWriteOption... options)
throws IOException {
if (Files.isDirectory(path)) {
throw new StorageException(0, path + " is a directory");
}
try (InputStream input = Files.newInputStream(path)) {
upload(blobInfo, input, bufferSize, options);
}
}
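The directory guard and try-with-resources pattern above can be exercised without a storage backend. This hypothetical snippet mirrors the same checks against the local filesystem (assuming Java 9+ for `InputStream.readAllBytes`; `IllegalArgumentException` stands in for the `StorageException` thrown here):

```java
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PathUploadGuardDemo {
  // Mirrors StorageImpl.upload(BlobInfo, Path, ...): directories are rejected
  // up front, regular files are streamed via Files.newInputStream and the
  // stream is closed by try-with-resources.
  static String readAll(Path path) throws Exception {
    if (Files.isDirectory(path)) {
      throw new IllegalArgumentException(path + " is a directory");
    }
    try (InputStream input = Files.newInputStream(path)) {
      return new String(input.readAllBytes(), StandardCharsets.UTF_8);
    }
  }

  public static void main(String[] args) throws Exception {
    Path file = Files.createTempFile("demo", ".txt");
    Files.write(file, "hello".getBytes(StandardCharsets.UTF_8));
    System.out.println(readAll(file)); // prints "hello"
  }
}
```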

@Override
public void upload(BlobInfo blobInfo, InputStream content, BlobWriteOption... options)
throws IOException {
upload(blobInfo, content, DEFAULT_BUFFER_SIZE, options);
}

@Override
public void upload(
BlobInfo blobInfo, InputStream content, int bufferSize, BlobWriteOption... options)
throws IOException {
try (WriteChannel writer = writer(blobInfo, options)) {
upload(Channels.newChannel(content), writer, bufferSize);
}
}

/*
* Uploads the given content to storage using the specified write channel and the given buffer
* size. This method does not close any channels.
*/
private static void upload(ReadableByteChannel reader, WriteChannel writer, int bufferSize)
throws IOException {
bufferSize = Math.max(bufferSize, MIN_BUFFER_SIZE);
ByteBuffer buffer = ByteBuffer.allocate(bufferSize);
writer.setChunkSize(bufferSize);

while (reader.read(buffer) >= 0) {
buffer.flip();
writer.write(buffer);
buffer.clear();
}
}
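The private helper's read/flip/write/clear loop can be reproduced with plain java.nio channels and no GCS dependency. One difference hedged in this sketch: unlike the library's WriteChannel, a generic WritableByteChannel may perform partial writes, hence the inner drain loop:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.util.Arrays;
import java.util.Random;

public class ChannelCopyDemo {
  // Same structure as the private upload helper above: fill the buffer from
  // the reader, flip it, drain it into the writer, clear, repeat until EOF.
  static void copy(ReadableByteChannel reader, WritableByteChannel writer, int bufferSize)
      throws IOException {
    ByteBuffer buffer = ByteBuffer.allocate(Math.max(bufferSize, 1));
    while (reader.read(buffer) >= 0) {
      buffer.flip(); // switch from filling to draining
      while (buffer.hasRemaining()) {
        writer.write(buffer); // generic channels may write partially
      }
      buffer.clear(); // switch back to filling
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[1 << 20]; // 1 MiB of pseudo-random bytes
    new Random(42).nextBytes(data);
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    copy(Channels.newChannel(new ByteArrayInputStream(data)),
        Channels.newChannel(sink), 4096);
    System.out.println(Arrays.equals(data, sink.toByteArray())); // prints "true"
  }
}
```

The library version can omit the drain loop because WriteChannel buffers internally and accepts the whole chunk per call.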

@Override
public Bucket get(String bucket, BucketGetOption... options) {
final com.google.api.services.storage.model.Bucket bucketPb = BucketInfo.of(bucket).toPb();