Custom Backend Client Multi Part Upload not finding uploaded parts #4329

Open · lklammtMW opened this issue Mar 8, 2022 · 0 comments
Hey,

I have a question about multipart upload with a custom backend client and hope someone here can help.

I want to assemble the user-uploaded parts in the Zenko cloudserver and send them to my backend as one file. Unfortunately, I get an error when creating the filteredPartsObj in the completeMPU method. The problem is that the storedParts array in the mdInfo object passed to my completeMPU method is empty, while the jsonList object passed to completeMPU correctly contains each uploaded part's ETag and part number.

I am able to assemble the parts in the completeMPU method and send them to my backend, but I still get an error because the stored parts cannot be found, so the Zenko cloudserver can't complete the upload correctly.

This is the printed filteredPartsObj after calling filteredPartsObj = validateAndFilterMpuParts(storedParts.Contents, jsonList, mpuOverviewKey, splitter, log); in my completeMPU method (see below):
{"partList":[],"error":{"code":400,"description":"One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part's entity tag.","InvalidPart":true}}
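To make the failure concrete, here is a minimal sketch of the shapes involved, inferred from the code below rather than from Zenko's actual types: jsonList looks like the parsed CompleteMultipartUpload XML body (arrays per field, as the partObj.ETag[0] access suggests), and storedParts.Contents would normally hold one entry per uploaded part. The matcher below is a hypothetical stand-in for the check validateAndFilterMpuParts presumably performs; with an empty Contents list it rejects every part, which is exactly the InvalidPart error above.

```javascript
// Hypothetical shapes, inferred from the code in this issue, not Zenko's
// actual types. jsonList mirrors the parsed CompleteMultipartUpload body.
const jsonList = {
    Part: [
        { PartNumber: ['1'], ETag: ['"0cc175b9c0f1b6a831c399e269772661"'] },
    ],
};

// storedParts.Contents should hold one entry per uploaded part; here it is
// empty, reproducing the failure described above.
const storedParts = { Contents: [] };

// Rough stand-in for the validation: every part the client lists must have
// a stored counterpart with a matching part number.
function allPartsStored(list, stored) {
    return list.Part.every(p =>
        stored.Contents.some(s => s.partNumber === Number(p.PartNumber[0])));
}

console.log(allPartsStored(jsonList, storedParts)); // false -> InvalidPart
```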

My client's uploadPart method:

uploadPart(request, streamingV4Params, stream, size, key, uploadId, partNumber, bucketName, log, callback) {
      let hashedStream = stream;
      if (request) {
          const partStream = prepareStream(request, streamingV4Params,
              log, callback);
          hashedStream = new MD5Sum();
          partStream.pipe(hashedStream);
      }
      // Persist the part locally under "<key><partNumber>".
      const file = fs.createWriteStream(key + partNumber);
      hashedStream.pipe(file);
      file.on('finish', () => {
          const dataRetrievalInfo = {
              key: `${bucketName}/${key}`,
              // Re-reads the finished file to compute its MD5.
              dataStoreETag: md5File.sync(path.resolve(key + partNumber)),
              dataStoreType: 'mystore',
              dataStoreName: 'mystore',
          };
          return callback(null, dataRetrievalInfo);
      });
  }

My client's completeMPU method:

completeMPU(jsonList, mdInfo, key, uploadId, bucketName, log, callback) {
      const { storedParts, mpuOverviewKey, splitter } = mdInfo;
      // This is where it fails: storedParts.Contents is empty, so every
      // part in jsonList is reported as InvalidPart.
      const filteredPartsObj = validateAndFilterMpuParts(storedParts.Contents,
          jsonList, mpuOverviewKey, splitter, log);

      // Reassemble the locally stored parts into one file. Sort numerically
      // by the part-number suffix; plain lexicographic directory order would
      // put part 10 before part 2.
      const files = fs.readdirSync('.')
          .filter(a => a.startsWith(key))
          .sort((a, b) => Number(a.slice(key.length)) - Number(b.slice(key.length)));
      const parts = files.map(file => fs.readFileSync(file));
      fs.writeFileSync(key, Buffer.concat(parts));
      const body = fs.readFileSync(key);

      fetch(`${this.endpoint}/object/${bucketName}/${key}`, {
          method: 'PUT',
          headers: { Authorization: 'Bearer SOMETHINGHARDCODED' },
          body,
      }).then(response => {
          if (response.status === 200) {
              const completeObjData = {
                  key: `${bucketName}/${key}`,
                  // add in filteredPartsObj here
                  eTag: this._getEtag(jsonList.Part),
              };
              return callback(null, completeObjData);
          }
          return callback(this._getError(response.status));
      });
  }

_getEtag(partList) {
      // S3-style multipart ETag: the MD5 of the concatenated *binary* MD5
      // digests of each part, suffixed with the part count. This assumes
      // partObj.ETag[0] is the part's quoted hex ETag from the parsed XML;
      // hashing the concatenated hex strings instead would not match what
      // S3 clients expect.
      const digests = partList.map(partObj =>
          Buffer.from(partObj.ETag[0].replace(/"/g, ''), 'hex'));
      const eTag = crypto.createHash('md5')
          .update(Buffer.concat(digests))
          .digest('hex');
      return `${eTag}-${partList.length}`;
  }

Am I missing something? Is storedParts retrieved from the shadow bucket? If so, what exactly is stored in the shadow bucket, and do I need to write to it myself, or is it handled entirely by the existing Zenko cloudserver logic?

Hope someone can help. Thanks in advance! :)

Kind regards
Lukas
