
Release 5.0.0 #7940

Closed
67 of 71 tasks
ScharfViktor opened this issue Dec 12, 2023 · 54 comments

Comments

@ScharfViktor
Contributor

ScharfViktor commented Dec 12, 2023

Beta Phase

  • DEV/QA: Kickoff meeting (https://confluence.owncloud.com/display/QA/Release+5.0.0+Overview)
  • DEV/QA: Define client versions and provide list of breaking changes for desktop/mobile team Release 5.0.0 #7940 (comment)
  • DEV/QA: Check new strings and align with clients @tbsbdr @TheOneRing FYI
  • DEV/DOCS: Create list of pending docs tasks @kobergj @mmattel @dragonchaser
  • DEV: Create branch release-5.0.0-beta.1 -> FEATURE FREEZE
    • DEV: bump ocis version in necessary files
    • DEV: changelog/CHANGELOG.tmpl
    • DEV: ocis-pkg/version/version.go
    • DEV: sonar-project.properties
    • DEV: prepare changelog folder in changelog/5.0.0-beta.1_????_??_??
  • DEV: Check successful CI run on release branch
  • DEV: Create signed tag 5.0.0-beta.1
  • DEV: Check successful CI run on 5.0.0-beta.1 tag / BLOCKING for all further activity
  • DEV: Merge back beta release branch
  • DEV: https://ocis.team.owncloud.works/
    • DEV: needs snapshot and migration
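
The branch/tag steps above boil down to a handful of git commands. Here is a minimal sketch against a scratch repository (the real flow edits ocis-pkg/version/version.go, sonar-project.properties, etc. in the ocis repo and uses a GPG-signed tag, `git tag -s`):

```shell
# Sketch only: beta branch + tag flow from the checklist, run in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo" && git init -q
git config user.email release@example.com && git config user.name releaser
echo "5.0.0-beta.1" > VERSION                      # stand-in for the version bumps
git add VERSION && git commit -qm "bump ocis version to 5.0.0-beta.1"
git checkout -q -b release-5.0.0-beta.1            # -> FEATURE FREEZE
git tag -a v5.0.0-beta.1 -m "oCIS 5.0.0-beta.1"    # real release uses -s (signed)
git tag -l
```

Once the tag exists, CI runs against it and, when green, the release branch is merged back as listed above.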

QA Phase

Extra QA Topics

Standard QA

After QA Phase

  • Brief company-wide heads up via mail @tbsbdr
  • DEV: Create list of changed ENV vars and send to release-coordination@owncloud.com [docs] Changed ENV vars from code to complete list #8192
    • Variable Name
    • Introduced in version
    • Default Value
    • Description
    • dependencies on other components
  • DEV: Create branch release-x.x.x [full-ci] Release 5.0.0 #8679
    • DEV: bump ocis version in necessary files
    • DEV: ocis-pkg/version/version.go
    • DEV: sonar-project.properties
    • DEV: released deployment versions
    • DEV: prepare changelog folder in changelog/x.x.x_???
  • Release Notes + Breaking Changes @tbsbdr
  • Migration + Breaking Changes Admin Doc @mmattel
  • Migration + Breaking Changes Helm Chart Doc @wkloucek @mmattel
  • DEV: Create final signed tag
  • DEV: Check successful CI run on vx.y.z tag / BLOCKING for all further activity
  • Merge release notes

Post-release communication

  • DEV: Create a docs-stable-x.y branch based on the docs folder in the ocis repo @micbar
  • DEV: Create a x.y.z release in the ocis-helm repo (frozen state) @wkloucek
  • DEV/QA: Ping documentation in RC about the new release tag (for ocis/helm chart version bump in docs)
  • DEV/QA: Ping marketing to update all download links (download mirrors are updated at the full hour, wait with ping until download is actually available)
  • DEV/QA: Ping @hodyroff once the demo instances are running this release
  • DEV/QA: notify @michaelstingl @hosy @fmoc @jesmrec to publish client finals
  • DEV: Merge back release branch
  • DEV: Create stable-x.y branch in the ocis repo from final tag
@wkloucek
Contributor

Are there gonna be loadtests as part of the release?

@ScharfViktor
Contributor Author

Are there gonna be loadtests as part of the release?

yes, I want to do it

@micbar
Contributor

micbar commented Dec 12, 2023

Are there gonna be loadtests as part of the release?

yes, I want to do it

Please test 5.0.0-beta.1 with

  1. Redis Cache
  2. Nats used also as cache

and compare the outcome.
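
For the record, the two cache setups could look roughly like this. The variable names (`OCIS_CACHE_STORE`, `OCIS_CACHE_STORE_NODES`) and store identifiers are assumptions based on the oCIS cache configuration of that era; verify against the 5.0 admin docs:

```shell
# Sketch only (env var names are assumptions, not verified against 5.0.0-beta.1):
# (1) Redis as cache
export OCIS_CACHE_STORE=redis
export OCIS_CACHE_STORE_NODES=127.0.0.1:6379
# (2) NATS used also as cache (built-in JetStream KV store)
export OCIS_CACHE_STORE=nats-js-kv
export OCIS_CACHE_STORE_NODES=127.0.0.1:9233
```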

@micbar
Contributor

micbar commented Dec 12, 2023

@2403905 2403905 unpinned this issue Dec 13, 2023
@ScharfViktor ScharfViktor pinned this issue Dec 14, 2023
@micbar
Contributor

micbar commented Dec 15, 2023

Changelog

Changelog for v5.0.0-beta.1

Summary which could affect clients

@Salipa-Gurung
Contributor

fix confirmed in desktop client (5.2.0) #7118 (comment)

@saw-jan
Member

saw-jan commented Dec 18, 2023

Ubuntu22.04 (desktop-client 5.2.0):

  • .url files get synced
  • no application to open it

Windows 11:

  • .url files get synced - extension is not visible (expected)
  • links open in the respective app (browser, mail)

macOS: @HanaGemela Could you check it on a Mac?

@ScharfViktor Any other particular things (or OS) to test here?

@michaelstingl
Contributor

michaelstingl commented Dec 18, 2023

Ubuntu22.04 (desktop-client 5.2.0):

  • .url files get synced
  • no application to open it

Please open client docs issue. We should document this Ubuntu limitation and possible workarounds.

@saw-jan
Member

saw-jan commented Dec 18, 2023

  • Bugfix cs3org/reva#4302: Fix checking of filename length
    Should be fixed now, needs confirmation on Desktop (deep path)

File is able to sync to the server ✔️
File path: /Деснол/Разработка регламентов/Управление пользователями ИТ инфраструктуры компании/2023-05-15 Управление пользователями ИТ инфраструктуры компании.docx
Characters: 152
Bytes: 275

23-12-18 17:24:46:661 [ info sync.httplogger ]:	"e4969f87-648d-48a7-8476-a57b161bd067: Request: POST https://192.168.56.1:9200/dav/spaces/14e8cce0-4e36-414d-82e4-b3f0dd049863$56a9c34d-65da-4386-bb3c-89d2d441e0f5 Header: { X-OC-Mtime: 1702899112, Content-Type: application/offset+octet-stream, Content-Length: 13, Upload-Offset: 0, Tus-Resumable: 1.0.0, Upload-Metadata: filename L9CU0LXRgdC90L7Quy/QoNCw0LfRgNCw0LHQvtGC0LrQsCDRgNC10LPQu9Cw0LzQtdC90YLQvtCyL9Cj0L/RgNCw0LLQu9C10L3QuNC1INC/0L7Qu9GM0LfQvtCy0LDRgtC10LvRj9C80Lgg0JjQoiDQuNC90YTRgNCw0YHRgtGA0YPQutGC0YPRgNGLINC60L7QvNC/0LDQvdC40LgvMjAyMy0wNS0xNSDQo9C/0YDQsNCy0LvQtdC90LjQtSDQv9C+0LvRjNC30L7QstCw0YLQtdC70Y/QvNC4INCY0KIg0LjQvdGE0YDQsNGB0YLRgNGD0LrRgtGD0YDRiyDQutC+0LzQv9Cw0L3QuNC4LmRvY3g=,checksum U0hBMSBlNWRkODU1YzliN2E4NDI0NTNiYTY1ZGIxNmY5OWFjMzEwNDg2M2E2,mtime MTcwMjg5OTExMg==, Upload-Length: 13, Authorization: Bearer [redacted], User-Agent: Mozilla/5.0 (Linux) mirall/5.2.0.12726 (ownCloud, ubuntu-6.2.0-39-generic ClientArchitecture: x86_64 OsArchitecture: x86_64), Accept: */*, Accept-Language: en_US, X-Request-ID: e4969f87-648d-48a7-8476-a57b161bd067, Original-Request-ID: e4969f87-648d-48a7-8476-a57b161bd067, } Data: [13 bytes of application/offset+octet-stream data]"
23-12-18 17:24:47:037 [ info sync.httplogger ]:	"e4969f87-648d-48a7-8476-a57b161bd067: Response: POST 201 (375ms) https://192.168.56.1:9200/dav/spaces/14e8cce0-4e36-414d-82e4-b3f0dd049863$56a9c34d-65da-4386-bb3c-89d2d441e0f5 Header: { Access-Control-Allow-Headers: Tus-Resumable, Upload-Length, Upload-Metadata, If-Match, Access-Control-Allow-Origin: *, Access-Control-Expose-Headers: Tus-Resumable, Upload-Offset, Location, Content-Length: 0, Content-Security-Policy: default-src 'none';, Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document, Date: Mon, 18 Dec 2023 11:39:47 GMT, Etag: \"f4c75fd8272dffa3b71be8da0ea033fd\", Last-Modified: Mon, 18 Dec 2023 11:31:52 +0000, Location: https://192.168.56.1:9200/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNzAyOTg1OTg2LCJpYXQiOjE3MDI4OTk1ODYsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1OC9kYXRhL3R1cy9iMDVmZTk3MS05MjkwLTRiMDAtYWYxYS00YzM5M2M3ODJhYjIifQ.SUjjppqVUBL9fdCuNxn5A1hE8O6djDMalB_gZjLdvds, Oc-Etag: \"f4c75fd8272dffa3b71be8da0ea033fd\", Oc-Fileid: 14e8cce0-4e36-414d-82e4-b3f0dd049863$56a9c34d-65da-4386-bb3c-89d2d441e0f5!0505da5c-5802-4b7a-93ef-c26e6407d2a9, Oc-Perm: RDNVWZP, Tus-Extension: creation,creation-with-upload,checksum,expiration, Tus-Resumable: 1.0.0, Upload-Expires: 1702985986, Upload-Offset: 13, Vary: Origin, X-Content-Type-Options: nosniff, X-Download-Options: noopen, X-Frame-Options: SAMEORIGIN, X-Permitted-Cross-Domain-Policies: none, X-Request-Id: e4969f87-648d-48a7-8476-a57b161bd067, X-Robots-Tag: none, X-Xss-Protection: 1; mode=block, } Data: []"
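
The `Upload-Metadata` header in the request above carries the filename base64-encoded (per the tus protocol); decoding the value recovers the Cyrillic path from the report:

```shell
# Decode the filename value from the Upload-Metadata header of the request above.
echo 'L9CU0LXRgdC90L7Quy/QoNCw0LfRgNCw0LHQvtGC0LrQsCDRgNC10LPQu9Cw0LzQtdC90YLQvtCyL9Cj0L/RgNCw0LLQu9C10L3QuNC1INC/0L7Qu9GM0LfQvtCy0LDRgtC10LvRj9C80Lgg0JjQoiDQuNC90YTRgNCw0YHRgtGA0YPQutGC0YPRgNGLINC60L7QvNC/0LDQvdC40LgvMjAyMy0wNS0xNSDQo9C/0YDQsNCy0LvQtdC90LjQtSDQv9C+0LvRjNC30L7QstCw0YLQtdC70Y/QvNC4INCY0KIg0LjQvdGE0YDQsNGB0YLRgNGD0LrRgtGD0YDRiyDQutC+0LzQv9Cw0L3QuNC4LmRvY3g=' | base64 -d
```

The checksum and mtime values in the same header are base64-encoded the same way.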

@Salipa-Gurung
Contributor

Desktop client can see the eTag in the empty Shares jail.

23-12-19 01:45:48:657 [ info sync.httplogger ]:	"7373858b-f79e-4fa6-8a82-662f04404214: Request: GET https://192.168.56.1:9200/graph/v1.0/me/drives Header: { Authorization: Bearer [redacted], User-Agent: Mozilla/5.0 (Windows) mirall/5.2.0.12726 (ownCloud, windows-10.0.19045 ClientArchitecture: x86_64 OsArchitecture: x86_64), Accept: */*, Accept-Language: en_US, X-Request-ID: 7373858b-f79e-4fa6-8a82-662f04404214, Original-Request-ID: 7373858b-f79e-4fa6-8a82-662f04404214, } Data: []"
23-12-19 01:45:48:672 [ info sync.httplogger ]:	"7373858b-f79e-4fa6-8a82-662f04404214: Response: GET 200 (14ms) https://192.168.56.1:9200/graph/v1.0/me/drives Header: { Content-Length: 1246, Content-Security-Policy: frame-ancestors 'none', Content-Type: application/json, Date: Tue, 19 Dec 2023 09:45:48 GMT, Vary: Origin, X-Content-Type-Options: nosniff, X-Frame-Options: DENY, X-Graph-Version: 5.0.0-beta.1, X-Request-Id: 7373858b-f79e-4fa6-8a82-662f04404214, } Data: [{\"value\":[{\"driveAlias\":\"personal/admin\",\"driveType\":\"personal\",\"id\":\"62b9dcef-7d2b-4aa8-87a3-d632c394e8c7$859d6bd8-9543-4b28-97c7-c45c3f86de3a\",\"lastModifiedDateTime\":\"2023-12-19T08:53:06.89559402Z\",\"name\":\"Admin\",\"owner\":{\"user\":{\"displayName\":\"\",\"id\":\"859d6bd8-9543-4b28-97c7-c45c3f86de3a\"}},\"quota\":{\"remaining\":54950137856,\"state\":\"normal\",\"total\":0,\"used\":11107},\"root\":{\"eTag\":\"\\\"974827309a4ac5369c0079f270b98651\\\"\",\"id\":\"62b9dcef-7d2b-4aa8-87a3-d632c394e8c7$859d6bd8-9543-4b28-97c7-c45c3f86de3a\",\"webDavUrl\":\"https://192.168.56.1:9200/dav/spaces/62b9dcef-7d2b-4aa8-87a3-d632c394e8c7$859d6bd8-9543-4b28-97c7-c45c3f86de3a\"},\"webUrl\":\"https://192.168.56.1:9200/f/62b9dcef-7d2b-4aa8-87a3-d632c394e8c7$859d6bd8-9543-4b28-97c7-c45c3f86de3a\"},{\"driveAlias\":\"virtual/shares\",\"driveType\":\"virtual\",\"id\":\"a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668\",\"name\":\"Shares\",\"root\":{\"eTag\":\"DECAFC00FEE\",\"id\":\"a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668\",\"webDavUrl\":\"https://192.168.56.1:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668\"},\"webUrl\":\"https://192.168.56.1:9200/f/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668\"}]}\n]"

@saw-jan
Member

saw-jan commented Dec 19, 2023

(desktop) GUI automated tests pass with 5.0.0-beta.1, if that can be considered confirmation.
Build: https://drone.owncloud.com/owncloud/client/17342
GUI test cases: https://cache.owncloud.com/public/owncloud/client/17342/ocis/guiReportUpload/index.html

CC @HanaGemela

@saw-jan
Member

saw-jan commented Dec 20, 2023

(desktop):

  • oc10: allows move between shares
  • ocis-5.0.0-beta.1: move is not possible (file is not synced - blacklisted)
(screenshots attached: 2023-12-20 17-20-17, 17-20-32, 17-20-00)

@michaelstingl
Contributor

oc10: allows move between shares

Compare with “read only” target share.

@ScharfViktor
Contributor Author

ScharfViktor commented Dec 20, 2023

K6 results on Intel test machine

ocis - 5.0.0-rc.1 and 5.0.0-rc.2

envs:

k6-env.txt


Overview

| value | 4.0.0 | 5.0.0-rc.1 | 5.0.0-rc.2 |
| --- | --- | --- | --- |
| **koko-platform-020-navigate-file-tree-ramping-k6.js** | | | |
| http_req_duration(95) | 117.48ms | 122.8ms | |
| http_req_failed | 0.00% | 0.00% | |
| http_req_waiting(95) | 117.36ms | 122.7ms | |
| **koko-platform-040-create-upload-rename-delete-folder-and-file-ramping-k6.js** | | | |
| http_req_duration(95) | 6.91s | 6.91s | |
| http_req_failed | 0.01% | 0.00% | |
| http_req_waiting(95) | 516.65ms | 531.58ms | |
| **koko-platform-050-download-ramping-k6.js** | | | |
| http_req_duration(95) | 15.42s | 16.47s | |
| http_req_failed | 1.20% | 0.02% | |
| http_req_waiting(95) | 177.81ms | 163.66ms | |
| **koko-platform-070-user-group-search-ramping-k6.js** | | | |
| http_req_duration(95) | 72.68ms | 85.01ms | |
| http_req_failed | 0.00% | 0.00% | |
| http_req_waiting(95) | 72.59ms | 84.93ms | |
| **koko-platform-080-create-space-ramping-k6.js** | | | |
| http_req_duration(95) | 74.11ms | 152.47ms | 88.22ms |
| http_req_failed | 0.00% | 80.44% | 0.00% |
| http_req_waiting(95) | 74ms | 75.17ms | 88.14ms |
| **koko-platform-090-create-remove-group-share-ramping-k6.js** | | | |
| http_req_duration(95) | 157.05ms | 3.9s | 125.76ms |
| http_req_failed | 0.00% | 0.00% | 0.00% |
| http_req_waiting(95) | 156.96ms | 3.9s | 125.66ms |
| **koko-platform-100-add-remove-tag-ramping-k6.js** | | | |
| http_req_duration(95) | 70ms | 97.08ms | |
| http_req_failed | 0.02% | 0.01% | |
| http_req_waiting(95) | 69.93ms | 97ms | |
| **koko-platform-110-sync-client-ramping-k6.js** | | | |
| http_req_duration(95) | 74.15ms | 79.17ms | |
| http_req_failed | 0.00% | 0.00% | |
| http_req_waiting(95) | 74.03ms | 79.06ms | |
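
The `(95)` in the rows above is the 95th-percentile value k6 reports as `p(95)`. As a quick illustration of what that means, here is a minimal nearest-rank percentile over a column of samples (k6's own estimator may interpolate slightly differently):

```shell
# Nearest-rank p95: sort the samples, take the ceil(0.95 * N)-th one.
p95() { sort -n | awk '{ a[NR] = $1 } END { r = int(NR * 0.95); if (r < NR * 0.95) r++; print a[r] }'; }
seq 1 100 | p95   # prints 95
```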


## Test run 4.0.0
release 4.0.0

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-020-navigate-file-tree-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * navigate_file_tree_020: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: navigate_file_tree_020, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status

     checks.........................: 100.00% ✓ 47248     ✗ 0
     data_received..................: 90 MB   88 kB/s
     data_sent......................: 81 MB   80 kB/s
     http_req_blocked...............: avg=19.5µs  min=1.73µs  med=5.06µs  max=29.95ms p(90)=6.02µs   p(95)=6.75µs
     http_req_connecting............: avg=1.33µs  min=0s      med=0s      max=22.95ms p(90)=0s       p(95)=0s
     http_req_duration..............: avg=81.57ms min=12.73ms med=82.13ms max=1.48s   p(90)=107.06ms p(95)=117.48ms
       { expected_response:true }...: avg=81.57ms min=12.73ms med=82.13ms max=1.48s   p(90)=107.06ms p(95)=117.48ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 47248
     http_req_receiving.............: avg=75.97µs min=34.02µs med=73.89µs max=4.03ms  p(90)=86.3µs   p(95)=93.36µs
     http_req_sending...............: avg=32.4µs  min=10.93µs med=31.84µs max=2.74ms  p(90)=36.14µs  p(95)=41.47µs
     http_req_tls_handshaking.......: avg=12.77µs min=0s      med=0s      max=14.26ms p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=81.46ms min=12.63ms med=82.03ms max=1.48s   p(90)=106.96ms p(95)=117.36ms
     http_reqs......................: 47248   46.307063/s
     iteration_duration.............: avg=2.09s   min=2.01s   med=2.08s   max=4.9s    p(90)=2.11s    p(95)=2.12s
     iterations.....................: 45948   45.032952/s
     vus............................: 3       min=0       max=100
     vus_max........................: 100     min=66      max=100


running (17m00.3s), 000/100 VUs, 45948 complete and 0 interrupted iterations
navigate_file_tree_020 ✓ [======================================] 000/100 VUs  17m0s


  execution: local
     script: packages/k6-tests/artifacts/koko-platform-040-create-upload-rename-delete-folder-and-file-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_upload_rename_delete_folder_and_file_040: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_upload_rename_delete_folder_and_file_040, gracefulStop: 30s)

WARN[0160] Request Failed                                error="Put \"https://49.12.163.166:9200/remote.php/dav/spaces/41cf942a-b77c-4a2c-bb91-1f7321395791$61f1797a-1f05-4e52-8535-59c7b947ed8b/perftestuser51-pxqtwpnytr-iteration-9/large.zip\": request timeout"
WARN[0228] Request Failed                                error="Put \"https://49.12.163.166:9200/remote.php/dav/spaces/41cf942a-b77c-4a2c-bb91-1f7321395791$a32b54c6-ef46-4134-b688-117c5174c99d/perftestuser96-kwsseiwcgp-iteration-9/large.zip\": request timeout"
WARN[0284] Request Failed                                error="Put \"https://49.12.163.166:9200/remote.php/dav/spaces/41cf942a-b77c-4a2c-bb91-1f7321395791$4235db5a-c7ad-47fe-aed0-e39a6e4f5913/perftestuser8-vavjunalgb-iteration-20/large.zip\": request timeout"
WARN[0408] The test has generated metrics with 100031 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
WARN[0533] Request Failed                                error="Put \"https://49.12.163.166:9200/remote.php/dav/spaces/41cf942a-b77c-4a2c-bb91-1f7321395791$c004490d-06fd-475c-b5e7-806121912252/perftestuser12-kourupkngg-iteration-29/large.zip\": request timeout"
WARN[0800] The test has generated metrics with 200012 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
WARN[0835] Request Failed                                error="Put \"https://49.12.163.166:9200/remote.php/dav/spaces/41cf942a-b77c-4a2c-bb91-1f7321395791$8d73a48b-32cc-4107-b83d-5af30ef5b082/perftestuser34-skslltzgng-iteration-50/large.zip\": request timeout"

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.createResource - status
     ✗ client -> resource.uploadResource - status
      ↳  99% — ✓ 6987 / ✗ 5
     ✓ client -> resource.moveResource - status
     ✓ client -> resource.deleteResource - status

     checks.........................: 99.98% ✓ 29251     ✗ 5
     data_received..................: 18 MB  17 kB/s
     data_sent......................: 116 GB 113 MB/s
     http_req_blocked...............: avg=35.93µs  min=2.04µs  med=6.05µs  max=31.66ms p(90)=8.02µs   p(95)=8.85µs
     http_req_connecting............: avg=6.17µs   min=0s      med=0s      max=23.95ms p(90)=0s       p(95)=0s
     http_req_duration..............: avg=1.35s    min=0s      med=61.06ms max=1m0s    p(90)=511.92ms p(95)=6.91s
       { expected_response:true }...: avg=1.35s    min=12.01ms med=61.07ms max=56.49s  p(90)=508.31ms p(95)=6.91s
     http_req_failed................: 0.01%  ✓ 5         ✗ 29251
     http_req_receiving.............: avg=65.25µs  min=0s      med=59.92µs max=34.97ms p(90)=79.55µs  p(95)=92.2µs
     http_req_sending...............: avg=1.21s    min=0s      med=31.08µs max=59.91s  p(90)=238.33ms p(95)=6.46s
     http_req_tls_handshaking.......: avg=23.13µs  min=0s      med=0s      max=14.86ms p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=141.55ms min=0s      med=60.55ms max=1.74s   p(90)=388.49ms p(95)=516.65ms
     http_reqs......................: 29256  28.478549/s
     iteration_duration.............: avg=13.81s   min=8.11s   med=8.65s   max=1m8s    p(90)=20.99s   p(95)=42.78s
     iterations.....................: 6992   6.806194/s
     vus............................: 2      min=0       max=100
     vus_max........................: 100    min=64      max=100


running (17m07.3s), 000/100 VUs, 6992 complete and 0 interrupted iterations
create_upload_rename_delete... ✓ [======================================] 000/100 VUs  17m0s


  execution: local
     script: packages/k6-tests/artifacts/koko-platform-050-download-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * download_050: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: download_050, gracefulStop: 30s)

WARN[0518] Request Failed                                error="request timeout"
WARN[0519] Request Failed                                error="request timeout"
WARN[0534] Request Failed                                error="request timeout"
WARN[0538] Request Failed                                error="request timeout"
WARN[0543] Request Failed                                error="request timeout"
WARN[0549] Request Failed                                error="request timeout"
WARN[0776] Request Failed                                error="request timeout"

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✗ client -> resource.downloadResource - status
      ↳  98% — ✓ 6946 / ✗ 100

     checks.........................: 98.79% ✓ 8222     ✗ 100
     data_received..................: 115 GB 112 MB/s
     data_sent......................: 12 MB  12 kB/s
     http_req_blocked...............: avg=129.52µs min=1.86µs  med=5.75µs   max=26.05ms  p(90)=7.5µs   p(95)=8.31µs
     http_req_connecting............: avg=27.11µs  min=0s      med=0s       max=12.96ms  p(90)=0s      p(95)=0s
     http_req_duration..............: avg=3.13s    min=16.13ms med=394.1ms  max=1m0s     p(90)=8.33s   p(95)=15.46s
       { expected_response:true }...: avg=3.11s    min=16.13ms med=399.24ms max=58.01s   p(90)=8.35s   p(95)=15.42s
     http_req_failed................: 1.20%  ✓ 100      ✗ 8222
     http_req_receiving.............: avg=3.06s    min=43.41µs med=334.46ms max=59.97s   p(90)=8.28s   p(95)=15.43s
     http_req_sending...............: avg=29.83µs  min=12.63µs med=28.81µs  max=155.09µs p(90)=37.39µs p(95)=41.49µs
     http_req_tls_handshaking.......: avg=95.3µs   min=0s      med=0s       max=17.69ms  p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=61.04ms  min=15.98ms med=44.11ms  max=1.33s    p(90)=89.03ms p(95)=177.81ms
     http_reqs......................: 8322   8.106245/s
     iteration_duration.............: avg=13.71s   min=10.02s  med=10.51s   max=1m10s    p(90)=19.61s  p(95)=28.58s
     iterations.....................: 7046   6.863326/s
     vus............................: 2      min=0      max=100
     vus_max........................: 100    min=99     max=100


running (17m06.6s), 000/100 VUs, 7046 complete and 0 interrupted iterations
download_050 ✓ [======================================] 000/100 VUs  17m0s


  execution: local
     script: packages/k6-tests/artifacts/koko-platform-070-user-group-search-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * user_group_search_070: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: user_group_search_070, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> search.searchForSharees - status

     checks.........................: 100.00% ✓ 20246    ✗ 0
     data_received..................: 29 MB   28 kB/s
     data_sent......................: 31 MB   30 kB/s
     http_req_blocked...............: avg=41.88µs min=2.42µs  med=5.17µs  max=29.93ms  p(90)=5.95µs  p(95)=6.58µs
     http_req_connecting............: avg=3.2µs   min=0s      med=0s      max=22.81ms  p(90)=0s      p(95)=0s
     http_req_duration..............: avg=47.93ms min=15.53ms med=43.58ms max=264.39ms p(90)=59.76ms p(95)=72.68ms
       { expected_response:true }...: avg=47.93ms min=15.53ms med=43.58ms max=264.39ms p(90)=59.76ms p(95)=72.68ms
     http_req_failed................: 0.00%   ✓ 0        ✗ 20246
     http_req_receiving.............: avg=57.73µs min=30.26µs med=55.73µs max=541.18µs p(90)=67.23µs p(95)=77.11µs
     http_req_sending...............: avg=28.65µs min=17.58µs med=27.97µs max=127.02µs p(90)=31.45µs p(95)=34.32µs
     http_req_tls_handshaking.......: avg=32.95µs min=0s      med=0s      max=8.01ms   p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=47.84ms min=15.41ms med=43.49ms max=264.29ms p(90)=59.67ms p(95)=72.59ms
     http_reqs......................: 20246   19.80125/s
     iteration_duration.............: avg=5.05s   min=5.01s   med=5.04s   max=5.51s    p(90)=5.05s   p(95)=5.06s
     iterations.....................: 19046   18.62761/s
     vus............................: 2       min=0      max=100
     vus_max........................: 100     min=64     max=100


running (17m02.5s), 000/100 VUs, 19046 complete and 0 interrupted iterations
user_group_search_070 ✓ [======================================] 000/100 VUs  17m0s


  execution: local
     script: packages/k6-tests/artifacts/koko-platform-080-create-space-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_space_080: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_space_080, gracefulStop: 30s)

WARN[0824] The test has generated metrics with 100002 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> application.createDrive - status
     ✓ client -> drive.deactivateDrive - status
     ✓ client -> drive.deleteDrive - status

     checks.........................: 100.00% ✓ 41679     ✗ 0
     data_received..................: 27 MB   27 kB/s
     data_sent......................: 62 MB   61 kB/s
     http_req_blocked...............: avg=23.13µs min=2.02µs  med=5.5µs   max=20.24ms  p(90)=6.39µs  p(95)=6.84µs
     http_req_connecting............: avg=1.28µs  min=0s      med=0s      max=13.18ms  p(90)=0s      p(95)=0s
     http_req_duration..............: avg=43.61ms min=12.47ms med=39.26ms max=279.17ms p(90)=62.97ms p(95)=74.11ms
       { expected_response:true }...: avg=43.61ms min=12.47ms med=39.26ms max=279.17ms p(90)=62.97ms p(95)=74.11ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 41679
     http_req_receiving.............: avg=52.92µs min=24.35µs med=49.9µs  max=353.41µs p(90)=65.88µs p(95)=71.74µs
     http_req_sending...............: avg=29.04µs min=16.98µs med=28.07µs max=205.18µs p(90)=32.07µs p(95)=35.24µs
     http_req_tls_handshaking.......: avg=16.08µs min=0s      med=0s      max=10.58ms  p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=43.53ms min=12.39ms med=39.18ms max=279.06ms p(90)=62.88ms p(95)=74ms
     http_reqs......................: 41679   40.700998/s
     iteration_duration.............: avg=7.14s   min=7.05s   med=7.12s   max=7.62s    p(90)=7.16s   p(95)=7.19s
     iterations.....................: 13493   13.176385/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=67      max=100

running (17m04.0s), 000/100 VUs, 13493 complete and 0 interrupted iterations
create_space_080 ✓ [======================================] 000/100 VUs  17m0s


  execution: local
     script: packages/k6-tests/artifacts/koko-platform-090-create-remove-group-share-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_remove_group_share_090: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_remove_group_share_090, gracefulStop: 30s)

WARN[0292] The test has generated metrics with 100014 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
WARN[0554] The test has generated metrics with 200067 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.createResource - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> share.createShare - status
     ✓ client -> share.deleteShare - status
     ✓ client -> resource.deleteResource - status

     checks.........................: 100.00% ✓ 52425     ✗ 0
     data_received..................: 59 MB   58 kB/s
     data_sent......................: 84 MB   82 kB/s
     http_req_blocked...............: avg=18.48µs min=2.05µs  med=5.53µs  max=29.32ms  p(90)=6.55µs   p(95)=7.02µs
     http_req_connecting............: avg=1.11µs  min=0s      med=0s      max=22.38ms  p(90)=0s       p(95)=0s
     http_req_duration..............: avg=73.75ms min=12.74ms med=63.12ms max=455.21ms p(90)=129.16ms p(95)=157.05ms
       { expected_response:true }...: avg=73.75ms min=12.74ms med=63.12ms max=455.21ms p(90)=129.16ms p(95)=157.05ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 52425
     http_req_receiving.............: avg=60.52µs min=27.19µs med=57.48µs max=3.86ms   p(90)=76.43µs  p(95)=83.84µs
     http_req_sending...............: avg=29.3µs  min=15.31µs med=28.41µs max=195.71µs p(90)=33.94µs  p(95)=36.45µs
     http_req_tls_handshaking.......: avg=11.57µs min=0s      med=0s      max=7.93ms   p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=73.66ms min=12.66ms med=63.03ms max=455.12ms p(90)=129.08ms p(95)=156.96ms
     http_reqs......................: 52425   50.967644/s
     iteration_duration.............: avg=9.43s   min=9.2s    med=9.39s   max=10.84s   p(90)=9.56s    p(95)=9.73s
     iterations.....................: 10225   9.940757/s
     vus............................: 2       min=0       max=100
     vus_max........................: 100     min=100     max=100


running (17m08.6s), 000/100 VUs, 10225 complete and 0 interrupted iterations
create_remove_group_share_090 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-100-add-remove-tag-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * add_remove_tag_100: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: add_remove_tag_100, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> tag.getTags - status -- (SKIPPED)
     ✓ client -> tag.createTag - status -- (SKIPPED)
     ✓ client -> tag.addTagToResource - status
     ✗ client -> tag.removeTagToResource - status
      ↳  99% — ✓ 11782 / ✗ 9

     checks.........................: 99.98% ✓ 60246     ✗ 9
     data_received..................: 32 MB  31 kB/s
     data_sent......................: 61 MB  59 kB/s
     http_req_blocked...............: avg=20.93µs min=2.21µs  med=5.55µs  max=22.94ms p(90)=6.67µs  p(95)=7.09µs
     http_req_connecting............: avg=1.44µs  min=0s      med=0s      max=15.93ms p(90)=0s      p(95)=0s
     http_req_duration..............: avg=45.39ms min=12.2ms  med=39.96ms max=1.63s   p(90)=59.56ms p(95)=70ms
       { expected_response:true }...: avg=45.39ms min=12.2ms  med=39.96ms max=1.63s   p(90)=59.57ms p(95)=70ms
     http_req_failed................: 0.02%  ✓ 9         ✗ 36664
     http_req_receiving.............: avg=58.64µs min=20.05µs med=52.38µs max=1.99ms  p(90)=79.69µs p(95)=85.34µs
     http_req_sending...............: avg=31.23µs min=17.57µs med=30.7µs  max=4.49ms  p(90)=34.39µs p(95)=37.22µs
     http_req_tls_handshaking.......: avg=13.6µs  min=0s      med=0s      max=10.34ms p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=45.3ms  min=12.11ms med=39.87ms max=1.63s   p(90)=59.48ms p(95)=69.93ms
     http_reqs......................: 36673  35.799724/s
     iteration_duration.............: avg=8.17s   min=8.08s   med=8.14s   max=11.1s   p(90)=8.18s   p(95)=8.2s
     iterations.....................: 11791  11.510227/s
     vus............................: 3      min=0       max=100
     vus_max........................: 100    min=100     max=100


running (17m04.4s), 000/100 VUs, 11791 complete and 0 interrupted iterations
add_remove_tag_100 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-110-sync-client-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * sync_client_110: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: sync_client_110, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status

     checks.........................: 100.00% ✓ 25959     ✗ 0
     data_received..................: 52 MB   51 kB/s
     data_sent......................: 44 MB   43 kB/s
     http_req_blocked...............: avg=28.47µs min=2.11µs  med=4.94µs  max=17.59ms p(90)=5.85µs  p(95)=6.41µs
     http_req_connecting............: avg=2.08µs  min=0s      med=0s      max=13.55ms p(90)=0s      p(95)=0s
     http_req_duration..............: avg=45.28ms min=12.15ms med=37.17ms max=1.67s   p(90)=62.59ms p(95)=74.15ms
       { expected_response:true }...: avg=45.28ms min=12.15ms med=37.17ms max=1.67s   p(90)=62.59ms p(95)=74.15ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 25959
     http_req_receiving.............: avg=75.46µs min=35.78µs med=73.11µs max=1.21ms  p(90)=85.78µs p(95)=93.31µs
     http_req_sending...............: avg=31.66µs min=15.15µs med=31.21µs max=680.5µs p(90)=35.31µs p(95)=39.76µs
     http_req_tls_handshaking.......: avg=21.23µs min=0s      med=0s      max=13.84ms p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=45.17ms min=12.04ms med=37.06ms max=1.67s   p(90)=62.48ms p(95)=74.03ms
     http_reqs......................: 25959   25.412981/s
     iteration_duration.............: avg=5.07s   min=5.01s   med=5.05s   max=8.15s   p(90)=5.12s   p(95)=5.15s
     iterations.....................: 18968   18.569029/s
     vus............................: 4       min=0       max=100
     vus_max........................: 100     min=64      max=100


running (17m01.5s), 000/100 VUs, 18968 complete and 0 interrupted iterations
sync_client_110 ✓ [======================================] 000/100 VUs  17m0s
(Screenshot 2023-12-19 at 21:12:41: results dashboard)
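For reference, the shape reported in the scenario headers above ("Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)") corresponds to a k6 `ramping-vus` executor. A minimal sketch of such a scenario object, assuming a 5m/7m/5m stage split (the actual targets and durations live in the `packages/k6-tests` sources):

```javascript
// Hypothetical sketch of the ramping-vus scenarios used by these runs.
// The 5m/7m/5m split is an assumption; only the 17m total and the 30s
// graceful settings are taken from the log output above.
const scenario = {
  executor: 'ramping-vus',
  startVUs: 0,
  stages: [
    { duration: '5m', target: 100 }, // ramp up to 100 VUs (assumed split)
    { duration: '7m', target: 100 }, // hold at 100 VUs
    { duration: '5m', target: 0 },   // ramp back down
  ],
  gracefulRampDown: '30s',
  gracefulStop: '30s',
};

// Total scripted duration across the three stages, in seconds:
const totalSeconds = scenario.stages
  .map((s) => parseInt(s.duration, 10) * 60)
  .reduce((a, b) => a + b, 0);
console.log(totalSeconds / 60 + 'm'); // 17m
```

The 17m30s "max duration" in the headers is this 17m of stages plus the 30s graceful stop.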
## Test run 5.0.0-rc.1 with Redis cache

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-020-navigate-file-tree-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * navigate_file_tree_020: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: navigate_file_tree_020, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status

     checks.........................: 100.00% ✓ 47331     ✗ 0
     data_received..................: 89 MB   87 kB/s
     data_sent......................: 82 MB   80 kB/s
     http_req_blocked...............: avg=20µs    min=2.11µs  med=4.85µs  max=27.96ms  p(90)=5.7µs    p(95)=6.46µs
     http_req_connecting............: avg=1.3µs   min=0s      med=0s      max=20.86ms  p(90)=0s       p(95)=0s
     http_req_duration..............: avg=77.8ms  min=15.13ms med=77.48ms max=776.65ms p(90)=112.19ms p(95)=122.8ms
       { expected_response:true }...: avg=77.8ms  min=15.13ms med=77.48ms max=776.65ms p(90)=112.19ms p(95)=122.8ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 47331
     http_req_receiving.............: avg=75.23µs min=36.27µs med=73.04µs max=4.35ms   p(90)=84.8µs   p(95)=91.26µs
     http_req_sending...............: avg=31.91µs min=16.16µs med=31.39µs max=2.47ms   p(90)=35.22µs  p(95)=39.91µs
     http_req_tls_handshaking.......: avg=13.53µs min=0s      med=0s      max=7.83ms   p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=77.69ms min=15.04ms med=77.38ms max=776.26ms p(90)=112.07ms p(95)=122.7ms
     http_reqs......................: 47331   46.325992/s
     iteration_duration.............: avg=2.08s   min=2.02s   med=2.08s   max=4.17s    p(90)=2.11s    p(95)=2.12s
     iterations.....................: 46031   45.053595/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=64      max=100


running (17m01.7s), 000/100 VUs, 46031 complete and 0 interrupted iterations
navigate_file_tree_020 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-040-create-upload-rename-delete-folder-and-file-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_upload_rename_delete_folder_and_file_040: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_upload_rename_delete_folder_and_file_040, gracefulStop: 30s)

WARN[0408] The test has generated metrics with 100031 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
WARN[0686] Request Failed                                error="Put \"https://49.12.163.166:9200/remote.php/dav/spaces/416545a2-1406-48a9-b8c4-3b48ce09506b$f71ae9e8-a317-4988-9be0-ff0507f752fb/perftestuser78-ovoxgyapcx-iteration-39/large.zip\": request timeout"
WARN[0800] The test has generated metrics with 200021 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.createResource - status
     ✗ client -> resource.uploadResource - status
      ↳  99% — ✓ 6989 / ✗ 1
     ✓ client -> resource.moveResource - status
     ✓ client -> resource.deleteResource - status

     checks.........................: 99.99% ✓ 29247     ✗ 1
     data_received..................: 18 MB  17 kB/s
     data_sent......................: 116 GB 113 MB/s
     http_req_blocked...............: avg=32.28µs  min=2.08µs med=6.06µs  max=15.59ms p(90)=7.97µs   p(95)=8.82µs
     http_req_connecting............: avg=5.06µs   min=0s     med=0s      max=10.55ms p(90)=0s       p(95)=0s
     http_req_duration..............: avg=1.36s    min=0s     med=53.56ms max=59.74s  p(90)=498.47ms p(95)=6.91s
       { expected_response:true }...: avg=1.36s    min=16ms   med=53.56ms max=59.74s  p(90)=498.49ms p(95)=6.91s
     http_req_failed................: 0.00%  ✓ 1         ✗ 29247
     http_req_receiving.............: avg=63.96µs  min=0s     med=60.18µs max=3.75ms  p(90)=80.5µs   p(95)=92.6µs
     http_req_sending...............: avg=1.22s    min=0s     med=31.32µs max=57.89s  p(90)=245.26ms p(95)=6.49s
     http_req_tls_handshaking.......: avg=20.64µs  min=0s     med=0s      max=10ms    p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=137.88ms min=0s     med=53.4ms  max=2.3s    p(90)=386.62ms p(95)=531.58ms
     http_reqs......................: 29248  28.482869/s
     iteration_duration.............: avg=13.81s   min=8.15s  med=8.63s   max=1m9s    p(90)=21.27s   p(95)=42.85s
     iterations.....................: 6988   6.805193/s
     vus............................: 1      min=0       max=100
     vus_max........................: 100    min=64      max=100


running (17m06.9s), 000/100 VUs, 6988 complete and 2 interrupted iterations
create_upload_rename_delete... ✓ [======================================] 000/100 VUs  17m0s
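The unique-time-series WARNs in this run appear because each upload URL embeds a per-user, per-iteration path segment, so every request produces a new `url` tag value. The remedy the message itself suggests (the `name` metric tag / URL grouping) amounts to collapsing those URLs into one label. A rough illustration of the idea, using a made-up pattern matched against the paths seen in the log:

```javascript
// Collapse per-user, per-iteration upload paths into a single metric name
// so the time-series count stays bounded. The regex is a hypothetical
// pattern derived from the "perftestuser78-ovoxgyapcx-iteration-39"
// segment visible in the WARN above, not from the actual test sources.
function groupUrl(url) {
  return url.replace(/perftestuser\d+-[a-z]+-iteration-\d+/, ':user-iteration');
}

const urls = [
  '/remote.php/dav/spaces/x/perftestuser78-ovoxgyapcx-iteration-39/large.zip',
  '/remote.php/dav/spaces/x/perftestuser12-abcdefghij-iteration-7/large.zip',
];
const names = new Set(urls.map(groupUrl));
console.log(names.size); // 1 — both requests share one grouped name
```

In a k6 script the same effect is achieved by passing `tags: { name: '...' }` in the request params, as the linked k6 docs describe.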

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-050-download-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * download_050: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: download_050, gracefulStop: 30s)

WARN[0419] Request Failed                                error="request timeout"
WARN[0425] Request Failed                                error="request timeout"

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✗ client -> resource.downloadResource - status
      ↳  99% — ✓ 6949 / ✗ 2

     checks.........................: 99.97% ✓ 8207     ✗ 2
     data_received..................: 116 GB 112 MB/s
     data_sent......................: 12 MB  12 kB/s
     http_req_blocked...............: avg=135.31µs min=2.1µs   med=5.78µs   max=26.84ms  p(90)=7.49µs   p(95)=8.19µs
     http_req_connecting............: avg=26.56µs  min=0s      med=0s       max=22.36ms  p(90)=0s       p(95)=0s
     http_req_duration..............: avg=3.28s    min=16.27ms med=399.91ms max=1m0s     p(90)=8.56s    p(95)=16.58s
       { expected_response:true }...: avg=3.26s    min=16.27ms med=399.81ms max=58.08s   p(90)=8.56s    p(95)=16.47s
     http_req_failed................: 0.02%  ✓ 2        ✗ 8207
     http_req_receiving.............: avg=3.22s    min=56.21µs med=348.86ms max=59.97s   p(90)=8.52s    p(95)=16.52s
     http_req_sending...............: avg=30.19µs  min=14.24µs med=28.97µs  max=217.52µs p(90)=38.19µs  p(95)=43.19µs
     http_req_tls_handshaking.......: avg=101.64µs min=0s      med=0s       max=18.82ms  p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=59.51ms  min=16.01ms med=46.84ms  max=703.55ms p(90)=106.24ms p(95)=163.66ms
     http_reqs......................: 8209   7.978412/s
     iteration_duration.............: avg=13.89s   min=10.03s  med=10.52s   max=1m10s    p(90)=19.5s    p(95)=30.93s
     iterations.....................: 6951   6.755748/s
     vus............................: 1      min=0      max=100
     vus_max........................: 100    min=77     max=100


running (17m08.9s), 000/100 VUs, 6951 complete and 0 interrupted iterations
download_050 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-070-user-group-search-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * user_group_search_070: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: user_group_search_070, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> search.searchForSharees - status

     checks.........................: 100.00% ✓ 20245     ✗ 0
     data_received..................: 29 MB   28 kB/s
     data_sent......................: 31 MB   30 kB/s
     http_req_blocked...............: avg=43.75µs min=2.39µs  med=5.35µs  max=31.58ms  p(90)=6.19µs  p(95)=6.73µs
     http_req_connecting............: avg=3.18µs  min=0s      med=0s      max=24.44ms  p(90)=0s      p(95)=0s
     http_req_duration..............: avg=48.5ms  min=15.73ms med=42.85ms max=279.13ms p(90)=61.69ms p(95)=85.01ms
       { expected_response:true }...: avg=48.5ms  min=15.73ms med=42.85ms max=279.13ms p(90)=61.69ms p(95)=85.01ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 20245
     http_req_receiving.............: avg=56.98µs min=30.23µs med=54.53µs max=451.63µs p(90)=67.15µs p(95)=76.43µs
     http_req_sending...............: avg=29.15µs min=15.3µs  med=28.34µs max=111.35µs p(90)=32.82µs p(95)=34.99µs
     http_req_tls_handshaking.......: avg=34.78µs min=0s      med=0s      max=16.05ms  p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=48.41ms min=15.6ms  med=42.76ms max=279.03ms p(90)=61.61ms p(95)=84.93ms
     http_reqs......................: 20245   19.785414/s
     iteration_duration.............: avg=5.05s   min=5.01s   med=5.04s   max=5.58s    p(90)=5.06s   p(95)=5.07s
     iterations.....................: 19045   18.612656/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=85      max=100


running (17m03.2s), 000/100 VUs, 19045 complete and 0 interrupted iterations
user_group_search_070 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-080-create-space-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_space_080: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_space_080, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✗ client -> application.createDrive - status
      ↳  17% — ✓ 2317 / ✗ 11180
     ✗ client -> drive.deactivateDrive - status
      ↳  17% — ✓ 2317 / ✗ 11180
     ✗ client -> drive.deleteDrive - status
      ↳  17% — ✓ 2317 / ✗ 11180

     checks.........................: 19.55% ✓ 8151      ✗ 33540
     data_received..................: 25 MB  24 kB/s
     data_sent......................: 61 MB  59 kB/s
     http_req_blocked...............: avg=23.46µs min=2.15µs  med=5.48µs  max=29.69ms  p(90)=6.44µs  p(95)=6.91µs
     http_req_connecting............: avg=1.51µs  min=0s      med=0s      max=22.68ms  p(90)=0s      p(95)=0s
     http_req_duration..............: avg=42.85ms min=10.9ms  med=38.14ms max=312.48ms p(90)=62.16ms p(95)=75.27ms
       { expected_response:true }...: avg=53.97ms min=15.92ms med=43.95ms max=297.33ms p(90)=75.54ms p(95)=152.47ms
     http_req_failed................: 80.44% ✓ 33540     ✗ 8151
     http_req_receiving.............: avg=60.22µs min=26.74µs med=59.58µs max=392.91µs p(90)=69.49µs p(95)=75.94µs
     http_req_sending...............: avg=28.71µs min=14.06µs med=27.52µs max=166.8µs  p(90)=31.99µs p(95)=36.1µs
     http_req_tls_handshaking.......: avg=16.21µs min=0s      med=0s      max=7.21ms   p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=42.76ms min=10.81ms med=38.05ms max=312.39ms p(90)=62.08ms p(95)=75.17ms
     http_reqs......................: 41691  40.665674/s
     iteration_duration.............: avg=7.13s   min=7.05s   med=7.12s   max=7.64s    p(90)=7.18s   p(95)=7.21s
     iterations.....................: 13497  13.165062/s
     vus............................: 1      min=0       max=100
     vus_max........................: 100    min=64      max=100


running (17m05.2s), 000/100 VUs, 13497 complete and 0 interrupted iterations
create_space_080 ✓ [======================================] 000/100 VUs  17m0s


          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-090-create-remove-group-share-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_remove_group_share_090: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_remove_group_share_090, gracefulStop: 30s)

WARN[0294] The test has generated metrics with 100005 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
WARN[0590] The test has generated metrics with 200004 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.createResource - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> share.createShare - status
     ✓ client -> share.deleteShare - status
     ✓ client -> resource.deleteResource - status

     checks.........................: 100.00% ✓ 40565     ✗ 0
     data_received..................: 46 MB   45 kB/s
     data_sent......................: 65 MB   63 kB/s
     http_req_blocked...............: avg=22.36µs  min=2.18µs  med=5.6µs    max=20.61ms  p(90)=6.62µs  p(95)=7.05µs
     http_req_connecting............: avg=1.25µs   min=0s      med=0s       max=13.6ms   p(90)=0s      p(95)=0s
     http_req_duration..............: avg=629.4ms  min=13.58ms med=111.11ms max=7.37s    p(90)=1.88s   p(95)=3.9s
       { expected_response:true }...: avg=629.4ms  min=13.58ms med=111.11ms max=7.37s    p(90)=1.88s   p(95)=3.9s
     http_req_failed................: 0.00%   ✓ 0         ✗ 40565
     http_req_receiving.............: avg=60.37µs  min=24.4µs  med=57.84µs  max=590.69µs p(90)=76.06µs p(95)=82.83µs
     http_req_sending...............: avg=29.44µs  min=15.95µs med=28.57µs  max=196.84µs p(90)=34.03µs p(95)=36.15µs
     http_req_tls_handshaking.......: avg=15.22µs  min=0s      med=0s       max=7.37ms   p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=629.31ms min=13.48ms med=111.03ms max=7.37s    p(90)=1.88s   p(95)=3.9s
     http_reqs......................: 40565   39.567574/s
     iteration_duration.............: avg=12.31s   min=9.24s   med=9.83s    max=22.24s   p(90)=18.52s  p(95)=19.38s
     iterations.....................: 7853    7.659908/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=85      max=100


running (17m05.2s), 000/100 VUs, 7853 complete and 0 interrupted iterations
create_remove_group_share_090 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-100-add-remove-tag-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * add_remove_tag_100: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: add_remove_tag_100, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> tag.getTags - status -- (SKIPPED)
     ✓ client -> tag.createTag - status -- (SKIPPED)
     ✓ client -> tag.addTagToResource - status
     ✗ client -> tag.removeTagToResource - status
      ↳  99% — ✓ 11736 / ✗ 7

     checks.........................: 99.98% ✓ 60008     ✗ 7
     data_received..................: 30 MB  30 kB/s
     data_sent......................: 61 MB  59 kB/s
     http_req_blocked...............: avg=24.72µs min=2.11µs  med=5.58µs  max=33.72ms  p(90)=6.69µs  p(95)=7.12µs
     http_req_connecting............: avg=1.73µs  min=0s      med=0s      max=26.77ms  p(90)=0s      p(95)=0s
     http_req_duration..............: avg=56.25ms min=15.06ms med=51.58ms max=819.67ms p(90)=81.05ms p(95)=97.08ms
       { expected_response:true }...: avg=56.25ms min=15.06ms med=51.59ms max=819.67ms p(90)=81.05ms p(95)=97.08ms
     http_req_failed................: 0.01%  ✓ 7         ✗ 36522
     http_req_receiving.............: avg=58.7µs  min=20.95µs med=52.26µs max=1.71ms   p(90)=80.4µs  p(95)=86.56µs
     http_req_sending...............: avg=31.37µs min=15.26µs med=30.9µs  max=282.48µs p(90)=34.38µs p(95)=37.07µs
     http_req_tls_handshaking.......: avg=17.08µs min=0s      med=0s      max=13.39ms  p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=56.16ms min=14.95ms med=51.5ms  max=819.39ms p(90)=80.96ms p(95)=97ms
     http_reqs......................: 36529  35.555589/s
     iteration_duration.............: avg=8.2s    min=8.11s   med=8.18s   max=10.38s   p(90)=8.24s   p(95)=8.27s
     iterations.....................: 11743  11.430077/s
     vus............................: 1      min=0       max=100
     vus_max........................: 100    min=64      max=100


running (17m07.4s), 000/100 VUs, 11743 complete and 0 interrupted iterations
add_remove_tag_100 ✓ [======================================] 000/100 VUs  17m0s

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-110-sync-client-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * sync_client_110: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: sync_client_110, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status

     checks.........................: 100.00% ✓ 25934     ✗ 0
     data_received..................: 50 MB   49 kB/s
     data_sent......................: 44 MB   43 kB/s
     http_req_blocked...............: avg=33.19µs min=2.18µs  med=4.87µs  max=20.66ms  p(90)=5.72µs  p(95)=6.26µs
     http_req_connecting............: avg=2.17µs  min=0s      med=0s      max=13.4ms   p(90)=0s      p(95)=0s
     http_req_duration..............: avg=48.71ms min=14.33ms med=43.14ms max=739.93ms p(90)=68.91ms p(95)=79.17ms
       { expected_response:true }...: avg=48.71ms min=14.33ms med=43.14ms max=739.93ms p(90)=68.91ms p(95)=79.17ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 25934
     http_req_receiving.............: avg=76.42µs min=38.79µs med=74.15µs max=1.97ms   p(90)=86.55µs p(95)=94.04µs
     http_req_sending...............: avg=31.68µs min=17.21µs med=31.37µs max=575.5µs  p(90)=34.75µs p(95)=37.83µs
     http_req_tls_handshaking.......: avg=25.92µs min=0s      med=0s      max=7.21ms   p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=48.6ms  min=14.23ms med=43.04ms max=739.64ms p(90)=68.81ms p(95)=79.06ms
     http_reqs......................: 25934   25.349739/s
     iteration_duration.............: avg=5.07s   min=5.02s   med=5.05s   max=7.27s    p(90)=5.15s   p(95)=5.17s
     iterations.....................: 18949   18.522102/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=64      max=100


running (17m03.0s), 000/100 VUs, 18949 complete and 0 interrupted iterations
sync_client_110 ✓ [======================================] 000/100 VUs  17m0s

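The p(90)/p(95) columns in the summaries above are percentiles over all recorded samples of a metric (e.g. `http_req_duration`). A minimal sketch of one common definition, linear interpolation between closest ranks; this is not necessarily k6's exact algorithm, and the sample values below are invented for illustration:

```javascript
// Percentile by linear interpolation between the two closest ranks of
// the sorted samples. The durations array is made up, not from the runs.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = (p / 100) * (sorted.length - 1);
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
}

const durationsMs = [12, 18, 25, 40, 55, 63, 80, 120, 150, 455];
console.log(percentile(durationsMs, 50)); // median of the samples
console.log(percentile(durationsMs, 95)); // tail latency estimate
```

Note how a single slow outlier (455 ms here) pulls p(95) far above the median, the same effect visible in the upload/download runs above, where max and p(95) dwarf the med column.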
## Test run 5.0.0-rc.2 with Redis cache (re-test of 080 and 090 only)

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-080-create-space-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_space_080: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_space_080, gracefulStop: 30s)

WARN[0829] The test has generated metrics with 100002 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> application.createDrive - status
     ✓ client -> drive.deactivateDrive - status
     ✓ client -> drive.deleteDrive - status

     checks.........................: 100.00% ✓ 41502     ✗ 0    
     data_received..................: 27 MB   27 kB/s
     data_sent......................: 62 MB   60 kB/s
     http_req_blocked...............: avg=22.48µs min=2.18µs  med=5.62µs  max=20.5ms   p(90)=6.55µs  p(95)=7.04µs 
     http_req_connecting............: avg=1.28µs  min=0s      med=0s      max=13.64ms  p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=54.16ms min=15.83ms med=50.91ms max=207.22ms p(90)=77.16ms p(95)=88.22ms
       { expected_response:true }...: avg=54.16ms min=15.83ms med=50.91ms max=207.22ms p(90)=77.16ms p(95)=88.22ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 41502
     http_req_receiving.............: avg=53.01µs min=22.48µs med=50.03µs max=1.39ms   p(90)=66.11µs p(95)=72.02µs
     http_req_sending...............: avg=28.98µs min=15.43µs med=28.08µs max=318.63µs p(90)=32.13µs p(95)=35.44µs
     http_req_tls_handshaking.......: avg=15.33µs min=0s      med=0s      max=10.05ms  p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=54.08ms min=15.73ms med=50.83ms max=207.13ms p(90)=77.08ms p(95)=88.14ms
     http_reqs......................: 41502   40.428895/s
     iteration_duration.............: avg=7.17s   min=7.09s   med=7.16s   max=7.55s    p(90)=7.21s   p(95)=7.23s  
     iterations.....................: 13434   13.086641/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=64      max=100


running (17m06.5s), 000/100 VUs, 13434 complete and 0 interrupted iterations
create_space_080 ✓ [======================================] 000/100 VUs  17m0s


          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-090-create-remove-group-share-ramping-k6.js
     output: InfluxDBv2 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 100 max VUs, 17m30s max duration (incl. graceful stop):
           * create_remove_group_share_090: Up to 100 looping VUs for 17m0s over 3 stages (gracefulRampDown: 30s, exec: create_remove_group_share_090, gracefulStop: 30s)

WARN[0292] The test has generated metrics with 100014 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
WARN[0552] The test has generated metrics with 200022 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester
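
The warnings above come from tagging metrics with unique resource URLs. As the linked k6 docs suggest, one fix is to collapse unique IDs into a shared `name` tag before tagging the request. A minimal sketch of that idea (the `groupedNameTag` helper, the ID pattern, and the example URL are illustrative, not taken from the actual test scripts):

```javascript
// Sketch: collapse high-cardinality URLs into one metric name, as the k6
// warning suggests. In a real k6 script this value would be passed as
// http.get(url, { tags: { name: groupedNameTag(url) } }) so that all
// requests to /drives/<id> share a single time series.
function groupedNameTag(url) {
  // Replace unique drive IDs with a placeholder (the ID pattern is an
  // assumption for illustration).
  return url.replace(/\/drives\/[0-9a-f-]+/g, '/drives/{id}');
}

console.log(groupedNameTag('https://ocis.test/graph/v1.0/drives/3f2a-77b1'));
// -> https://ocis.test/graph/v1.0/drives/{id}
```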

     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.createResource - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> share.createShare - status
     ✓ client -> share.deleteShare - status
     ✓ client -> resource.deleteResource - status

     checks.........................: 100.00% ✓ 52730     ✗ 0    
     data_received..................: 60 MB   59 kB/s
     data_sent......................: 85 MB   82 kB/s
     http_req_blocked...............: avg=17.63µs min=2.12µs  med=5.61µs  max=22.86ms  p(90)=6.59µs   p(95)=7.01µs  
     http_req_connecting............: avg=995ns   min=0s      med=0s      max=15.83ms  p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=62.36ms min=13.7ms  med=42ms    max=249.25ms p(90)=115.14ms p(95)=125.76ms
       { expected_response:true }...: avg=62.36ms min=13.7ms  med=42ms    max=249.25ms p(90)=115.14ms p(95)=125.76ms
     http_req_failed................: 0.00%   ✓ 0         ✗ 52730
     http_req_receiving.............: avg=60.23µs min=24.36µs med=57.17µs max=12.19ms  p(90)=75.95µs  p(95)=82.97µs 
     http_req_sending...............: avg=29.23µs min=14.62µs med=28.35µs max=249.42µs p(90)=33.77µs  p(95)=35.82µs 
     http_req_tls_handshaking.......: avg=10.79µs min=0s      med=0s      max=8.86ms   p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=62.27ms min=13.61ms med=41.91ms max=249.17ms p(90)=115.05ms p(95)=125.66ms
     http_reqs......................: 52730   51.440443/s
     iteration_duration.............: avg=9.37s   min=9.26s   med=9.35s   max=10.77s   p(90)=9.41s    p(95)=9.45s   
     iterations.....................: 10286   10.034447/s
     vus............................: 1       min=0       max=100
     vus_max........................: 100     min=98      max=100


running (17m05.1s), 000/100 VUs, 10286 complete and 0 interrupted iterations
create_remove_group_share_090 ✓ [======================================] 000/100 VUs  17m0s
## History

How to run ocis 5.0.0: `OCIS_CACHE_STORE=redis OCIS_CACHE_STORE_NODES=localhost:6379 OCIS_INSECURE=true PROXY_ENABLE_BASIC_AUTH=true ocis/bin/ocis server`

Result:

  • ocis 5.0.0 using the redis cache is slower than the same tests with ocis 4.0.0
  • using the redis cache leads to a large number of errors in the create-space test
     ✗ client -> application.createDrive - status
      ↳  17% — ✓ 2317 / ✗ 11180
     ✗ client -> drive.deactivateDrive - status
      ↳  17% — ✓ 2317 / ✗ 11180
     ✗ client -> drive.deleteDrive - status
      ↳  17% — ✓ 2317 / ✗ 11180
  • req_duration for koko-platform-070-user-group-search-ramping-k6.js increased from 157.05ms to 3.9s
  • re-ran the 080 and 090 tests against 5.0.0-rc.2; no regression. See results in the table

@micbar
Copy link
Contributor

micbar commented Dec 20, 2023

5.0.0-beta.2

Changes since 5.0.0-beta.1

oCIS

  • Bugfix - Fix natsjs cache: #7793
  • Bugfix - Do not purge expired upload sessions that are still postprocessing: #7859
  • Bugfix - Password policy return code was wrong: #7952
  • Bugfix - Update permission validation: #7963
  • Bugfix - Renaming a user to a string with capital letters: #7964
  • Bugfix - Improve OCM support: #7973
  • Bugfix - Permissions of a role with duplicate ID: #7976
  • Bugfix - Non durable streams for sse service: #7986
  • Bugfix - Fix empty trace ids: #8023
  • Change - Remove accessDeniedHelpUrl from the config: #7970
  • Enhancement - Bump reva: #7793
  • Enhancement - Add cli commands for trash-bin: #7917
  • Enhancement - Update web to v8.0.0-beta.2: #7952
  • Enhancement - Add validation update public share: #7978
  • Enhancement - Allow inmemory nats-js-kv stores: #7979
  • Enhancement - Use kv store in natsjs registry: #7987
  • Enhancement - Allow authentication nats connections: #7989
  • Enhancement - Add ocm and sciencemesh services: #7998
  • Enhancement - Make nats-js-kv the default registry: #8011

Web

Reva

@jesmrec
Copy link

jesmrec commented Dec 21, 2023

Mobile

1. Changelog

Depth infinity

iOS: iOS does the depth infinity check correctly, discovering the whole structure (checked over a deep structure) ✅
Android: no depth infinity support yet ➖

Do not allow moves between shares

iOS: iOS app does not allow it, showing the following error: ✅

iOS - move prevented in shares

Room for improvement: the operation could also be blocked for items in the share list (create issue)

Android: the move operation is not supported for oCIS shares on Android yet ➖

Fix spaceID in meta endpoint response

iOS: Fixed, available 12.1+ ✅
Android: Fixed, available 4.2+ ✅

Add url extension to mime type list

iOS: .url files not detected as straight-openable. Needs Open In
Android: .url files not detected as straight-openable. Needs Open In

Service Accounts

Affects the automatic acceptance of shares. This feature was tested in both mobile clients:

iOS: ✅
Android: ✅ (only list)

2. Regression tests

2.1 Android

Android version: current stable 4.1.1

Regression test: https://github.com/owncloud/QA/blob/master/Mobile/Backend/oCIS/Android%20vs%205.0.0.md
Auto tests: https://reports.cucumber.io/reports/3ea6e8b5-b71b-4586-80b5-f04f7afadc58

NOTE: only test cases involving the backend have been executed. Test cases that only involve the mobile client have been skipped for this execution

2.2 iOS

iOS version: upcoming 12.1

Regression test: https://github.com/owncloud/QA/blob/master/Mobile/Backend/oCIS/iOS%20vs%205.0.0.md
Auto tests: https://reports.cucumber.io/reports/e14bef9f-bca4-4e85-b561-7e4b3e5fb6ff

NOTE: As the iOS app will be released very soon, the upcoming 12.1 was used to test the new server version. For that reason, not all test cases or reports are relevant to the oCIS release; some are relevant only to the iOS release.

3. Issues

/meta endpoint is returning a 500 for shares: that breaks deeplinks (Android) and magiclinks (iOS) on shared items. Could be blocking for mobile clients

POST request to create a link with an incorrect password returns 400 instead of 403: neither 400 nor 403 is the correct response to an "incorrect password" error.

(Improvement) Support in mobile apps to .docxf files (only in client side)

owncloud/android#4267
owncloud/ios-app#1310

4. Conclusions

  • Changelog items in oCIS 5.0.0 break nothing in mobile clients
  • Regression tests are OK in terms of integration with backend (minor issues in client side)
  • Issues reported above affect some scenarios in clients.
    • The issue about the response code when the password does not match the policy is not a blocker, and may be fixed within sharing NG. The Android client will work around the problem by honoring the password-policy capabilities
    • The issue with the /meta endpoint makes it impossible to open links that point to shared files/folders. I don't know how relevant this scenario is for users, but 500 errors are always bad to receive. Should be fixed.

@saw-jan
Copy link
Member

saw-jan commented Dec 21, 2023

Compare with “read only” target share.

(read-only share) Same behavior in both oc10 and ocis.

  • oc10: file cannot be moved. Moved file is reverted to the original location in the explorer
  • ocis-5.0.0-beta.2: file cannot be moved. Moved file is reverted to the original location in the explorer

Since ocis doesn't allow moves between shares, I wonder if the following behavior is correct (I expect it to behave the same as with a read-only share: the file is reverted to its original location):

ocis-5.0.0-beta.1: move is not possible (file is not synced - blacklisted) (#7940 (comment))

CC @michaelstingl

@ScharfViktor
Copy link
Contributor Author

e2e tests 5.0.0 beta-2 against wopi:

Screenshot 2023-12-21 at 22 36 02

note: failed test after re-run is green

@ScharfViktor
Copy link
Contributor Author

e2e tests 5.0.0 beta-2 against traefik:

note: failed test after re-run is green

@micbar
Copy link
Contributor

micbar commented Feb 7, 2024

Ocis 5.0.0-rc.4

Incremental changes since 5.0.0-rc.3

Web

  • No changes

Ocis

  • Bugfix - Remove invalid environment variables: #8303
  • Bugfix - Fix concurrent shares config: #8317
  • Bugfix - Signed url verification: #8385
  • Enhancement - Support login page background configuration: #7674
  • Enhancement - Modify the concurrency default: #8309

Reva

  • No changes

@micbar
Copy link
Contributor

micbar commented Feb 7, 2024

@ScharfViktor @TheOneRing @individual-it RC 4 is not the last RC.

@prohtex
Copy link

prohtex commented Feb 11, 2024

e2e tests 5.0.0 beta-2 against s3:


Perhaps a test should be added for large file upload.

@ScharfViktor
Copy link
Contributor Author

Perhaps a test should be added for large file upload.

There is such a test, and it passed: https://github.com/owncloud/web/blob/master/tests/e2e/cucumber/features/smoke/uploadResumable.feature - it uploads a 1 GB file

Screenshot 2024-02-11 at 22 07 34

@prohtex
Copy link

prohtex commented Feb 11, 2024

Perhaps a test should be added for large file upload.

There is such a test, and it passed: https://github.com/owncloud/web/blob/master/tests/e2e/cucumber/features/smoke/uploadResumable.feature - it uploads a 1 GB file

Screenshot 2024-02-11 at 22 07 34

I see. I was thinking a little larger. The issue I'm encountering is that file uploads that take more than 30 minutes fail due to token expiry. I can reproduce this on the oCIS continuous deployment test server using a 17 GB upload. Documented here: owncloud/web#10474

@ScharfViktor
Copy link
Contributor Author

I see. I was thinking a little larger. The issue I'm encountering is that file uploads that take more than 30 minutes fail due to token expiry. I can reproduce this on the oCIS continuous deployment test server using a 17 GB upload. Documented here: owncloud/web#10474

It seems that instance is broken. I tried it locally (works fine).

We don't have to upload a large file in an e2e test and wait 30 minutes or more -> decreasing IDP_ID_TOKEN_EXPIRATION helps in this case.
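
A minimal sketch of that approach (assumption: IDP_ID_TOKEN_EXPIRATION takes a lifetime in seconds; the value and binary path are illustrative):

```shell
# Shrink the ID token lifetime so an e2e test reaches token expiry in
# minutes instead of uploading a 17 GB file for half an hour.
# Assumption: IDP_ID_TOKEN_EXPIRATION is interpreted as seconds.
export IDP_ID_TOKEN_EXPIRATION=60   # expire after one minute
ocis/bin/ocis server
```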

@micbar
Copy link
Contributor

micbar commented Feb 26, 2024

oCIS 5.0.0-rc.5

Incremental changes since 5.0.0-rc.4

ocis

  • Bugfix - Fix search response: #7815
  • Bugfix - Fix Content-Disposition header for downloads: #8381
  • Bugfix - Fix an error when move: #8396
  • Bugfix - Fix extended env parser: #8409
  • Bugfix - Graph/drives/permission Expiration date update: #8413
  • Bugfix - Fix search error message: #8444
  • Bugfix - Graph/sharedWithMe align IDs with webdav response: #8467
  • Bugfix - Bump reva to pull in changes to fix recursive trashcan purge: #8505
  • Change - Deprecate sharing cs3 backends: #8478
  • Enhancement - Improve ocis single binary start: #8320
  • Enhancement - Use environment variables in yaml config files: #8339
  • Enhancement - Bump reva: #8340
  • Enhancement - Allow sending multiple user ids in one sse event: #8379
  • Enhancement - Allow to skip service listing: #8408
  • Enhancement - Add a make step to validate the env var annotations: #8436

Web

Reva

@2403905 2403905 unpinned this issue Mar 4, 2024
@ScharfViktor ScharfViktor pinned this issue Mar 4, 2024
@micbar
Copy link
Contributor

micbar commented Mar 13, 2024

🎉 Final Release Candidate 5.0.0-rc.6

Incremental changes since 5.0.0-rc.5

oCIS

  • Bugfix - Fix an error when lock/unlock a public shared file: #8472
  • Bugfix - Fix remove/update share permissions: #8529
  • Bugfix - Fix graph drive invite: #8538
  • Bugfix - We now always select the next clients when autoaccepting shares: #8570
  • Bugfix - Correct the default mapping of roles: #8639
  • Change - Change the default store for presigned keys to nats-js-kv: #8419
  • Enhancement - Graphs endpoint for mounting and unmounting shares: #7885
  • Enhancement - Update to go 1.22: #8586

Reva

Web

Artifacts

GitHub

https://github.com/owncloud/ocis/releases/tag/v5.0.0-rc.6

Docker

docker pull owncloud/ocis:5.0.0-rc.6

@dragotin
Copy link
Contributor

smoke test with arm64 single binary on odroid successful ✔️

@Salipa-Gurung
Copy link
Contributor

Automated smoke test passed in desktop-client with ocis version 5.0.0-rc.6 ✔️
Build: https://drone.owncloud.com/owncloud/client/17866
GUI Report: https://cache.owncloud.com/public/owncloud/client/17866/ocis/guiReportUpload/index.html

@jesmrec
Copy link

jesmrec commented Mar 14, 2024

Mobile:

Some manual testing was also done, mainly regarding transfers. The smoke test passed ✅

It's OK from my side

@ScharfViktor
Copy link
Contributor Author

Compatibility test between 4.0.5 and 5.0.0-rc.1
with Service accounts enabled - looks good ✅

@ScharfViktor
Copy link
Contributor Author

@micbar question about resharing: https://github.com/owncloud/docs-ocis/pull/739/files#diff-04cd79e526c8a7f0162d8fc5bc5fa8c501fe9e5899024268837c05a74349b26cR148-R149

It is enabled in rc.6. I was sure that resharing was disabled (I already checked it), so it looks like something is wrong.
I don't see OCIS_ENABLE_RESHARING=true, yet resharing is enabled.
Screenshot 2024-03-14 at 14 48 06

@micbar
Copy link
Contributor

micbar commented Mar 14, 2024

Right! Big OMG moment, we just missed changing the default value. This needs to be done now.

@micbar micbar mentioned this issue Mar 14, 2024
9 tasks
@micbar
Copy link
Contributor

micbar commented Mar 18, 2024

All blockers have been resolved.

#8673: we load tested it with the new config.

From an engineering POV this:

  • Doesn't invalidate the smoke testing
  • Doesn't need a new Release candidate

@kulmann @tbsbdr @dragotin this is a GO from the backend and load testing teams.

@dragotin
Copy link
Contributor

Yes, GO! Great! Thanks to the whole team for this big effort 👍

@tbsbdr
Copy link

tbsbdr commented Mar 18, 2024

GO! Good things take time; really looking forward to dogfooding this release ❤️

@kulmann
Copy link
Member

kulmann commented Mar 18, 2024

GO! from web as well!

@pascalwengerter
Copy link
Contributor

Can this issue be closed since v5.0.0 is listed on https://github.com/owncloud/ocis/releases ?

@micbar
Copy link
Contributor

micbar commented Apr 29, 2024

@wkloucek @d7oc Is there going to be some Release 5.0.0 changes on the ocis-charts?

These are the only open items.

@wkloucek
Copy link
Contributor

wkloucek commented Apr 30, 2024

@wkloucek @d7oc Is there going to be some Release 5.0.0 changes on the ocis-charts?

These are the only open items.

You can remove the oCIS chart from this checklist. To my knowledge it's not a product; it's just there. It's supposed to have NO official documentation, so no documentation tasks are needed. As long as it's not a product, there will probably also be no release of the oCIS Helm Chart.

@ScharfViktor
Copy link
Contributor Author

After agreement with @micbar, we excluded ocis-helm related items from the release template
closing as completed

@ScharfViktor ScharfViktor unpinned this issue May 3, 2024