How do you use Harbor? Let us know! #115

Open
michmike opened this issue Oct 24, 2019 · 16 comments
Labels
kind/question Further information is requested

Comments

@michmike
Contributor

This really isn't an issue, but we'd love to hear about your use of Harbor so we thought we'd post this to find out more from you.

Some key things of interest may be:

  • Number of container images and Helm Charts under management
  • Storage size consumed by Harbor
  • Number of end users (i.e. developers)
  • If you have multiple Harbor environments set up with replication
  • Are you using Harbor today in production or pre-production/sandbox/dev-test (or all of the above)

Please feel free to add a comment below and let us know. We will keep this issue open, making it a posting board for Harbor usage.

Thank you in advance,

@michmike
Harbor Core Maintainer

@michmike michmike pinned this issue Oct 24, 2019
@michmike michmike added the kind/question Further information is requested label Oct 24, 2019
@michmike michmike added this to Backlog in Harbor Project Board via automation Oct 24, 2019
@Vivian7755

Vivian7755 commented Oct 25, 2019

We have used Harbor in production

@michmike
Contributor Author

> We have used Harbor in production

@Vivian7755 can you elaborate, if you can, on how many containers are under management, and can you disclose your company? Thank you for using Harbor!

@bjethwan

Hi @michmike

I am evaluating Harbor to speed up disaster recovery of my k8s clusters.
At present, pulling container images from my internal Docker registry takes a lot of time.
Do you have any comments or suggestions?

-Thanks

@nlowe
Contributor

nlowe commented Oct 29, 2019

Updating this with some stats on how Hyland Software uses Harbor:

  • Number of container images and Helm Charts under management
    • In R&D, Hyland has approximately 2,400 tags spread across 670 images in 175 different projects, approximately 10% of which store Helm charts in Harbor
  • Storage size consumed by Harbor
    • In R&D, Hyland's Harbor instance currently stores about 2.5 Terabytes of container and chart data
  • Number of end users (i.e. developers)
    • Approximately 1000 Developers at Hyland interact with Harbor on a day-to-day basis
  • If you have multiple Harbor environments set up with replication
    • There are no plans to set up replication for Harbor in R&D, but we are evaluating it for use elsewhere in the company.
  • Are you using Harbor today in production or pre-production/sandbox/dev-test (or all of the above)
    • All of the above

@michmike hope that helps!

@michmike
Contributor Author

> Hi @michmike
>
> I am evaluating Harbor to speed up disaster recovery of my k8s clusters.
> At present, pulling container images from my internal Docker registry takes a lot of time.
> Do you have any comments or suggestions?
>
> -Thanks

Hi @bjethwan, how are you? Can you open an issue (bug) with the delays you are seeing and add lots of details and some statistics? For example:

  • Is your Harbor registry colocated with your compute cluster?
  • Size of images
  • Latency times observed
  • Network connectivity. For example, what happens if you try to pull a regular file from the same server, and how fast is that? (A rough sketch follows this list.)
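For the last bullet, a rough way to separate raw network/storage throughput from registry behaviour (the hostname and file path below are hypothetical placeholders) might be something like:

```shell
# Time a plain HTTPS download from the same host that serves the registry.
# If this is also slow, the bottleneck is likely network or storage, not Harbor.
# harbor.example.com and the file path are hypothetical placeholders.
curl -o /dev/null -sS -w 'total: %{time_total}s  speed: %{speed_download} bytes/s\n' \
  https://harbor.example.com/files/testfile.bin
```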

thank you!

@michmike
Contributor Author

michmike commented Nov 3, 2019

> Hi @michmike
>
> I am evaluating Harbor to speed up disaster recovery of my k8s clusters.
> At present, pulling container images from my internal Docker registry takes a lot of time.
> Do you have any comments or suggestions?
>
> -Thanks

Hi @bjethwan, absolutely. You can install Harbor locally in your k8s clusters using our Helm chart deployment (a sketch follows below). Then you can configure Harbor to host your images as they come out of your CI/CD pipeline, or you can configure Harbor replication so that Harbor caches locally all the images you need in that Kubernetes cluster. Take a look at the replication capabilities in our docs, and let us know if you have any more questions. Thanks!
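If it helps, here is a minimal sketch of the Helm-based install (Helm v3 syntax; the release name, namespace, and hostname are placeholders, and the authoritative values live in the chart at https://github.com/goharbor/harbor-helm):

```shell
# Add the official Harbor chart repository and install into its own namespace.
# harbor.example.com is a hypothetical hostname; adjust the values for your cluster.
helm repo add harbor https://helm.goharbor.io
helm repo update
helm install my-harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set externalURL=https://harbor.example.com \
  --set expose.ingress.hosts.core=harbor.example.com
```

From there, replication rules can be configured in the Harbor UI or API so each cluster pre-pulls the images it needs.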

@burdzwastaken

burdzwastaken commented Nov 4, 2019

👋 @michmike and the rest of the Harbor team! I work on the Core Platform team at MuleSoft, a Salesforce company, and we have been using Harbor in production since v1.7.0, deployed on Kubernetes using a forked version of the Helm chart. We currently run two instances of Harbor: one development test bed for our team and one in production, which serves images to all of our environments. Our production instance is consumed by around 400+ end users, who are authenticated via LDAP (we are testing OIDC, but before we go live we require group scoping for projects, which I believe is targeted for v1.10.X, goharbor/harbor#8017). As far as replication goes, we are looking to use Harbor to replicate to ECR in all of our environments once we are able to upgrade to v1.9.X.

Here are some numbers that I was able to pull from our prod instance:

  • Number of unique images: ~1,000
  • Number of Helm charts: ~300
  • Storage size: ~5.5 TB
  • Pull operations: ~17 million

We did a PoC to store our Helm charts within Harbor, but due to how we structure projects and ChartMuseum's depth being static, our index-cache.yaml grew too large to be consumed efficiently, so we decided to wait until OCI storage of charts is supported. On the topic of OCI image support, we are really excited by the possibility of storing images/charts/OPA policies all in Harbor in the future.

Migrating to Harbor has been a fairly positive experience for our end users; however, the sheer number of images being stored has made the UI close to unusable at all times (I know there are a few open issues to address this).

I would like to thank you for supporting a great project and we are always looking for ways we can contribute to Harbor.

@michmike
Contributor Author

michmike commented Nov 5, 2019

> efficiently

@burdzwastaken your team is a Harbor power user! That's a great testimonial, and we look forward to working with your team in the future. If you have links to the exact UI tickets your team filed, please paste them here for our team to revisit and discuss with you. Cheers!

@burdzwastaken

@michmike here are a few tickets covering the behaviour we were seeing:
goharbor/harbor#6314
goharbor/harbor#9719

However, I am pleased to report that after upgrading to v1.9.2 tonight and removing ChartMuseum (while we await Helm v3 & OCI support in Harbor), we have noticed a huge performance improvement in the UI. It is extremely responsive, and we have had no reported issues.

@stonezdj stonezdj unpinned this issue Dec 13, 2019
@guillaumelfv

guillaumelfv commented Dec 20, 2019

Hi! I work at Agoda, an OTA company that is part of Booking Holdings.

We are running Harbor (v1.7.1) in production right now. We have 5 Harbor registries across 5 different datacenters. Each of them runs in HA and is deployed through Ansible:

  • Ceph using the Swift API as backend
  • External PostgreSQL cluster
  • External Redis cluster
  • 2 HAProxy instances
  • LDAP auth

We replicate images from the main datacenter to the other 4 to enable multi-DC deployments. All registries are deployed with Clair. We are currently not using the Harbor Helm chart repository. We planned to use Notary, but it did not support HA at the time, so Harbor is currently deployed without it.

We plan to upgrade to the latest available version at the beginning of 2020. Right now we run a lot of custom scripts, as we are missing some features from the latest available version (retention being the main one).

In terms of numbers, we can disclose:

  • Number of images: > 8000 tags in 1141 repositories in 75 projects (a rough low estimate, as right now we face goharbor/harbor#6314 in the UI and API; I will update if we find a fix)
  • Storage size consumed by Harbor (main DC): 28 TB

Our 3 main issues are:

  • Inter-DC replication failure/resiliency and retry behaviour not fitting our needs
  • GC taking too long on the Ceph backend (~19h to complete), blocking any push
  • UI/API slowness, with a few projects impossible to use because there are too many tags to retrieve/list

Feel free to ask me anything if you need more info about our setup! Thanks for supporting the Harbor project!

@michmike michmike pinned this issue Jan 9, 2020
@mmpei mmpei unpinned this issue Jan 14, 2020
@michmike michmike pinned this issue Jan 15, 2020
@Vad1mo
Member

Vad1mo commented Feb 25, 2021

Hello @michmike
We are trying to scale a SaaS around Harbor with container-registry.com, in the hope of keeping us afloat and one day contributing more to the project. We used Portus (a fork) in the past and only recently switched to Harbor (not a fork).

We have a shared instance, c8n.io, and a dozen dedicated clusters with Harbor so far; we are working on a rollout for a cloud provider with 5k clients.

> • Number of container images and Helm Charts under management

Not sure.

> • Storage size consumed by Harbor

Not sure, but it's not much, maybe a few TB. We expected more per customer.

> • Number of end users (i.e. developers)

At the moment we have 100+ devs using our service.

@shahidv3

> Hi! I work at Agoda, an OTA company that is part of Booking Holdings.
>
> We are running Harbor (v1.7.1) in production right now. We have 5 Harbor registries across 5 different datacenters. Each of them runs in HA and is deployed through Ansible:
>
> • Ceph using the Swift API as backend
> • External PostgreSQL cluster
> • External Redis cluster
> • 2 HAProxy instances
> • LDAP auth
>
> We replicate images from the main datacenter to the other 4 to enable multi-DC deployments. All registries are deployed with Clair. We are currently not using the Harbor Helm chart repository. We planned to use Notary, but it did not support HA at the time, so Harbor is currently deployed without it.
>
> We plan to upgrade to the latest available version at the beginning of 2020. Right now we run a lot of custom scripts, as we are missing some features from the latest available version (retention being the main one).
>
> In terms of numbers, we can disclose:
>
> • Number of images: > 8000 tags in 1141 repositories in 75 projects (a rough low estimate, as right now we face goharbor/harbor#6314 in the UI and API; I will update if we find a fix)
> • Storage size consumed by Harbor (main DC): 28 TB
>
> Our 3 main issues are:
>
> • Inter-DC replication failure/resiliency and retry behaviour not fitting our needs
> • GC taking too long on the Ceph backend (~19h to complete), blocking any push
> • UI/API slowness, with a few projects impossible to use because there are too many tags to retrieve/list
>
> Feel free to ask me anything if you need more info about our setup! Thanks for supporting the Harbor project!

Thanks @guillaumelfv for the details. Can you please let us know how many developers are consuming Harbor?

@guillaumelfv

guillaumelfv commented Apr 2, 2021

Hi @shahidv3 !

We recently migrated from 1.10.1 to 2.1.3. We still deploy on VMs in an HA setup with Ansible, and we also made the following changes, just FYI:

  • Added Redis Sentinel for HA
  • Put PgBouncer in front of the external PostgreSQL
  • Stopped relying on Harbor replication; we replaced every Harbor instance except the main one with Kraken and will use preheating in the future

Updated numbers:

  • ~130 projects
  • ~2500 images
  • ~1500 developers

We have issues right now with the Prometheus exporter https://github.com/c4po/harbor_exporter, so we are missing some data (number of tags, pulls per minute, ...). Hopefully the internal metrics Harbor will soon provide will work better.

@yanji09 yanji09 unpinned this issue Jun 4, 2021
@Timosha

Timosha commented Aug 26, 2021

> • Put PgBouncer in front of the external PostgreSQL

@guillaumelfv Do you use transaction or session mode in PgBouncer?

@guillaumelfv

guillaumelfv commented Aug 31, 2021

@Timosha I just checked, and we use the default mode, so I think it is set to session mode:

;pool_mode = session

But we had to increase max_client_conn and default_pool_size:

pg_bouncer_max_connections: 2500

pg_bouncer_default_pool_size: 150

Otherwise we would see the pool being depleted, causing connections to be queued.
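For anyone tuning a similar setup, PgBouncer's admin console can show queued clients directly; `SHOW POOLS` is a standard PgBouncer admin command, while the host, port, and user below are hypothetical placeholders:

```shell
# Connect to PgBouncer's virtual "pgbouncer" admin database and inspect pool
# stats; a non-zero cl_waiting column means client connections are being queued.
# pgbouncer.internal, port 6432, and the admin user are placeholders.
psql -h pgbouncer.internal -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'
```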

@mnnxp

mnnxp commented Jun 13, 2023

Thanks to the Harbor Core team for this great solution! :)
