
Feature: Deployment and Configuration of Kubernetes Environment with External Services and openim Application #2194

Open · cubxxw opened this issue Apr 22, 2024 · 0 comments
Labels: area/workload-api/deployment, kind/feature

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've discussed this feature request in the OpenIMSDK Slack and got positive feedback

Is this feature request related to a problem?

✅ Yes

Problem Description

1. Kubernetes Environment Setup

Using the sealos run command, deploy a Kubernetes cluster with master and worker nodes, along with the necessary components: Helm, Calico, CoreDNS, and cert-manager.

# Deploy Kubernetes cluster and components
sealos run --masters 172.31.64.100 --nodes 172.31.64.101,172.31.64.102,172.31.64.103 labring/kubernetes:v1.25.6 labring/helm:v3.11.3 labring/calico:v3.24.1 labring/coredns:v0.0.1 --passwd 'Fanux#123'

# Deploy cert-manager
sealos run labring/cert-manager:v1.12.1
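Once sealos finishes, a quick sanity check with standard kubectl commands (shown below as a sketch) confirms that all nodes joined and the core add-ons are running:

# Verify nodes and core components
kubectl get nodes -o wide
kubectl get pods -n kube-system
kubectl get pods -n cert-manager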

2. Configuration of External Services

In a real production or pre-production environment, services such as Redis and MongoDB are not run inside Kubernetes; they are set up externally by the users. Kafka may be an exception and can run on Kubernetes.

# Deploy Kafka
sealos run labring/bitnami-kafka:v22.1.5 set xxx

# Deploy Istio
sealos run labring/istio:1.16.2-min
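One way to make the externally hosted Redis and MongoDB reachable from in-cluster workloads is through ExternalName Services, which map a cluster-internal DNS name to the external host. The namespace and hostnames below are placeholders, not values from this issue:

# Sketch: map in-cluster names to externally hosted services (hostnames are placeholders)
kubectl create namespace openim
kubectl create service externalname redis --namespace openim --external-name redis.internal.example.com
kubectl create service externalname mongodb --namespace openim --external-name mongodb.internal.example.com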

3. Deployment of openim

Deploy the openim application and configure the related environment variables and service connections for MongoDB, Redis, MinIO, and Kafka.

sealos run --env publicIP=192.168.10.100 labring/openim:3.6 \
  set replicas=3,sc=xxx,mongo_uri=[xxx],mongo_uname=xxx,mongo_passwd=xxxx,redis_uri=[xxxx],redis_uname=xxx,redis_passwd=xxx,minio_uri=[xxxx],minio_uname=xxxx,minio_passwd=xxx,kafka_uri=[xxxx]
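The same connection details could alternatively be kept in a Kubernetes Secret that the openim workloads reference. The key names and URIs below are illustrative placeholders, not the actual schema expected by the openim cluster image:

# Sketch: store external service credentials in a Secret (keys and values are placeholders)
kubectl create secret generic openim-external-services --namespace openim \
  --from-literal=mongo_uri='mongodb://user:pass@mongodb.internal.example.com:27017/openim' \
  --from-literal=redis_uri='redis://redis.internal.example.com:6379' \
  --from-literal=minio_uri='http://minio.internal.example.com:9000' \
  --from-literal=kafka_uri='kafka.openim.svc:9092'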

Expected Deployment Outcome

Ultimately, openim 3.6 should be deployed as three replicas, each running its 11 services normally.

openim3.6-0    11/11 running
openim3.6-1    11/11 running
openim3.6-2    11/11 running
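The state above can be checked with standard kubectl commands; the namespace mirrors the examples earlier in this issue and is an assumption:

# Verify the rollout (namespace is a placeholder)
kubectl get statefulsets,pods -n openim -o wide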

Additional Feature: Traefik Ingress

We also expect the deployment of traefik-ingress, a modern HTTP reverse proxy and load balancer for Kubernetes that simplifies the management of inbound connections and service routing.
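If no dedicated sealos cluster image is available for Traefik, it could be installed with the Helm binary already present in the cluster; the chart repository below is the official Traefik Helm repo, while the release name and namespace are illustrative:

# Sketch: install Traefik via Helm
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik --namespace traefik --create-namespace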

Solution Description

Kubernetes Cluster Deployment

The Kubernetes cluster was set up using the sealos run command, specifying the master and worker nodes, and including essential components such as Helm, Calico, CoreDNS, and cert-manager. This ensured a robust and scalable cluster capable of handling our application and service requirements.

External Services Configuration

To mimic a production environment, external services like Redis and MongoDB were set up outside of Kubernetes for enhanced performance and security. Kafka, which benefits from Kubernetes' orchestration, was deployed within the cluster alongside Istio, which manages microservice communication and traffic flow.
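To have Istio manage openim's traffic, the application namespace would be opted into sidecar injection; the namespace name below is a placeholder:

# Sketch: enable Istio sidecar injection for the openim namespace
kubectl label namespace openim istio-injection=enabled --overwrite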

Deployment of openim

The openim application was deployed with specific configurations to connect to the external MongoDB, Redis, MinIO, and Kafka services. Environment variables were set to customize the deployment to our networking, storage, and security needs. This setup is aimed at ensuring high availability and fault tolerance.

Additional Feature: Traefik Ingress

Traefik Ingress was deployed as a dynamic and efficient solution for managing inbound connections and routing them to appropriate services. Its deployment was configured to automatically discover any changes in the services or their configurations, aiding in seamless scaling and management.
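A minimal sketch of routing external traffic to openim through Traefik uses a standard Ingress with the traefik ingress class; the host, service name, and port are placeholders, not values confirmed by this issue:

# Sketch: route inbound traffic to openim through Traefik (host/service/port are placeholders)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openim-api
  namespace: openim
spec:
  ingressClassName: traefik
  rules:
  - host: openim.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: openim-api
            port:
              number: 10002
EOF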

Benefits

Enhanced Scalability and Flexibility

The deployment of Kubernetes with dynamic scaling and management capabilities allows for seamless scalability. This setup supports fluctuating workloads by automatically adjusting resources, ensuring the environment can efficiently handle increases or decreases in demand without manual intervention.

Improved Reliability and Availability

By deploying critical external services such as MongoDB and Redis outside of Kubernetes, the architecture leverages their inherent stability and optimized performance. This separation minimizes the risk of cascading failures within the Kubernetes cluster, thereby enhancing overall system reliability and ensuring high availability of services.

Streamlined Operations and Maintenance

The inclusion of Helm for package management simplifies the deployment and updates of applications, reducing operational overhead. Calico provides secure network connectivity for our containers, while CoreDNS resolves service names within the cluster, facilitating smooth inter-service communication.

Robust Security with Automated Certificates

Integrating cert-manager automates the management of TLS certificates, significantly enhancing security. This automation ensures that communications within the cluster are encrypted and authenticated, safeguarding against unauthorized data access.
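As a minimal sketch, a self-signed ClusterIssuer can bootstrap in-cluster TLS, and a Certificate resource requests a certificate from it; the names and DNS entries below are placeholders:

# Sketch: self-signed issuer plus a certificate for openim (names are placeholders)
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openim-tls
  namespace: openim
spec:
  secretName: openim-tls
  dnsNames:
  - openim.example.com
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
EOF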

Effective Service Management with Istio

Istio’s service mesh architecture offers advanced traffic management, load balancing, and secure service-to-service communication. It provides observability into the application's microservices, which is crucial for detecting and rectifying issues promptly, thus minimizing downtime.
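For example, a VirtualService could add retries and timeouts in front of an openim service; the service name and settings below are assumptions, not taken from the openim charts:

# Sketch: retries and timeouts for an openim service via Istio (names are placeholders)
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: openim-api
  namespace: openim
spec:
  hosts:
  - openim-api
  http:
  - route:
    - destination:
        host: openim-api
    retries:
      attempts: 3
      perTryTimeout: 2s
    timeout: 10s
EOF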

High Availability and Fault Tolerance with openim

Deploying openim with multiple replicas and linking it with resilient external services ensures that the application remains available even if one or more components fail. This setup provides robust fault tolerance, maintaining service continuity and data integrity.
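A PodDisruptionBudget is one way to keep a quorum of the three replicas available during node maintenance; the label selector below is a placeholder for whatever labels the openim workloads actually carry:

# Sketch: keep at least two of three openim replicas available (selector is a placeholder)
kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: openim-pdb
  namespace: openim
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: openim
EOF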

Optimized Traffic Handling with Traefik Ingress

Traefik Ingress acts as a dynamic reverse proxy and load balancer, optimizing the handling of inbound traffic. Its capability to automatically detect changes in the service or its configuration enhances responsiveness and agility in managing traffic flows, improving response times and user satisfaction.

Potential Drawbacks

No response

Additional Information

No response
