
Lightspeed app server runs into CrashLoopBackOff state #40

Open
vprashar2929 opened this issue Mar 19, 2024 · 1 comment

@vprashar2929

When the olsconfig instance is created, lightspeed-app-server crashes every time. The logs show that it fails to establish a connection with the Redis server.

App server logs: err.log
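
For reference, the crash output attached above can be collected from the previous (crashed) container instance, for example (pod name taken from the pod listing further down):

oc logs -n openshift-lightspeed lightspeed-app-server-6c69fbb795-trdln --previous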

olsconfig ConfigMap:

apiVersion: v1
data:
  olsconfig.yaml: |
    llm_providers:
    - credentials_path: /etc/apikeys/openai-api-keys/apitoken
      models:
      - name: gpt-3.5-turbo-1106
      name: OpenAI
    ols_config:
      conversation_cache:
        redis:
          host: lightspeed-redis-server.openshift-lightspeed.svc
          max_memory: 1024mb
          max_memory_policy: allkeys-lru
          port: 6379
        type: redis
      logging_config:
        app_log_level: INFO
        lib_log_level: INFO
kind: ConfigMap
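
The rendered config points the app server at the lightspeed-redis-server Service on port 6379. As a quick sanity check that the Service exists and has endpoints (assuming the Service name matches the host in the config above):

oc get svc,endpoints lightspeed-redis-server -n openshift-lightspeed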

Pods in openshift-lightspeed ns:

NAME                                                      READY   STATUS             RESTARTS        AGE
lightspeed-app-server-6c69fbb795-trdln                    0/1     CrashLoopBackOff   6 (4m17s ago)   11m
lightspeed-console-plugin-c696fc775-c6wc6                 1/1     Running            0               11m
lightspeed-console-plugin-c696fc775-pb9hn                 1/1     Running            0               11m
lightspeed-operator-controller-manager-84ffb98448-7tm9p   2/2     Running            0               20m
lightspeed-redis-server-7fd5bdff98-887mv                  1/1     Running            0               11m
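
For the CrashLoopBackOff pod, the exit code and last termination reason can be checked with a describe, e.g.:

oc describe pod -n openshift-lightspeed lightspeed-app-server-6c69fbb795-trdln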

redis-server logs:

1:C 19 Mar 2024 10:40:27.285 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 19 Mar 2024 10:40:27.285 * Redis version=7.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 19 Mar 2024 10:40:27.285 * Configuration loaded
1:M 19 Mar 2024 10:40:27.285 * monotonic clock: POSIX clock_gettime
1:M 19 Mar 2024 10:40:27.285 * Running mode=standalone, port=6379.
1:M 19 Mar 2024 10:40:27.285 * Server initialized
1:M 19 Mar 2024 10:40:27.285 * Ready to accept connections tcp
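
The Redis logs show the server started and is accepting connections, so one way to rule out basic networking is a PING from inside the cluster. This is only a sketch: it assumes redis-cli is available in the redis image and that no TLS or password is enforced on the connection:

oc exec -n openshift-lightspeed lightspeed-redis-server-7fd5bdff98-887mv -- redis-cli -h lightspeed-redis-server.openshift-lightspeed.svc -p 6379 ping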
oc get olsconfig -o yaml
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  creationTimestamp: "2024-03-19T10:40:18Z"
  generation: 1
  labels:
    app.kubernetes.io/created-by: lightspeed-operator
    app.kubernetes.io/instance: olsconfig-sample
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: olsconfig
    app.kubernetes.io/part-of: lightspeed-operator
  name: cluster
  resourceVersion: "169445"
  uid: d1357381-480f-4eda-acd8-537a2f108505
spec:
  llm:
    providers:
    - credentialsSecretRef:
        name: openai-api-keys
      models:
      - name: gpt-3.5-turbo-1106
      name: OpenAI
  ols:
    conversationCache:
      redis:
        maxMemory: 1024mb
        maxMemoryPolicy: allkeys-lru
      type: redis
    deployment:
      replicas: 1
    enableDeveloperUI: false
    logLevel: INFO
@vbelouso
Contributor

Hi @vprashar2929
The issue was fixed in #35
