**Describe the bug**
If I remove the `command` directive in the `api` section of the docker-compose file, the server fails to start.
**To Reproduce**
Steps to reproduce the behavior:
1. Remove the `command` directive from the `api` section of the docker-compose file
**Expected behavior**
Server starts.
**Aleph version**
3.15.5
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
I've already seen issues #3611 and #3606 and checked the `main` version of the docker-compose file. My current compose file is:
```yaml
---
version: "3.2"
services:
  postgres:
    image: postgres:10.0
    volumes:
      - database:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: XXXXXXXXXX
      POSTGRES_PASSWORD: XXXXXXXXX
      POSTGRES_DATABASE: XXXXXXXXXX
    restart: on-failure

  elasticsearch:
    image: ghcr.io/alephdata/aleph-elasticsearch:$ELASTICSEARCH_VERSION
    hostname: elasticsearch
    environment:
      - discovery.type=single-node
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    restart: on-failure
    env_file:
      - .env

  redis:
    image: redis:alpine
    command: ["redis-server", "--save", "3600", "10"]
    volumes:
      - redis:/data
    restart: on-failure

  ingest-file:
    image: ghcr.io/alephdata/ingest-file:$INGEST_FILE_VERSION
    tmpfs:
      - /tmp:mode=777
    volumes:
      - app:/data
    depends_on:
      - postgres
      - redis
    restart: on-failure
    environment:
      WORKER_THREADS: 0
    env_file:
      - .env

  worker:
    image: ghcr.io/alephdata/aleph:$ALEPH_VERSION
    command: aleph worker
    restart: on-failure
    depends_on:
      - postgres
      - elasticsearch
      - redis
      - ingest-file
    tmpfs:
      - /tmp
    volumes:
      - app:/data
    env_file:
      - .env

  shell:
    image: ghcr.io/alephdata/aleph:$ALEPH_VERSION
    command: /bin/bash
    depends_on:
      - postgres
      - elasticsearch
      - redis
      - ingest-file
      - worker
    tmpfs:
      - /tmp
    volumes:
      - app:/data
      # - "./mappings:/aleph/mappings"
      - "~:/host"
    env_file:
      - .env

  api:
    image: ghcr.io/alephdata/aleph:$ALEPH_VERSION
    # command: gunicorn -w 6 -b 0.0.0.0:8000 --log-level debug --log-file - aleph.wsgi:app
    expose:
      - 8000
    depends_on:
      - postgres
      - elasticsearch
      - redis
      - worker
      - ingest-file
    tmpfs:
      - /tmp
    volumes:
      - app:/data
    env_file:
      - .env
    restart: on-failure

  ui:
    image: ghcr.io/alephdata/aleph-ui-production:$ALEPH_VERSION
    depends_on:
      - api
    ports:
      - "8080:8080"
    restart: on-failure
    env_file:
      - .env

volumes:
  app:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /aleph/app
  database:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /aleph/database
  redis:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /aleph/redis
  elasticsearch:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /aleph/elasticsearch
```
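For context (not a fix for the underlying issue): the directive whose removal triggers the failure is the gunicorn line commented out in the `api` service above, so a workaround is simply to keep it present. A sketch of the relevant fragment, using the exact invocation from the compose file:

```yaml
# Workaround sketch: keep the command directive in the api service.
# This is the same gunicorn invocation that appears commented out above.
api:
  image: ghcr.io/alephdata/aleph:$ALEPH_VERSION
  command: gunicorn -w 6 -b 0.0.0.0:8000 --log-level debug --log-file - aleph.wsgi:app
  expose:
    - 8000
```

Without a `command`, the container falls back to the image's built-in `ENTRYPOINT`/`CMD`, which appears not to start the API server in this setup.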
with the following `.env` file:
```
ALEPH_VERSION=3.15.5
INGEST_FILE_VERSION=3.20.2
ELASTICSEARCH_VERSION=3bb5dbed97cfdb9955324d11e5c623a5c5bbc410
ALEPH_SECRET_KEY=XXXXXX
ALEPH_APP_TITLE=XXXXXX
ALEPH_APP_NAME=XXXXXX
ALEPH_UI_URL=XXXXXX
ALEPH_URL_SCHEME=https
ALEPH_SAMPLE_SEARCHES=Vladimir Putin:TeliaSonera
ALEPH_ADMINS=XXXXXX
ALEPH_SINGLE_USER=false
ALEPH_PASSWORD_LOGIN=false
ALEPH_OAUTH=true
ALEPH_OAUTH_HANDLER=XXXXXX
ALEPH_OAUTH_KEY=XXXXXX
ALEPH_OAUTH_SECRET=XXXXXX
ALEPH_OAUTH_METADATA_URL=XXXXX
ALEPH_OCR_DEFAULTS=eng
ALEPH_DEBUG=true
LOG_FORMAT=JSON # TEXT or JSON
PROMETHEUS_ENABLED=true
```
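Not related to the bug itself, but since the compose file interpolates `$ALEPH_VERSION`, `$INGEST_FILE_VERSION`, and `$ELASTICSEARCH_VERSION` from `.env`, it is worth confirming that every referenced variable is actually defined before suspecting the service itself. A minimal stdlib-only sketch (the helper names and the trimmed inline stand-ins for the two files are illustrative, not part of Aleph or Compose):

```python
import re

# Trimmed stand-in for the docker-compose file quoted above
COMPOSE_TEXT = """
services:
  api:
    image: ghcr.io/alephdata/aleph:$ALEPH_VERSION
  ingest-file:
    image: ghcr.io/alephdata/ingest-file:$INGEST_FILE_VERSION
"""

# Trimmed stand-in for the .env file quoted above
ENV_TEXT = """
ALEPH_VERSION=3.15.5
INGEST_FILE_VERSION=3.20.2
"""

def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, ignoring blanks and comment lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def undefined_vars(compose_text: str, env: dict) -> list:
    """Return $VARIABLE names referenced in the compose text but absent from env."""
    refs = set(re.findall(r"\$\{?([A-Z_][A-Z0-9_]*)\}?", compose_text))
    return sorted(refs - env.keys())

print(undefined_vars(COMPOSE_TEXT, parse_env(ENV_TEXT)))  # -> []
```

An empty list means every variable resolves; any name printed here would make the corresponding `image:` tag expand to an invalid reference and could also make a service fail to start.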