Quick Start Permission Issue: Can't Ingest #16432

Open
AlexMercedCoder opened this issue May 10, 2024 · 1 comment

Supposedly, this was fixed by https://github.com/apache/druid/pull/11299/files.

But I am still hitting the same issue: ingestion fails because Druid cannot create a directory under /opt/shared.

Here is my docker-compose.yml:

```
version: "3"

services:
  # Nessie Catalog Server Using In-Memory Store
  nessie:
    image: projectnessie/nessie:latest
    container_name: nessie
    networks:
      dremio-druid-superset:
    ports:
      - 19120:19120

  # Minio Storage Server
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=password
      - MINIO_DOMAIN=storage
      - MINIO_REGION_NAME=us-east-1
      - MINIO_REGION=us-east-1
    networks:
      dremio-druid-superset:
    ports:
      - 9001:9001
      - 9000:9000
    command: ["server", "/data", "--console-address", ":9001"]

  # Dremio
  dremio:
    platform: linux/x86_64
    image: dremio/dremio-oss:latest
    ports:
      - 9047:9047
      - 31010:31010
      - 32010:32010
    container_name: dremio
    environment:
      - DREMIO_JAVA_SERVER_EXTRA_OPTS=-Dpaths.dist=file:///opt/dremio/data/dist
    networks:
      dremio-druid-superset:

  # Apache Druid
  postgres:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5432"
    volumes:
      - metadata_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=FoolishPassword
      - POSTGRES_USER=druid
      - POSTGRES_DB=druid
    networks:
      dremio-druid-superset:

  # Need 3.5 or later for container nodes
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.5.10
    ports:
      - "2181:2181"
    environment:
      - ZOO_MY_ID=1
    networks:
      dremio-druid-superset:

  coordinator:
    image: apache/druid:29.0.1
    container_name: coordinator
    volumes:
      - druid_shared:/opt/shared
      - coordinator_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
    ports:
      - "8081:8081"
    command:
      - coordinator
    env_file:
      - environment
    networks:
      dremio-druid-superset:

  broker:
    image: apache/druid:29.0.1
    container_name: broker
    volumes:
      - broker_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8082:8082"
    command:
      - broker
    env_file:
      - environment
    networks:
      dremio-druid-superset:

  historical:
    image: apache/druid:29.0.1
    container_name: historical
    volumes:
      - druid_shared:/opt/shared
      - historical_var:/opt/druid/var
    depends_on: 
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8083:8083"
    command:
      - historical
    env_file:
      - environment
    networks:
      dremio-druid-superset:

  middlemanager:
    image: apache/druid:29.0.1
    container_name: middlemanager
    volumes:
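      # druid_shared is mounted at /opt/shared, where the indexing-logs directory creation fails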
      - druid_shared:/opt/shared
      - middle_var:/opt/druid/var
    depends_on: 
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8091:8091"
      - "8100-8105:8100-8105"
    command:
      - middleManager
    env_file:
      - environment
    networks:
      dremio-druid-superset:

  router:
    image: apache/druid:29.0.1
    container_name: router
    volumes:
      - router_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8888:8888"
    command:
      - router
    env_file:
      - environment
    networks:
      dremio-druid-superset:

  # Superset
  superset:
    image: apache/superset
    container_name: superset
    networks:
      dremio-druid-superset:
    ports:
      - 8088:8088

networks:
  dremio-druid-superset:

volumes:
  metadata_data:
  middle_var:
  historical_var:
  broker_var:
  coordinator_var:
  router_var:
  druid_shared:
```

I pretty much copied what was in the repo here: https://github.com/apache/druid/blob/29.0.1/distribution/docker/docker-compose.yml. It all starts up fine, but then I try to do a simple "paste data" ingestion of some comma-separated data:

```
day, amount
1,1
2,2
3,4
4,8
5,16
6,32
```

I get an error, and I see this in the middlemanager logs:

```
middlemanager  | 2024-05-10T21:20:59,508 INFO [forking-task-runner-1] org.apache.druid.indexing.overlord.ForkingTaskRunner - Logging task query-64c248f2-1516-422c-ba60-79d01f644de8-worker0_0 output to: var/druid/task/slot1/query-64c248f2-1516-422c-ba60-79d01f644de8-worker0_0/log
middlemanager  | 2024-05-10T21:20:59,509 DEBUG [qtp514556983-73] org.apache.druid.jetty.RequestLog - 192.168.48.4 GET //192.168.48.7:8091/druid-internal/v1/worker?counter=8&hash=1715376059501&timeout=180000 HTTP/1.1 204
middlemanager  | 2024-05-10T21:20:59,554 DEBUG [SegmentChangeRequestHistory] org.apache.druid.jetty.RequestLog - 192.168.48.4 GET //192.168.48.7:8091/druid-internal/v1/worker?counter=9&hash=1715376059503&timeout=180000 HTTP/1.1 200
middlemanager  | 2024-05-10T21:21:22,349 INFO [forking-task-runner-1] org.apache.druid.indexing.overlord.ForkingTaskRunner - Exception caught during execution
middlemanager  | org.apache.druid.java.util.common.IOE: Cannot create directory [/opt/shared/indexing-logs]
middlemanager  |        at org.apache.druid.java.util.common.FileUtils.mkdirp(FileUtils.java:488) ~[druid-processing-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.common.tasklogs.FileTaskLogs.pushTaskLog(FileTaskLogs.java:53) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.overlord.ForkingTaskRunner.waitForTaskProcessToComplete(ForkingTaskRunner.java:517) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.overlord.ForkingTaskRunner$1.call(ForkingTaskRunner.java:404) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.overlord.ForkingTaskRunner$1.call(ForkingTaskRunner.java:171) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131) ~[guava-32.0.1-jre.jar:?]
middlemanager  |        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75) ~[guava-32.0.1-jre.jar:?]
middlemanager  |        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82) ~[guava-32.0.1-jre.jar:?]
middlemanager  |        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
middlemanager  |        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
middlemanager  |        at java.lang.Thread.run(Thread.java:840) ~[?:?]
middlemanager  | 2024-05-10T21:21:22,355 INFO [forking-task-runner-1] org.apache.druid.indexing.overlord.ForkingTaskRunner - Removing task directory: var/druid/task/slot1/query-64c248f2-1516-422c-ba60-79d01f644de8-worker0_0
middlemanager  | 2024-05-10T21:21:22,449 DEBUG [SegmentChangeRequestHistory] org.apache.druid.jetty.RequestLog - 192.168.48.4 GET //192.168.48.7:8091/druid-internal/v1/worker?counter=10&hash=1715376059553&timeout=180000 HTTP/1.1 200
middlemanager  | 2024-05-10T21:21:22,471 INFO [WorkerTaskManager-NoticeHandler] org.apache.druid.indexing.worker.WorkerTaskManager - Task [query-64c248f2-1516-422c-ba60-79d01f644de8-worker0_0] completed with status [FAILED].
middlemanager  | 2024-05-10T21:21:24,053 INFO [forking-task-runner-0] org.apache.druid.indexing.overlord.ForkingTaskRunner - Exception caught during execution
middlemanager  | org.apache.druid.java.util.common.IOE: Cannot create directory [/opt/shared/indexing-logs]
middlemanager  |        at org.apache.druid.java.util.common.FileUtils.mkdirp(FileUtils.java:488) ~[druid-processing-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.common.tasklogs.FileTaskLogs.pushTaskLog(FileTaskLogs.java:53) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.overlord.ForkingTaskRunner.waitForTaskProcessToComplete(ForkingTaskRunner.java:517) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.overlord.ForkingTaskRunner$1.call(ForkingTaskRunner.java:404) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at org.apache.druid.indexing.overlord.ForkingTaskRunner$1.call(ForkingTaskRunner.java:171) ~[druid-indexing-service-29.0.1.jar:29.0.1]
middlemanager  |        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131) ~[guava-32.0.1-jre.jar:?]
middlemanager  |        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75) ~[guava-32.0.1-jre.jar:?]
middlemanager  |        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82) ~[guava-32.0.1-jre.jar:?]
middlemanager  |        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
middlemanager  |        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
middlemanager  |        at java.lang.Thread.run(Thread.java:840) ~[?:?]
middlemanager  | 2024-05-10T21:21:24,055 INFO [forking-task-runner-0] org.apache.druid.indexing.overlord.ForkingTaskRunner - Removing task directory: var/druid/task/slot0/query-64c248f2-1516-422c-ba60-79d01f644de8
middlemanager  | 2024-05-10T21:21:24,132 DEBUG [SegmentChangeRequestHistory] org.apache.druid.jetty.RequestLog - 192.168.48.4 GET //192.168.48.7:8091/druid-internal/v1/worker?counter=11&hash=1715376082442&timeout=180000 HTTP/1.1 200
middlemanager  | 2024-05-10T21:21:24,148 INFO [WorkerTaskManager-NoticeHandler] org.apache.druid.indexing.worker.WorkerTaskManager - Task [query-64c248f2-1516-422c-ba60-79d01f644de8] completed with status [FAILED].
middlemanager  | 2024-05-10T21:24:19,253 DEBUG [SegmentChangeRequestHistory] org.apache.druid.jetty.RequestLog - 192.168.48.4 GET //192.168.48.7:8091/druid-internal/v1/worker?counter=12&hash=1715376084130&timeout=180000 HTTP/1.1 200
middlemanager  | 2024-05-10T21:24:19,257 DEBUG [qtp514556983-82] org.apache.druid.jetty.RequestLog - 192.168.48.4 GET //192.168.48.7:8091/druid-internal/v1/worker?counter=13&hash=1715376259251&timeout=180000 HTTP/1.1 204
```

I'm using the exact same version of the environment file from the repo here: https://github.com/apache/druid/blob/29.0.1/distribution/docker/environment. That file points task logs at the shared volume (`druid_indexer_logs_type=file`, `druid_indexer_logs_directory=/opt/shared/indexing-logs`), which is exactly the directory the task runner fails to create.

Is there something I'm missing? I've read everything I could find, and supposedly the named volumes were changed to fix this, but I still seem to be hitting the same problem. My guess is that the druid_shared named volume is created owned by root, while the Druid processes run as a non-root user, so the mkdirp on /opt/shared/indexing-logs fails.
@AlexMercedCoder (Author) commented:

I was able to get this working by adding `user: root` to the middlemanager service. Not an ideal solution, but I'm posting it for anyone else who runs into this.
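For reference, a minimal sketch of the workaround (the service definition is the one from the compose file above; only the `user:` line is new):

```
  middlemanager:
    image: apache/druid:29.0.1
    container_name: middlemanager
    user: root  # workaround: run as root so the forked task can create /opt/shared/indexing-logs
    volumes:
      - druid_shared:/opt/shared
      - middle_var:/opt/druid/var
    # (rest of the service unchanged)
```

An alternative that avoids running Druid itself as root (untested on my end, and it assumes the image's druid user has UID/GID 1000; verify with `docker exec middlemanager id`) would be a one-shot service that fixes ownership of the shared volume before the Druid services start:

```
  # hypothetical one-shot service; the Druid services that mount druid_shared
  # would need a depends_on entry for it (ideally with
  # condition: service_completed_successfully)
  fix-shared-perms:
    image: apache/druid:29.0.1
    user: root
    volumes:
      - druid_shared:/opt/shared
    entrypoint: ["sh", "-c", "chown -R 1000:1000 /opt/shared"]
```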
