
X-Forwarded-For does not contain the IP of the original caller when caddy in docker and client uses IPv6 #4339

Closed
bb opened this issue Sep 10, 2021 · 6 comments
Labels
needs info 📭 Requires more information

Comments


bb commented Sep 10, 2021

This might be a follow-up to #3661, as my observations are at least similar, if not the same.

Caddy Version

v2.4.5 h1:P1mRs6V2cMcagSPn+NWpD+OEYUYLIf6ecOa48cFGeUg= (issue also present in earlier versions)
running the standard docker container

Host is Ubuntu 20.04.3 LTS
Docker Version 20.10.7

Configs

DNS

testing.example.org has BOTH an A record (IPv4) and an AAAA record (IPv6). (Domain name obfuscated, obviously.)

CaddyFile

{
	admin caddy:2019
	servers :443 {
		protocol {
			experimental_http3
		}
	}

	servers :80 {
		protocol {
			allow_h2c
		}
	}
}

testing.example.org {
	reverse_proxy testingcontainer
}

/etc/docker/daemon.json

(Be sure to enable ipv6 here...)

{
  "metrics-addr" : "127.0.0.1:9323",
  "experimental" : true,
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "live-restore": true,
  "registry-mirrors": ["https://mirror.gcr.io"]
}

compose.yaml

version: "3.9"

services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    container_name: caddy
    ports:
      - "80:80"
      - "80:80/udp"
      - "443:443"
      - "443:443/udp"
      - "127.0.0.1:2019:2019"
    volumes:
      - ./data/etc_caddy:/etc/caddy
      - ./data/caddy_data:/data
      - ./data/caddy_config:/config
      - ./logs:/logs
    networks:
      - reverseproxy-net

networks:
  reverseproxy-net:
    name: reverseproxy-net

Partial output of docker inspect for caddy and reverseproxy-net:

"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.10",

So .10 is the upstream address which the target service actually sees when Caddy connects; .1 is the Docker-internal virtual gateway.

Reverse Proxy Target container

Nginx

I initially observed the issue in a container based on phusion/passenger-ruby30 (which is basically an nginx listening on :80), where I modified the log format:

	log_format  main  'remote_addr: $remote_addr forwarded-for: $http_x_forwarded_for - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent"';
	access_log /var/log/nginx/access.log main;

Netcat

To be sure it's not nginx modifying something without my knowledge, I also tried nc.

while true; do echo -e "HTTP/1.1 200 OK\n\n $(date)\n\n\n" | nc -l  80; done

(This does not work as well as I expected; I had to ctrl-c and restart it after basically every request.)

Testing and observed results

Client:
(curlie is just a wrapper around curl, think curl -i plus pretty printing)

# request using IPv6
curlie https://testing.example.org
# request using IPv4
curlie --ipv4 https://testing.example.org

Nginx

First line is requested with IPv6, second line with IPv4.

==> /var/log/nginx/access.log <==
remote_addr: 172.18.0.10 forwarded-for: 172.18.0.1 - [10/Sep/2021:15:56:15 +0200] "GET / HTTP/1.1" 200 7953 "-" "curl/7.64.1"
remote_addr: 172.18.0.10 forwarded-for: 94.123.123.123 - [10/Sep/2021:15:57:15 +0200] "GET / HTTP/1.1" 200 7627 "-" "curl/7.64.1"

(real client IP changed to 94.123.123.123 in second line above)

When the client does a request, the remote addr is always the caddy container's IP, as expected. ✅
When the client's request is done using IPv4, the X-Forwarded-For header is set correctly. ✅
When the client's request is done using IPv6, the X-Forwarded-For header contains the Docker-internal network gateway's IP, not the client IP. ❌

Netcat

This is just for completeness, to rule out nginx as the culprit:

# client requests using IPv6
curlie https://testing.example.org

# server:
GET / HTTP/1.1
Host: testing.example.org
User-Agent: curl/7.64.1
Accept: application/json, */*
X-Forwarded-For: 172.18.0.1
X-Forwarded-Proto: https
Accept-Encoding: gzip

❌ IPv6: wrong forwarded-for

# client requests using IPv4
curlie --ipv4 https://testing.example.org


# server:
GET / HTTP/1.1
Host: testing.example.org
User-Agent: curl/7.64.1
Accept: application/json, */*
X-Forwarded-For: 94.123.123.123
X-Forwarded-Proto: https
Accept-Encoding: gzip

✅ IPv4: correct forwarded-for

@francislavoie
Member

Please enable the debug global option in Caddy, and check the container's logs. You can see the addresses and headers from Caddy's perspective.

I'm not sure I understand the issue here, I'd need to see the logs to better understand.

Keep in mind that when using Docker, it may use a userland proxy, which makes the remote address on TCP connections look like it's coming from Docker itself and not from the real client.
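For reference, a minimal sketch of enabling that option (the global options block goes at the top of the Caddyfile):

```Caddyfile
{
	# Global options block: turn on debug-level logging
	debug
}
```

Then follow the container's logs, e.g. with docker logs -f caddy.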

francislavoie added the needs info 📭 Requires more information label Sep 10, 2021

bb commented Sep 10, 2021

Thanks to your pointer, I took a closer look at the Docker networking. You're right, Docker does some 'magic': exposing a port on the host while the container is in a bridge network seems to be transparently forwarded for IPv4 but not for IPv6, so Caddy could only see the Docker gateway.

I created a separate Caddy-in-Docker instance on :81 because I didn't want to stop the main server and first reproduced the issue described above:

Here's the debug log from docker-compose logs -ft when the Caddyfile contains the global debug option:

IPv6:

caddytesting | 2021-09-10T17:36:22.201061988Z {"level":"debug","ts":1631295382.200958,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"testingcontainer:80","request":{"remote_addr":"172.18.0.1:50044","proto":"HTTP/1.1","method":"GET","host":"testing.example.org:81","uri":"/","headers":{"X-Forwarded-Proto":["http"],"X-Forwarded-For":["172.18.0.1"],"User-Agent":["curl/7.64.1"],"Accept":["application/json, */*"]}},"headers":{},"status":200}

IPv4:

caddytesting | 2021-09-10T17:37:04.499585805Z {"level":"debug","ts":1631295424.4994848,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"testingcontainer:80","request":{"remote_addr":"94.123.123.123:23551","proto":"HTTP/1.1","method":"GET","host":"testing.example.org:81","uri":"/","headers":{"X-Forwarded-For":["94.123.123.123"],"User-Agent":["curl/7.64.1"],"Accept":["application/json, */*"],"X-Forwarded-Proto":["http"]}},"headers":{},"status":200}

Then I switched to network_mode: host and the X-Forwarded-For header shows an IPv6 address as expected 😊

So, it's not a Caddy issue but an issue due to that Docker proxy. Sorry for bothering you.

bb closed this as completed Sep 10, 2021

bb commented Sep 12, 2021

For those struggling to set up the same thing correctly, I want to share the key learnings which made it work for me:

Docker Daemon

  1. Enable not only ipv6 but also ip6tables in /etc/docker/daemon.json
  2. Disable userland-proxy in /etc/docker/daemon.json (exactly what @francislavoie mentioned above; this was the part which made IPv6 "work", but not the way it should).
  3. Restart dockerd. Reloading is not enough, as only a small subset of the settings are reload-capable.
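Since a stray comma in /etc/docker/daemon.json can keep dockerd from starting after the restart, it may be worth syntax-checking the file first. A minimal sketch using Python's stdlib json.tool (demonstrated on an inline snippet here; in practice, run python3 -m json.tool /etc/docker/daemon.json):

```shell
# Syntax-check daemon.json-style JSON before restarting dockerd;
# `python3 -m json.tool` exits non-zero on invalid JSON.
printf '%s' '{"ipv6": true, "ip6tables": true, "experimental": true}' \
  | python3 -m json.tool > /dev/null && echo "valid JSON"
# prints: valid JSON
```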

This is my docker host's /etc/docker/daemon.json:

{
  "metrics-addr" : "127.0.0.1:9323",
  "experimental" : true,
  "ipv6": true,
  "ip6tables": true,
  "userland-proxy": false,
  "fixed-cidr-v6": "fd00:1234:5678::/48",
  "live-restore": true,
  "registry-mirrors": ["https://mirror.gcr.io"]
}

I think any ULA is fine for fixed-cidr-v6. metrics-addr, live-restore, and registry-mirrors are not relevant to this topic, but maybe you're interested.

Caddy network in docker-compose

  1. You need to add enable_ipv6: true to the network
  2. You need to add a subnet to the network, otherwise you'll get the error ERROR: could not find an available, non-overlapping IPv6 address pool among the defaults to assign to the network.
    I don't know whether the subnet should be within the fixed-cidr-v6 range above or not; I tried with a different one and it worked.

version: "3.9"

services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    container_name: caddy
    ports:
      - "80:80"
      - "80:80/udp"
      - "443:443"
      - "443:443/udp"
      - "127.0.0.1:2019:2019"
    volumes:
      - ./data/etc_caddy:/etc/caddy
      - ./data/caddy_data:/data
      - ./data/caddy_config:/config
      - ./logs:/logs
    networks:
      - caddy
networks:
  caddy:
    name: caddy
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
      - subnet: fd0c:add1::/56

Additional information

I found the README of https://github.com/robbertkl/docker-ipv6nat/ very helpful, just to learn the background. Thanks a lot @robbertkl

Luckily, the ipv6nat project is going to become obsolete, and everything already worked for me using only Docker's built-in functionality. See robbertkl/docker-ipv6nat#65 for details (and possible pitfalls, e.g. with WireGuard).

@polarathene

This response is just an update on the current state of Docker, in case it is helpful to anyone 👍

Summary

  • userland-proxy: false does not seem to be required / helpful.
  • ipv6 + fixed-cidr-v6 are only required in /etc/docker/daemon.json if you need IPv6 support on the default Docker bridge (docker-compose does not use that as a default network).
  • ip6tables: true and a network with an IPv6 subnet enabled are required.

Below documents the environment used and provides several configs / examples that may clear up any concerns or confusion for netizens landing here :)


Disable userland-proxy in /etc/docker/daemon.json, this was the part which made IPv6 "work" but not the way it should be.

I'm not able to see a benefit for userland-proxy: false.

This is with an IPv6 capable VPS (Vultr) running Ubuntu 22.10 with Docker Engine 20.10.22 (Dec 2022, containerd 1.6.14, runc 1.1.4) and Docker Compose 2.14.1 (this is the newer docker compose command, not the previous docker-compose command it replaced and now aliases AFAIK).

There don't appear to be any additional fixes at a glance in the Docker Engine release notes since the 20.10.7 (June 2021) version reported at the top of this issue. So I'm not sure why I don't have an issue with the Docker daemon default userland-proxy: true mentioned here.


userland-proxy: false gotchas

This is an example of userland-proxy: false not working locally when querying from a NIC with an externally reachable IP (it also causes http://[::1] to time out).

  • However, it doesn't make a difference with requests from external clients, so not really relevant?
  • The below example uses a traefik container, but similar can be observed with Caddy (shown further down).
# Replace IP 192.168.1.42 with one relevant on your machine:
$ echo '{ "userland-proxy": true }' > /etc/docker/daemon.json \
  && systemctl restart docker \
  && docker run --rm -d -p 80:80 traefik/whoami \
  && curl -s http://192.168.1.42 | grep RemoteAddr

# Correct output for `userland-proxy: true` (Interface IP, loopback however will be the Docker network gateway IP):
RemoteAddr: 192.168.1.42


$ echo '{ "userland-proxy": false }' > /etc/docker/daemon.json \
  && systemctl restart docker \
  && docker run --rm -d -p 80:80 traefik/whoami \
  && curl -s http://192.168.1.42 | grep RemoteAddr

# Correct output for `userland-proxy: false` (Docker network gateway IP):
RemoteAddr: 172.23.0.1


# NOTE: If `userland-proxy: false` has been set, it will break the expectation of changing to `userland-proxy: true` due to a bug in Docker:
# https://github.com/moby/moby/issues/44721#issuecomment-1368603067
`daemon.json` config for reference

/etc/docker/daemon.json:

{ 
  "ipv6": true,
  "fixed-cidr-v6": "fd00:cafe:babe:1234::/64",
  "ip6tables": true,
  "experimental": true,
  "userland-proxy": true
}

Notes about daemon.json

  • userland-proxy: true is presently the default (and thus not necessary to add to daemon.json), but there are still open discussions about switching the default to disabled in the future.
  • experimental: true is presently required for ip6tables: true to work. These two are really all you need here.
  • ipv6 and fixed-cidr-v6 are only required to configure an IPv6 subnet for the default Docker bridge network. When using docker-compose this is not used for the default network it creates.
  • For the IPv6 ULA, fdxx:xxxx:xxxx::/48 covers the routing prefix; the next 16 bits define the subnet ID, so fdxx:xxxx:xxxx:yyyy::/64 is preferable. All containers then connect with an interface ID as part of their address within that assigned yyyy subnet (as the IPv6 ULA Wikipedia example shows).
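To make that structure concrete, here's a hedged sketch that generates a random RFC 4193 ULA /48 routing prefix ('fd' plus 40 random bits); the fd00:cafe:babe:1234::/64 prefix used throughout this comment is just one arbitrary choice of such a prefix plus a subnet ID:

```shell
# Generate a random ULA /48 routing prefix per RFC 4193:
# 'fd' + 40 random bits (10 hex digits). Append a 16-bit subnet ID
# per network to form a /64, e.g. <prefix>:0001::/64.
suffix=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
ula="fd$(echo "$suffix" | cut -c1-2):$(echo "$suffix" | cut -c3-6):$(echo "$suffix" | cut -c7-10)::/48"
echo "$ula"
```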

Testing

If you've not added ipv6: true and a fixed-cidr-v6 to the /etc/docker/daemon.json config, you can instead use a custom network with an IPv6 subnet enabled (just be sure to have ip6tables + experimental enabled in the daemon.json config).

Using just the docker CLI:

# Setup:
docker network create --ipv6 --subnet fd00:cafe:babe:1234::/64 test-network
docker run --rm -d -p 80:80 --network test-network traefik/whoami

# Verify:
# EXTERNALLY_FACING_IP here could be your IPv6 address for your server,
# eg: `[2001:19f0:7001:13c9:5400:4ff:fe41:5e06]`
curl -s "http://${EXTERNALLY_FACING_IP}" | grep RemoteAddr

or with docker-compose config:

services:
  reverse-proxy:
    image: traefik/whoami
    ports:
      - '80:80'

networks:
  # Overrides the default network created + attached for each service above
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:cafe:babe:1234::/64

# To use the `daemon.json` default bridge with IPv6 instead of the custom default network above,
# Add `network_mode: bridge` to a service instead.

ip6tables: true in /etc/docker/daemon.json handles the NAT so that the client's IPv6 address is forwarded instead of the IPv6 gateway of the Docker network the container uses.


Example with Caddy + docker-compose

caddy-data/Caddyfile:

{
  auto_https off
  debug
}

:80 {
  # debug logs will include the same response values,
  # use `docker compose logs reverse-proxy` if you prefer this:
  log {
    output stdout
  }

  reverse_proxy :3000
}

:3000 {
  respond "
Caddy received:
- Host (Server IP): {host}
- Remote (Client IP): {remote_host}
- Forwarded From (via a reverse-proxy): {header.X-Forwarded-For}
"
}

compose.yaml:

services:
  reverse-proxy:
    image: caddy:alpine
    #network_mode: host
    networks:
      - caddy
    ports:
      - '80:80'
      - '3000:3000'
    volumes:
      - ./caddy-data/Caddyfile:/etc/caddy/Caddyfile

networks:
  caddy:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:cafe:babe:1234::/64

Client requests on the local Docker host:

# `userland-proxy: false` for IPv4 and IPv6 with `ip6tables: true`:
$ curl http://45.77.178.29

Caddy received:
- Host (Server IP): 45.77.178.29
- Remote (Client IP): 127.0.0.1
- Forwarded From (via a reverse-proxy): 172.18.0.1

$ curl http://[2001:19f0:7001:13c9:5400:04ff:fe41:5e06]

Caddy received:
- Host (Server IP): [2001:19f0:7001:13c9:5400:4ff:fe41:5e06]
- Remote (Client IP): 127.0.0.1
- Forwarded From (via a reverse-proxy): fd00:cafe:babe:1234::1

# `userland-proxy: true` properly forwards the actual client IP:
$ curl http://[2001:19f0:7001:13c9:5400:04ff:fe41:5e06]

Caddy received:
- Host (Server IP): [2001:19f0:7001:13c9:5400:4ff:fe41:5e06]
- Remote (Client IP): 127.0.0.1
- Forwarded From (via a reverse-proxy): 2001:19f0:7001:13c9:5400:4ff:fe41:5e06
  • In the above output, the Remote (Client IP) field is 127.0.0.1 because the request is proxied within the same Caddy instance; if you proxied to another container instead, you would see something like the reverse-proxy container's IP: 172.18.0.3.
  • The last output is what you want to achieve, as it aligns with network_mode: host and Caddy running locally without Docker involved. The only difference in parity is that a loopback client IP isn't forwarded and instead shows up as the Docker gateway IP.
  • If you query port 3000 directly instead, you'll see the value shown as the forwarded IP as the remote IP, as expected :)

Client requests from a remote server

Regardless of userland-proxy setting, you'll find the expected Client IP is forwarded from actual remote clients too (again thanks to ip6tables in daemon.json):

# Another server from the same provider making remote requests:
$ curl 45.77.178.29

Caddy received:
- Host (Server IP): 45.77.178.29
- Remote (Client IP): 172.18.0.3
- Forwarded From (via a reverse-proxy): 167.179.89.159

$ curl http://[2001:19f0:7001:13c9:5400:04ff:fe41:5e06]

Caddy received:
- Host (Server IP): [2001:19f0:7001:13c9:5400:4ff:fe41:5e06]
- Remote (Client IP): 127.0.0.1
- Forwarded From (via a reverse-proxy): 2001:19f0:7001:1811:5400:4ff:fe42:cee8

@arazilsongweaver

This issue is likely due to Moby Issue #44408 - "Original ip6 is not passed to containers".

@polarathene

@arazilsongweaver no, you need to enable ip6tables as explained in my comment above.
