
get_ipv6_addr does not actually check if addresses are dynamic #877

Open
rhxto opened this issue Feb 29, 2024 · 2 comments · May be fixed by #878

@rhxto

rhxto commented Feb 29, 2024

I think this if condition is not correct.
I'm using the ceph-osd charm, and it fails the mon-relation-changed hook saying that there is no valid address.
Here are the logs:

2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60 Traceback (most recent call last):
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 978, in <module>
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     hooks.execute(sys.argv)
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/core/hookenv.py", line 963, in execute
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     self._hooks[hook_name]()
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 730, in mon_relation
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     emit_cephconf()
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 506, in emit_cephconf
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     context = get_ceph_context(upgrading)
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 470, in get_ceph_context
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     dynamic_ipv6_address = get_ipv6_addr()[0]
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/contrib/network/ip.py", line 347, in iface_sniffer
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     return f(*args, **kwargs)
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60   File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/contrib/network/ip.py", line 415, in get_ipv6_addr
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60     raise Exception("Interface '%s' does not have a scope global "
2024-02-29 14:47:45 WARNING unit.ceph-osd/0.mon-relation-changed logger.go:60 Exception: Interface 'br-ens2' does not have a scope global non-temporary ipv6 address.
2024-02-29 14:47:45 ERROR juju.worker.uniter.operation runhook.go:180 hook "mon-relation-changed" (via explicit, bespoke hook script) failed: exit status 1
2024-02-29 14:47:45 INFO juju.worker.uniter resolver.go:161 awaiting error resolution for "relation-changed" hook

The interface the logs refer to has a global dynamic address, a (static) global address, and a link-local address.
get_ipv6_addr was correctly excluding the link-local one; dynamic_only is oddly defaulted to True (which, as #554 argues, it should not be), so the function should have returned the global dynamic address.
The original condition only accepts an address if either both dynamic and non-dynamic addresses are present, or, when only dynamic addresses are found, they end with the EUI-64 form of the MAC address. That suffix only appears on some SLAAC-assigned addresses, so the check breaks DHCPv6.
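To illustrate the kind of check I'd expect, here is a minimal sketch (not the actual charmhelpers code; the helper name and exact flag handling are my own assumptions) that parses `ip -6 addr show` output and keeps every scope-global, non-temporary address, treating the `dynamic` flag as sufficient on its own without requiring an EUI-64 suffix:

```python
import re
import subprocess


def global_ipv6_addrs(iface, dynamic_only=False):
    """Illustrative sketch: list scope-global, non-temporary IPv6 addresses
    on ``iface``, optionally keeping only the ones flagged ``dynamic``."""
    out = subprocess.check_output(['ip', '-6', 'addr', 'show', 'dev', iface],
                                  universal_newlines=True)
    addrs = []
    for line in out.splitlines():
        line = line.strip()
        m = re.match(r'inet6 ([0-9a-fA-F:]+)/\d+ scope global(.*)', line)
        if not m:
            continue  # link-local and non-address lines are skipped here
        flags = m.group(2)
        if 'temporary' in flags:
            continue  # skip RFC 4941 privacy (temporary) addresses
        if dynamic_only and 'dynamic' not in flags:
            continue
        addrs.append(m.group(1))
    return addrs
```

With a check along these lines, a DHCPv6-assigned address (flagged `dynamic` but without an EUI-64 suffix) would still be returned, which is what the deployment above needs.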

@rhxto rhxto linked a pull request Feb 29, 2024 that will close this issue
@rhxto
Author

rhxto commented Mar 8, 2024

I'm running into this issue with the openstack-loadbalancer charm as well.
After copying the patched ip.py file, that charm works too.

unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install Traceback (most recent call last):
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/hooks/install.real", line 767, in <module>
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     hooks.execute(sys.argv)
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/charmhelpers/core/hookenv.py", line 963, in execute
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     self._hooks[hook_name]()
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/hooks/install.real", line 160, in install
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     if emit_corosync_conf():
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/hooks/utils.py", line 362, in emit_corosync_conf
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     corosync_conf_context = get_corosync_conf()
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/hooks/utils.py", line 258, in get_corosync_conf
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     'ha_nodes': get_ha_nodes(),
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/hooks/utils.py", line 476, in get_ha_nodes
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     addr = get_ipv6_addr()
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/hooks/utils.py", line 454, in get_ipv6_addr
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     return utils.get_ipv6_addr(exc_list=excludes)[0]
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/charmhelpers/contrib/network/ip.py", line 347, in iface_sniffer
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     return f(*args, **kwargs)
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install   File "/var/lib/juju/agents/unit-openstack-loadbalancer-hacluster-0/charm/charmhelpers/contrib/network/ip.py", line 415, in get_ipv6_addr
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install     raise Exception("Interface '%s' does not have a scope global "
unit-openstack-loadbalancer-hacluster-0: 13:06:37 WARNING unit.openstack-loadbalancer-hacluster/0.install Exception: Interface 'eth1' does not have a scope global non-temporary ipv6 address.

@rhxto
Author

rhxto commented Mar 13, 2024

mysql-innodb-cluster is also affected.
Could anyone have a look at the pull request?
