
Private network interface is not setup #64

Open
tmoreira2020 opened this issue Jul 10, 2016 · 6 comments

Comments

@tmoreira2020

Hi there, I'm trying to create two machines in the same datacenter and leverage the private networking between them, but it seems the plugin only allocates the private IP; it doesn't configure the interface on the Linode being created. Am I correct? Is that what the TODO comment here refers to? https://github.com/displague/vagrant-linode/blob/master/lib/vagrant-linode/actions/create.rb#L195
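Until the plugin configures the interface itself, the allocated private address can be brought up by hand. A minimal sketch for a CentOS-style image like the one shown later in this thread — the address below is a hypothetical placeholder for whatever private IP the plugin (or the Linode Manager) actually allocated:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0:1
# Hypothetical values -- substitute the private IP the plugin allocated.
# Linode's private range in this thread is 192.168.128.0/17,
# i.e. netmask 255.255.128.0.
DEVICE=eth0:1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.205.126
NETMASK=255.255.128.0
```

After writing the file, `ifup eth0:1` (or a reboot) should bring the alias up.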

@displague
Owner

Correct. We should probably make Network Helper a default-enabled option to avoid this problem.

@displague
Owner

displague commented Jul 12, 2016

You should get this for free if you enable it account wide: https://www.linode.com/docs/platform/network-helper#modify-global-network-helper-settings

@tmoreira2020
Author

Ok. I tried it with Network Helper enabled, but once the Vagrant shell provisioning tries to download an RPM package it fails with "Could not resolve host: mirrors.linode.com; Unknown error". The full error message is at https://gist.github.com/tmoreira2020/30b7eda99b455008efb741ad3a266240. The same code works perfectly fine on DigitalOcean and AWS.

@displague
Owner

displague commented Jul 20, 2016

Sorry for the late response, I missed this somehow.

That shouldn't happen. Can you provide the output of these commands (from the created Linode):

cat /etc/resolv.conf
ip a
ip r

Thanks.

@tmoreira2020
Author

There you go, thanks

cat /etc/resolv.conf

[root@green-www ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search thiagomoreira.com.br


# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com

ip a

[root@green-www ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 5a:bc:de:3f:4c:bd brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f2:3c:91:79:b8:5b brd ff:ff:ff:ff:ff:ff
    inet 45.33.73.26/24 brd 45.33.73.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.205.126/17 brd 192.168.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2600:3c03::f03c:91ff:fe79:b85b/64 scope global dynamic
       valid_lft 279sec preferred_lft 39sec
    inet6 fe80::f03c:91ff:fe79:b85b/64 scope link
       valid_lft forever preferred_lft forever
4: teql0: <NOARP> mtu 1500 qdisc noop state DOWN qlen 100
    link/void
5: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
6: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
7: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: ip_vti0@NONE: <NOARP> mtu 1428 qdisc noop state DOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
9: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN qlen 1
    link/tunnel6 :: brd ::
10: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1
    link/sit 0.0.0.0 brd 0.0.0.0
11: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1
    link/tunnel6 :: brd ::
12: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN qlen 1
    link/[823] 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

ip r

[root@green-www ~]# ip r
default via 45.33.73.1 dev eth0
45.33.73.0/24 dev eth0  proto kernel  scope link  src 45.33.73.26
169.254.0.0/16 dev eth0  scope link  metric 1003
192.168.128.0/17 dev eth0  proto kernel  scope link  src 192.168.205.126

@displague
Owner

The only way I was able to reproduce something like that was by running systemctl restart NetworkManager, which clobbered the resolv.conf that Linode's Network Helper dropped in place.

Is it possible that your Puppet config is changing things? Can you try bringing the machine up without the provisioning and see whether your resolv.conf and networking look fine under simpler conditions?
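If NetworkManager keeps regenerating a resolv.conf with no nameservers, one workaround is to pin resolvers in the interface config, as the comment in the generated resolv.conf above suggests. A sketch — the addresses below are Google's public resolvers, used only as placeholder examples; the datacenter resolvers listed in the Linode Manager would be the more typical choice:

```ini
# Append to /etc/sysconfig/network-scripts/ifcfg-eth0 (example values).
DNS1=8.8.8.8
DNS2=8.8.4.4
PEERDNS=yes
```

With these set, restarting NetworkManager should regenerate /etc/resolv.conf with matching nameserver lines instead of an empty one.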
