Network Policies in Kubernetes 1.9 don't work #184

Open
romeotheriault opened this issue Apr 22, 2018 · 12 comments

@romeotheriault

Hi, using k8s 1.9 with kops and Romana v2.0.2. I'm trying to apply k8s network policies but they seem to have no effect. I see that the Romana listener is picking them up and creating Romana policies, but the rules are having no effect. Do k8s network policies work with Romana 2.0.2? (I see in the Romana 2.1 feature list that support for new-style k8s network policies is an upcoming feature.)

@chrismarino
Contributor

Hi @romeotheriault, Romana is a bit behind with the Network Policy API support. There was a change in the default behavior from k8s v1.7 to v1.8 (and v1.9). Romana v2.0.2 still only supports the v1.7 style API. Here is an example of a network policy that uses the old API. https://github.com/romana/romana/blob/master/test/kubernetes-cluster/frontend-to-backend.yml

Romana v2.1 is not quite ready. Been working on other things lately.
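
For reference, an old-style policy along the lines of that linked example would look roughly like this. This is a minimal sketch only: the extensions/v1beta1 API group, the frontend/backend segment values, the namespace, and the port are assumptions for illustration, not taken from this issue.

# Sketch of a pre-v1.8-style NetworkPolicy using the default Romana segment label.
# Names, namespace, and port are illustrative only.
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      romana.io/segment: backend    # pods this policy applies to
  ingress:
  - from:
    - podSelector:
        matchLabels:
          romana.io/segment: frontend    # pods allowed to connect
    ports:
    - protocol: TCP
      port: 80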

@romeotheriault
Author

Thanks for the follow-up, @chrismarino.

Are the 'romana.io/segment' labels some special syntax that needs to be used?

e.g.

spec:
  podSelector:
    matchLabels:
      romana.io/segment: backend

I tried using my own labels, e.g.

spec:
  podSelector:
    matchLabels:
      app: webserver

and it didn't seem to take.

@chrismarino
Contributor

@romeotheriault yes, v2.0.2 does not support free-form labels defined in the spec. The default labels are 'romanaTenant' and 'romana.io/segment' and are defined at install here: https://github.com/romana/romana/blob/b383faf2884a9f3bb56f090161ba20732b0c02eb/romana-install/group_vars/kube_nodes

So, the default install allows only two labels: 'romana.io/segment' and 'romanaTenant'

Haven't run across anyone using more than two labels yet, but I can see how that would be useful, for sure. Multiple free-form labels are part of the next release, along with the latest API support.
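
For illustration only, the segment label goes in the pod's metadata (typically in the pod template of a Deployment or StatefulSet); a minimal sketch with hypothetical names:

# Minimal pod sketch carrying the default Romana segment label (names and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  labels:
    romana.io/segment: backend
spec:
  containers:
  - name: app
    image: nginx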

Not sure how to change the label names; @cgilmour would have more details on what to do.

@chrismarino
Contributor

@romeotheriault FWIW, these hard-coded labels are a remnant of Romana v1.0, which had an explicit tenancy model. Romana v2 expanded that to be more flexible, but there's still some lingering evidence of the old v1.0 model.

@romeotheriault
Author

romeotheriault commented Apr 23, 2018

Thank you. That helps, and I think I'm getting really close but for some reason none of the rules I apply are working. As a first test I'm simply trying to have a rule between two pods in the same namespace that only allows one port to be contacted from the other pod. I applied the 'romana.io/segment' label to the containers I created, and romanaTenant appears to be getting set to the namespace of the pod.

But this network policy is still allowing all communications from the dbm segment (pod) to the gw segment (pod). It's not getting restricted to only contacting port 40000 on the gw.

kind: NetworkPolicy
metadata:
  name: pol1
  namespace: sprouts
spec:
  podSelector:
    matchLabels:
      romana.io/segment: gw
      romanaTenant: sprouts
  ingress:
  - from:
    - podSelector:
        matchLabels:
          romana.io/segment: dbm
          romanaTenant: sprouts
    ports:
    - protocol: TCP
      port: 40000

Here are the pod definitions:

---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: am1edbm
  namespace: sprouts
spec:
  serviceName: ""
  replicas: 1
  selector:
    matchLabels:
      app: dbm
  template:
    metadata:
      labels:
        app: dbm
        romana.io/segment: dbm
    spec:
      containers:
      - name: dbm
        image: 4730230443333298.dkr.ecr.us-east-1.amazonaws.com/k8sdbm:latest


---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: am1edbm
  namespace: sprouts
spec:
  serviceName: ""
  replicas: 1
  selector:
    matchLabels:
      app: gw
  template:
    metadata:
      labels:
        app: gw
        romana.io/segment: gw
    spec:
      containers:
      - name: gw
        image: 473033323044298.dkr.ecr.us-east-1.amazonaws.com/k8sgw:latest


And the romana block list output:

root@ip-10-33-16-10:~# romana block list
Block List
Block CIDR      Block Host                              RevisionBlock Tenant    Block Segment   Block Allocated IP Count
100.97.0.0/29                                           6                                       0
100.97.0.8/29   ip-10-33-16-99.aws.1010data.cloud       9       kube-system     default         3
100.97.0.16/29                                          8                                       0
100.98.0.0/29   ip-10-33-16-10.aws.1010data.cloud       21      sprouts         gw              1
100.98.0.8/29   ip-10-33-16-10.aws.1010data.cloud       5       cmh01           default         1
100.98.0.16/29  ip-10-33-16-10.aws.1010data.cloud       8       default         webserver       2
100.98.0.24/29  ip-10-33-16-10.aws.1010data.cloud       3       sprouts         dbm             1
100.98.0.32/29  ip-10-33-16-10.aws.1010data.cloud       1       sprouts         compute         1

Am I missing something obvious?

@romeotheriault
Author

Bit more info:

root@ip-10-33-16-10:~# romana policy list
Policy List
Policy Id                                               Direction       Applied to      No of Peers     No of Rules     Description
AllowAllPods2Talk_51083c96-45c0-11e8-ba75-0e2200365272_ ingress         1               1               1
AllowAllPods2Talk_517efd39-45c0-11e8-ba75-0e2200365272_ ingress         1               1               1
AllowAllPods2Talk_536e03a5-45c0-11e8-ba75-0e2200365272_ ingress         1               1               1
AllowAllPods2Talk_a35fee4a-45c3-11e8-ba75-0e2200365272_ ingress         1               1               1
AllowAllPods2Talk_a80c863a-45d4-11e8-ba75-0e2200365272_ ingress         1               1               1
kube.sprouts.pol1.dfdd26b6-468b-11e8-ba75-0e2200365272  ingress         1               1               1


root@ip-10-33-16-10:~# romana policy show kube.sprouts.pol1.dfdd26b6-468b-11e8-ba75-0e2200365272
Policy Details
Policy Id:      kube.sprouts.pol1.dfdd26b6-468b-11e8-ba75-0e2200365272
Direction:      ingress
Description:
Applied To:
Peer:
Cidr:
Destination:
TenantID:       sprouts
SegmentID:      gw
Peers:
Peer:
Cidr:
Destination:
TenantID:       sprouts
SegmentID:      dbm
Rules:
Protocol:       tcp
IsStateful:     false
Ports:          [40000]
PortRanges:     []
IcmpType:       0
IcmpCode:       0

@cgilmour
Collaborator

Hi @romeotheriault,

The latest version of Romana still uses the v1.7 approach to policies, which required an annotation on the namespace:
kubectl annotate --overwrite namespace sprouts 'net.beta.kubernetes.io/networkpolicy={"ingress": {"isolation": "DefaultDeny"}}'

This will trigger deletion of the AllowAllPods2Talk_xxxxxx policy for that namespace, and only traffic from your additional policies will be permitted.
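
To double-check that the annotation landed, something like the following should show it (an illustrative check, not a required step):

kubectl get namespace sprouts -o yaml | grep networkpolicy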

Can you try that and report back?

Thanks
Caleb

@romeotheriault
Author

Hi @cgilmour, thanks for the tip. I gave that a shot and it did indeed remove the AllowAllPods2Talk_xxxx policy for that namespace. But even after it's gone, all of the pods within that namespace (and pods in other namespaces) can still talk to the pods in that namespace.

@chrismarino
Contributor

chrismarino commented Apr 23, 2018

@romeotheriault send an email to info@romana.io to get an invite to Romana's Slack if you want.

@cgilmour
Collaborator

From our discussion on Slack, there were some rules that effectively accept traffic before Romana's policy rules have a chance to make decisions.

(from iptables-save)

-A FORWARD -p tcp -j ACCEPT
-A FORWARD -p udp -j ACCEPT
-A FORWARD -p icmp -j ACCEPT

These accept the traffic on the FORWARD chain before reaching the rules that Romana manages.

This seems to be related to the change from kubernetes/kops#3977, which also caused an issue for other network policy implementations (e.g., kubernetes/kops#4345).

I'll see what options we have for handling this better.
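
One way to confirm the ordering is to list the FORWARD chain with rule numbers and check whether those blanket ACCEPT entries appear ahead of the Romana-managed rules (illustrative; run as root on a node):

# Show FORWARD rules in the order they are evaluated
iptables -L FORWARD -n --line-numbers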

@kuijpersj

@romeotheriault Did you get the NetworkPolicies working? I'm currently trying to accomplish the same, but somehow all communication between the pods keeps working (which is bad if you want to block communication).

My iptables-save output on the machines does not show the -A FORWARD ... rules shown in the comment above.

@swap357

swap357 commented Oct 15, 2019

Can you provide the modified YAML with deployments and network policies working for K8s > 1.6?
