
Project for Softwarized and Virtualized Mobile Network (Università di Trento, 2021-2022)

Connecting SDN Slices

TABLE OF CONTENTS

Components used
1st topology
2nd topology

COMPONENTS USED

Open vSwitch
Ryu controllers: defined in ComNetsEmu
Hosts and switches: defined in ComNetsEmu
OpenFlow 1.0 (both the Ryu controllers and the Open vSwitch bridges must be configured to work with this version; see the sketch below)
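
As a minimal sketch (not taken from the repository's controllers), a Ryu application pins the OpenFlow version it negotiates through the OFP_VERSIONS attribute; the class name below is illustrative:

# Minimal sketch: restricting a Ryu app to OpenFlow 1.0 (class name is illustrative)
from ryu.base import app_manager
from ryu.ofproto import ofproto_v1_0

class SliceControllerSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]  # negotiate only OpenFlow 1.0

On the switch side, an Open vSwitch bridge can be limited to the same version with sudo ovs-vsctl set bridge s1 protocols=OpenFlow10, or by passing protocols='OpenFlow10' when the switch is added in the Mininet script.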

1st topology

STATEMENT (GENERAL IDEA)

• The topology contains a cycle: every host can communicate with the others, but as soon as a flood starts, packets loop forever


• Topology slicing avoids the infinite loops by splitting the cycle into two trees, each managed by its own controller; however, the two slices cannot communicate with each other


• We interconnected the two slices by adding a common root switch (s9); a third controller manages inter-slice communication


• Additionally, the provider does not want one slice to send UDP packets to the other; s9 redirects inter-slice UDP packets to a server that filters (and then drops) them

TOPOLOGY

For further details, see 1st_topology

We created three different slices (topology slicing):

  • slice1: a controller allows communication among h1, h2, h5, h6
  • slice2: a controller allows communication among h3, h4, h7, h8
  • connecting_slice: a controller forwards non-UDP packets between the two slices and redirects UDP packets to server1 or server2 (when sent by slice1 or slice2, respectively); see the sketch after this list

    Note: server1 and server2 are configured not to send any packets; they can only receive the UDP packets filtered out by s9
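
To make the UDP-filtering idea concrete, here is a hedged sketch of how the connecting_slice controller could steer packets on s9: UDP traffic is handed to a server port, while everything else crosses to the other slice. This is not the repository's actual code; the class name and port numbers are assumptions.

# Hedged sketch (not the repository's controller): protocol-based steering on s9.
# The class name and port numbers are assumptions used for illustration only.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ipv4
from ryu.ofproto import ofproto_v1_0, inet

class ConnectingSliceSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]  # the project uses OpenFlow 1.0

    TO_OTHER_SLICE_PORT = 2  # assumed s9 port towards the other slice
    TO_SERVER_PORT = 3       # assumed s9 port towards the filtering server

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        parser = dp.ofproto_parser
        ip = packet.Packet(msg.data).get_protocol(ipv4.ipv4)

        if ip is not None and ip.proto == inet.IPPROTO_UDP:
            out_port = self.TO_SERVER_PORT       # UDP goes to the server and is dropped there
        else:
            out_port = self.TO_OTHER_SLICE_PORT  # any other protocol crosses the slices

        actions = [parser.OFPActionOutput(out_port)]
        data = msg.data if msg.buffer_id == dp.ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.in_port, actions=actions, data=data)
        dp.send_msg(out)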

DEMO

Set up the environment

Start up the VM
vagrant up comnetsemu

Log into the VM
vagrant ssh comnetsemu

Set up the topology in Mininet

Flush any previous configuration
$ sudo mn -c

Build the topology
$ sudo python3 network.py
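
The topology itself is declared in network.py (see the repository). As a rough, hedged sketch of what such a script looks like with the standard Mininet API, where host names, controller ports and links are purely illustrative:

# Hedged sketch of a sliced topology built with the Mininet API; the real
# network.py may differ (names, ports and links here are assumptions).
from mininet.net import Mininet
from mininet.node import OVSSwitch, RemoteController
from mininet.cli import CLI

net = Mininet(switch=OVSSwitch, controller=None, autoSetMacs=True)

# one remote controller per slice, each listening on its own TCP port
c1 = net.addController('c1', controller=RemoteController, port=6633)  # slice1
c3 = net.addController('c3', controller=RemoteController, port=6635)  # connecting_slice

h1 = net.addHost('h1')
h2 = net.addHost('h2')
s1 = net.addSwitch('s1', protocols='OpenFlow10')
s9 = net.addSwitch('s9', protocols='OpenFlow10')  # common root

net.addLink(h1, s1)
net.addLink(h2, s1)
net.addLink(s1, s9)

net.build()
c1.start()
c3.start()
s1.start([c1])  # s1 is managed by the slice1 controller
s9.start([c3])  # s9 is managed by the connecting_slice controller
CLI(net)
net.stop()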

Set up the controllers

In a new terminal, run this script to start all the controllers in a single shell
./runcontrollers.py
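
runcontrollers.py ships with the repository; conceptually, it only needs to launch one ryu-manager instance per controller, each listening on its own OpenFlow port. A hedged sketch of that idea (controller file names and port numbers are assumptions):

# Hedged sketch: start one ryu-manager per slice controller from a single script.
# File names and ports are assumptions, not the repository's actual values.
import subprocess

CONTROLLERS = [
    ("slice1_controller.py", 6633),
    ("slice2_controller.py", 6634),
    ("connecting_slice_controller.py", 6635),
]

procs = [
    subprocess.Popen(["ryu-manager", "--ofp-tcp-listen-port", str(port), app])
    for app, port in CONTROLLERS
]

for p in procs:
    p.wait()  # keep the shell attached until the controllers exit (Ctrl+C stops them)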

Open another terminal for the flow-table checks that follow

Test reachability

Running mininet> pingall gives the following result:

Note: ping and pingall send ICMP packets.
Note: server1 and server2 never send or receive ICMP packets

Perform ping between host 1 and host 2
mininet> h1 ping -c3 h2

Perform ping between host 3 and host 4
mininet> h3 ping -c3 h4


Intra-slice communication works correctly.

Show s4 flow table
$ sudo ovs-ofctl dump-flows s4

Host 1 can send TCP packets to Host 4
mininet> h4 iperf -s &
mininet> h1 iperf -c 10.0.0.4 -t 5 -i 1

Host 1 cannot send UDP packets to Host 3
mininet> h3 iperf -s -u &
mininet> h1 iperf -c 10.0.0.3 -u -t 5 -i 1

Show s9 flow table (path depends on protocol)
$ sudo ovs-ofctl dump-flows s9


Close and clean up everything

When you are done, flush the topology with sudo mn -c and stop the VM with vagrant halt comnetsemu

2nd topology

STATEMENT (GENERAL IDEA)

• Here we have two separate networks.


• We performed topology slicing in each network, and we also connected two of the slices through a third one.
Note: the remaining two slices stay separate and use their own forwarding logic (see the image below).

TOPOLOGY


For further details, see 2nd_topology

We created five different slices:

  • control_office: a controller allows communication between h1 and h2
  • office1: a controller allows communication between h3 and h4; each packet follows a fixed path
  • office2: a controller allows communication among h5, h6, h7; the forwarding path depends on the packet protocol (service slicing)
  • computer_room: a controller allows communication among h8, h9, h10, h11, h12, h13, h14
  • connecting_slice: a controller allows communication between the control_office and computer_room slices

    Note: the office1 slice contains a loop; this causes no problems because each packet follows a fixed path (see the sketch after this list)
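
As a hedged sketch of how a fixed-path slice such as office1 can be enforced, the controller can map each (switch, in_port) pair to exactly one output port, so the physical loop is never used for flooding. The datapath IDs and port numbers below are illustrative, not the repository's values.

# Hedged sketch of a fixed-path (port-mapping) slice controller; the port map
# below is purely illustrative and does not reproduce the repository's paths.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0

class FixedPathSliceSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    # dpid -> {in_port: out_port}: every packet follows one predetermined path
    slice_ports = {
        3: {1: 2, 2: 1},
        4: {1: 3, 3: 1},
    }

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        parser = dp.ofproto_parser
        out_port = self.slice_ports.get(dp.id, {}).get(msg.in_port)
        if out_port is None:
            return  # ports outside the slice are ignored (packet is dropped)

        actions = [parser.OFPActionOutput(out_port)]
        data = msg.data if msg.buffer_id == dp.ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.in_port, actions=actions, data=data)
        dp.send_msg(out)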

DEMO

Set up the topology in Mininet

Flush any previous configuration
$ sudo mn -c

Build the topology
$ sudo python3 network.py

Set up the controllers

In a new terminal, run this script to start all the controllers in a single shell
./runcontrollers.py

Open another terminal for the flow-table checks that follow

Test reachability

Running mininet> pingall gives the following result:

Perform ping between host 1 and host 2
mininet> h1 ping -c3 h2

Perform ping between host 3 and host 4
mininet> h3 ping -c3 h4

Intra-slice communication works correctly.

Show s3 flow table
$ sudo ovs-ofctl dump-flows s3

Show s4 flow table
$ sudo ovs-ofctl dump-flows s4

Perform ping between host 2 and host 11
mininet> h2 ping -c3 h11

Inter-slice communication also works correctly (passing through the connecting_slice).

Now let's test reachability inside office2, which depends on the packet type

Perform ping between host 5 and host 7 (ICMP packets)
mininet> h5 ping -c3 h7

Host 5 sends UDP packets to Host 7
mininet> h7 iperf -s -u -b 10M &
mininet> h5 iperf -c 10.0.0.7 -u -b 10M -t 10 -i 1

Host 5 sends TCP packets to Host 7
mininet> h7 iperf -s -b 7M &
mininet> h5 iperf -c 10.0.0.7 -b 7M -t 10 -i 1

Show s8 flow table
$ sudo ovs-ofctl dump-flows s8

The flow entries in this switch also match on the packet type, so each packet is forwarded according to the entry for its protocol (see the sketch below)
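
A hedged sketch of what such protocol-aware entries could look like when installed from a Ryu controller; the output ports and priority are assumptions, and install_service_slice is an illustrative helper, not a function from the repository.

# Hedged sketch: install OpenFlow 1.0 entries that forward UDP and TCP traffic
# out of different ports (service slicing). Ports and priority are assumptions.
from ryu.ofproto import ether, inet

def install_service_slice(dp, udp_out_port, tcp_out_port):
    ofp = dp.ofproto
    parser = dp.ofproto_parser
    rules = [
        # UDP traffic takes one path ...
        (parser.OFPMatch(dl_type=ether.ETH_TYPE_IP, nw_proto=inet.IPPROTO_UDP), udp_out_port),
        # ... TCP traffic takes the other
        (parser.OFPMatch(dl_type=ether.ETH_TYPE_IP, nw_proto=inet.IPPROTO_TCP), tcp_out_port),
    ]
    for match, out_port in rules:
        mod = parser.OFPFlowMod(datapath=dp, match=match, cookie=0,
                                command=ofp.OFPFC_ADD, idle_timeout=0,
                                hard_timeout=0, priority=10,
                                actions=[parser.OFPActionOutput(out_port)])
        dp.send_msg(mod)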

Show s10 flow table
$ sudo ovs-ofctl dump-flows s10

Close and clean up everything

When you are done, flush the topology with sudo mn -c and stop the VM with vagrant halt comnetsemu
