T5873: ipsec remote access VPN: support VTI interfaces. #3221

Draft: wants to merge 1 commit into base: current

Conversation

@lucasec (Contributor) commented Apr 1, 2024

Change Summary

Route-based VPNs can be more convenient to configure and tie in nicely with existing routing protocols, zone-based firewalls, and other common network configurations. OpenVPN users are already quite familiar with this pattern. This PR extends the IPsec (IKEv2) Remote Access VPN to support "virtual tunnel interfaces" enabling similar usage patterns.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes)
  • Migration from an old Vyatta component to vyos-1x, please link to related PR inside obsoleted component
  • Other (please describe):

Related Task(s)

https://vyos.dev/T5873

Related PR(s)

Component(s) name

ipsec remote-access

Proposed changes

This PR includes two key changes to enable VTI-based remote access VPNs:

  1. The remote-access pool configuration block has been extended to accept a range block, as an alternative to the current CIDR prefix attribute. This allows defining a more granular range for assigning VPN client IPs, which is helpful if you want to reserve one or more IPs at the start of a CIDR block for the router itself on the VTI interface.
  2. The remote-access connection accepts a new bind attribute. This works identically to the peer <peer> vti bind attribute (for site-to-site peers you define either one or more tunnel blocks or a vti block; there is no equivalent of tunnel for remote-access connections, hence the decision not to nest it under vti here). Once defined, all traffic to/from connected peers uses the specified VTI interface instead of being routed by kernel policies. This change is enabled by the internal change in VyOS 1.4 that switched from the legacy vti interface type to the newer xfrm interface type, which happily supports multiple tunnels with different local/remote traffic selectors, e.g. one for each connected VPN client. Set-command equivalents of both new nodes are sketched below.
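
For illustration, the two new configuration nodes described above would be set roughly as follows (assuming the set syntax mirrors the configuration tree shown under "How to test" below):

 set vpn ipsec remote-access pool Client-Pool-v4 range start '10.23.58.2'
 set vpn ipsec remote-access pool Client-Pool-v4 range stop '10.23.58.254'
 set vpn ipsec remote-access connection ClientVPN bind 'vti1'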

How to test

Example configuration:

 interfaces {
     ethernet eth0 {
         ...
     }
     vti vti1 {
         address 10.23.58.1/24
         address fdcc:2200:a8ee:2358::1/64
         description "Client VPN"
         mtu 1436
     }
 }
 vpn {
     ipsec {
         esp-group ClientVPN-Client {
             lifetime 3600
             pfs enable
             proposal 1 {
                 encryption aes256gcm128
                 hash sha256
             }
         }
         ike-group ClientVPN-Client {
             key-exchange ikev2
             lifetime 7200
             proposal 1 {
                 dh-group 21
                 encryption aes256gcm128
                 hash sha256
             }
         }
         options {
             disable-route-autoinstall
         }
         remote-access {
             connection ClientVPN {
                 authentication {
                     client-mode x509
                     local-id <local id>
                     server-mode x509
                     x509 {
                         ca-certificate <ca cert name>
                         certificate <cert name>
                     }
                 }
                 bind vti1
                 dhcp-interface eth0
                 esp-group ClientVPN-Client
                 ike-group ClientVPN-Client
                 pool Client-Pool-v4
                 pool Client-Pool-v6
             }
             pool Client-Pool-v4 {
                 name-server 10.23.58.1
                 range {
                     start 10.23.58.2
                     stop 10.23.58.254
                 }
             }
             pool Client-Pool-v6 {
                 name-server fdcc:2200:a8ee:2358::1
                 range {
                     start fdcc:2200:a8ee:2358::2
                     stop fdcc:2200:a8ee:2358::ffff
                 }
             }
         }
     }
 }
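
Once a client has connected, the setup can be sanity-checked with standard operational commands (illustrative; exact output will vary), for example:

 show interfaces vti vti1
 show vpn ike sa
 show vpn ipsec sa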

Smoketest result

 INFO - Executing VyOS smoketests
DEBUG - vyos@vyos:~$ /usr/bin/vyos-smoketest
DEBUG - /usr/bin/vyos-smoketest
DEBUG - Running Testcase: /usr/libexec/vyos/tests/smoke/cli/test_vpn_ipsec.py
DEBUG - test_dhcp_fail_handling (__main__.TestVPNIPsec.test_dhcp_fail_handling) ... ok
DEBUG - test_dmvpn (__main__.TestVPNIPsec.test_dmvpn) ... ok
DEBUG - test_flex_vpn_vips (__main__.TestVPNIPsec.test_flex_vpn_vips) ... ok
DEBUG - test_remote_access (__main__.TestVPNIPsec.test_remote_access) ... ok
DEBUG - test_remote_access_dhcp_fail_handling (__main__.TestVPNIPsec.test_remote_access_dhcp_fail_handling) ... ok
DEBUG - test_remote_access_eap_tls (__main__.TestVPNIPsec.test_remote_access_eap_tls) ... ok
DEBUG - test_remote_access_pool_range (__main__.TestVPNIPsec.test_remote_access_pool_range) ... ok
DEBUG - test_remote_access_vti (__main__.TestVPNIPsec.test_remote_access_vti) ... ok
DEBUG - test_remote_access_x509 (__main__.TestVPNIPsec.test_remote_access_x509) ... ok
DEBUG - test_site_to_site (__main__.TestVPNIPsec.test_site_to_site) ... ok
DEBUG - test_site_to_site_vti (__main__.TestVPNIPsec.test_site_to_site_vti) ... ok
DEBUG - test_site_to_site_x509 (__main__.TestVPNIPsec.test_site_to_site_x509) ... ok

Checklist:

  • I have read the CONTRIBUTING document
  • I have linked this PR to one or more Phabricator Task(s)
  • I have run the components SMOKETESTS if applicable
  • My commit headlines contain a valid Task id
  • My change requires a change to the documentation
  • I have updated the documentation accordingly

@vyosbot vyosbot requested review from a team, dmbaturin, sarthurdev, zdc, jestabro, sever-sever and c-po and removed request for a team April 1, 2024 00:44
@lucasec lucasec marked this pull request as draft April 1, 2024 02:28
@lucasec (Contributor, Author) commented Apr 1, 2024

Changing this to a draft, as it appears the up/down script may need some work for this to function properly; that logic seems to have been lost in the rebase.

@GurliGebis (Contributor)

@lucasec is the fix in #3302 in any way related to the issue you mention?

@lucasec (Contributor, Author) commented Apr 12, 2024

It’s not related to this PR. I also don’t think that fix is related to the instability I’ve been seeing in https://vyos.dev/T6177. I am planning to resume work on this shortly while I continue to debug that issue.

@GurliGebis (Contributor)

Sounds great, thank you 🙂

@c-po (Member) commented Apr 13, 2024

Very interesting approach. But why do I also need pools if everything now comes from the VTI? Is it because of IP address assignment reasons (fake DHCP)?

@lucasec (Contributor, Author) commented Apr 15, 2024

Yeah, the IP addresses are still assigned to clients via the IKE protocol, so the pool configuration is needed to tell strongSwan which range to assign client IPs from.
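
For illustration, the pool definition that ends up in the generated swanctl configuration looks roughly like this (a simplified sketch using the values from the example config, not the exact generated output):

 pools {
     Client-Pool-v4 {
         addrs = 10.23.58.2-10.23.58.254
         dns = 10.23.58.1
     }
 }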

Getting the up/down logic for the interface right is a little tricky. I would assume that if a remote-access connection is bound to the VTI, the VTI interface should be up all the time.

The cleanest implementation is probably to have set_admin_state (i.e. the def set_admin_state(self, state): method) check for dependencies in the ipsec config, then only no-op if the interface is unbound or bound to a single site-to-site config (so for remote access, the interface would come up immediately via the normal interface up/down code).
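
A rough sketch of that dependency check (the function name and config-dict layout are purely illustrative, not the actual vyos-1x code); set_admin_state would then return early only when this check is true:

 def vti_admin_state_should_noop(ifname: str, ipsec: dict) -> bool:
     """True when set_admin_state should no-op and leave the VTI to the IPsec up/down script."""
     connections = ipsec.get('remote_access', {}).get('connection', {})
     bound_remote_access = any(
         conn.get('bind') == ifname for conn in connections.values())
     # A remote-access binding means the interface should come up immediately via
     # the normal interface up/down code; otherwise (unbound, or bound only to a
     # site-to-site peer) defer to the IPsec up/down script.
     return not bound_remote_access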

Btw, the logic from T6085 for site-to-site configs (9eb018c) has some potentially unintended behavior: disabling/re-enabling a VTI interface in the config does not take effect immediately; the interface state only updates after the IPsec connection is torn down or IPsec is restarted.

@sever-sever (Member)

Any updates?

@lucasec (Contributor, Author) commented May 14, 2024

I think I set this aside awaiting further feedback on the approach for the up/down script I shared in this comment: #3221 (comment).

Let me revisit later this week and try to put together an implementation; that may be a faster way to move this forward.
