
h1. Introduction to strongSwan: Forwarding and Split-Tunneling

{{>toc}}

In remote access situations clients will usually send all their traffic to the gateway. Below we explain how this traffic can be forwarded and properly routed back to the roadwarriors.

In some situations it might be more desirable to send only specific traffic via the gateway, for instance, to unburden it from forwarding web (or even worse, file sharing) traffic. Therefore, we also explain how to enable this so-called _split-tunneling_ for different clients.

h2. Forwarding Client Traffic

In order to forward traffic to hosts behind the gateway (or hosts on the Internet if split-tunneling is not used) the following
options have to be enabled on Linux gateways:

<pre>
sysctl net.ipv4.ip_forward=1
sysctl net.ipv6.conf.all.forwarding=1
</pre>

These settings can be added to @/etc/sysctl.conf@ to enable them permanently.
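
For example, by adding the following lines:

<pre>
# /etc/sysctl.conf - enable forwarding for IPv4 and IPv6 at boot
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
</pre>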

If the firewall on the gateway is rather restrictive, _leftfirewall=yes_ will automatically cause the default [[updown|updown script]] to add
rules that allow traffic to be forwarded.
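
For example, in @ipsec.conf@ (connection name and subnet are placeholders):

<pre>
conn rw
    leftsubnet=10.0.1.0/24
    # let the default updown script insert forwarding rules
    leftfirewall=yes
    # further options omitted
</pre>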

In remote access situations clients will be assigned a [[VirtualIP|virtual IP address]] from a configured address pool. How a gateway can
assign these to clients is [[VirtualIP|already explained elsewhere]]. The important part is that in order to respond to requests from hosts
in that _virtual subnet_, the hosts to which the gateway forwards traffic have to know that they must send packets destined for these
virtual IP addresses back to the VPN gateway.

Please note that there might be additional considerations when hosting on [[CloudPlatforms|cloud platforms]] (e.g. src/dst address checks).

h3. Hosts on the LAN

For hosts on the LAN behind the gateway the following situations are possible:

* *The virtual IPs are from the subnet behind the gateway*: In this situation either the [[dhcpplugin|dhcp plugin]] is used or the
gateway assigns virtual IP addresses from a subrange of the LAN behind the gateway (distinct from the IP addresses
assigned via DHCP to other LAN hosts). If that is the case, the [[farpplugin|farp plugin]] must be used so that the hosts behind the
gateway can learn that they have to send response packets to the VPN gateway. For IPv6 something similar can be done
using NDP(Neighbour Discovery Protocol) proxying (see #1008).

* *The virtual IPs are from a distinct subnet / In site-to-site scenarios*: If the VPN gateway is the *default gateway* of
the accessed LAN nothing special has to be done. If it is not, either add a route on all hosts behind the gateway (manually
or e.g. via "DHCP option 121":https://tools.ietf.org/html/rfc3442) telling them that the subnet from which virtual IP addresses are assigned to roadwarriors
(or other subnets in site-to-site scenarios) can be reached through the VPN gateway, or configure a static route on the
actual default gateway that directs traffic for the virtual subnet to the VPN gateway (see the example after this list). It's
also possible to NAT the virtual IPs to the (internal) IP address of the VPN gateway, so that requests from clients will look
to LAN hosts as if they originated from the gateway (see the next section for notes on setting up a NAT).
If the VPN gateway is not the default gateway of the LAN, it might return ICMP redirects to hosts that send it traffic destined
for the remote subnets, directing them to the default gateway of the LAN (which probably doesn't route that traffic, or worse,
might send it out unencrypted). To avoid that, disable sending such ICMP redirects by setting
@net.ipv4.conf.all.send_redirects@ and @net.ipv4.conf.default.send_redirects@ to @0@ (if the latter is not set before
the interface comes up, also set the option for the individual interface, i.e. @net.ipv4.conf.<iface>.send_redirects@).
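
As a concrete sketch of the second scenario, assume the roadwarriors get virtual IPs from _10.0.3.0/24_, the VPN gateway's
internal address is _192.168.1.2_ and the LAN's actual default gateway is a Linux machine (all addresses are examples):

<pre>
# on the LAN's default gateway: send traffic for the virtual subnet to the VPN gateway
ip route add 10.0.3.0/24 via 192.168.1.2

# on the VPN gateway: don't send ICMP redirects pointing hosts to the default gateway
sysctl net.ipv4.conf.all.send_redirects=0
sysctl net.ipv4.conf.default.send_redirects=0
</pre>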

h3. Hosts on the Internet

If split-tunneling is not used all client traffic will be sent through the IPsec tunnel. In this scenario _leftsubnet=0.0.0.0/0_ is
configured on the gateway and _rightsubnet=0.0.0.0/0_ on the client. Now, the gateway could simply drop traffic destined for
subnets it doesn't want the clients to access. But that is probably not what most users expect. It is more likely that they
expect to be able to continue surfing the web or reading their emails while connected to the VPN.

The situation here is similar to the one for LAN hosts above. If the gateway simply forwarded traffic from the virtual subnet
to hosts on the Internet, these wouldn't be able to respond (they would send their responses to the virtual IP address, which
is not routable on the Internet). What is required, therefore, are *NAT rules* so that hosts in the virtual subnet are mapped to
at least one IP address of the VPN gateway (which itself could be behind a NAT device too). To hosts on the Internet, traffic
from the virtual subnet then appears to originate from the VPN gateway.

By way of example, let's assume the gateway assigns virtual IPs from the _10.0.3.0/24_ subnet to its roadwarrior clients. The
following _iptables_ rules will NAT traffic from that subnet to the gateway's _eth0_ interface (this works even for gateways that
have only one network interface).

<pre>
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o eth0 -j MASQUERADE
</pre>

The first rule exempts traffic that matches an IPsec policy from the NAT rule. Additional subnets behind the gateway may be
listed after _-s_, like _-s 10.0.3.0/24,192.168.88.0/24_. The _-s_ option may also be omitted altogether to match all outbound traffic.

h2. General NAT problems

Local firewall stacks generally don't treat packets with a matching IPsec policy any differently than unprotected packets. That means NAT rules also apply to traffic that is supposed to be tunneled.

This often leads to problems, because many hosts have @SNAT@ or @MASQUERADE@ rules set up, which change the source IP of the packets
so they no longer match the negotiated IPsec policies when IPsec processing of outgoing packets happens (in "this graphic":https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg that's the "xfrm lookup").
To fix this problem, packets with a matching IPsec policy should skip NAT rules in the @POSTROUTING@ chain of the @nat@ table.
This is achieved by inserting a rule that accepts packets with a matching IPsec policy before any NAT rule in the @POSTROUTING@ chain.

The following rule does that:
<pre>
iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT
</pre>

h2. Split-Tunneling

With split-tunneling the clients will only send traffic for specific destination subnets to the gateway. For both protocol versions
split-tunneling is easy to deploy if traffic selectors (TS) can freely be configured on both peers. In that case you'd simply use
specific values for the _rightsubnet_ and _leftsubnet_ options.
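
For example, with @ipsec.conf@ the exactly matching traffic selectors could look like this (connection names and subnets are placeholders):

<pre>
# on the gateway
conn rw
    leftsubnet=10.0.1.0/24,10.0.2.0/24

# on the client
conn home
    rightsubnet=10.0.1.0/24,10.0.2.0/24
</pre>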

h3. Split-Tunneling with IKEv2

With IKEv2 split-tunneling is quite easy to use as the protocol inherently supports narrowing of the proposed traffic selectors.

For instance, if the client proposes _0.0.0.0/0_ as remote TS (_rightsubnet_), this can be narrowed on the gateway by configuring
_leftsubnet=<list of subnets>_. Likewise, the client may already propose a selective remote TS by configuring a list of subnets
with _rightsubnet_, which the gateway might simply accept (e.g. if it has _leftsubnet=0.0.0.0/0_ configured) or could again narrow
to a subset.
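
A minimal sketch of this narrowing in @ipsec.conf@ syntax (connection names and subnets are placeholders):

<pre>
# on the client: propose everything as remote TS
conn home
    rightsubnet=0.0.0.0/0

# on the gateway: narrow the proposed TS to specific subnets
conn rw
    leftsubnet=10.0.1.0/24,10.0.2.0/24
</pre>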

While the protocol supports split-tunneling, whether it can actually be used *depends on the client*. Most remote access clients
will propose _0.0.0.0/0_ as remote TS, so split-tunneling must be configured on the gateway. But whether this actually results
in split-tunneling depends on the client. All strongSwan based clients (Linux, NetworkManager, Android) support this kind of
narrowing. For Windows 7 clients the situation is as follows.

The *Windows 7 client* will always allow access to the host's LAN. So to access, for instance, a local printer nothing special
has to be done. Since the client always proposes _0.0.0.0/0_ as remote TS, the gateway is free to narrow it to a subset. But to
make split-tunneling actually work on the client, the _Use default gateway on remote network_ option in the _Advanced TCP/IP_ settings of
the VPN connection has to be disabled. Also, because a classful route is installed, the virtual IP address has to belong to the
remote subnet; otherwise, the _Disable class based route addition_ option has to be enabled and routes have to be installed
manually.

With Windows 8.1 (and Windows Server 2012 R2) Microsoft introduced "PowerShell cmdlets":https://docs.microsoft.com/en-us/powershell/module/vpnclient/ to configure VPN connections.
These provide more options and also allow configuring split-tunneling directly (_-SplitTunneling_ option).

*Windows 10* has split-tunneling enabled by default, but with the same limitations seen since Windows 7, i.e. the virtual IP has to be
from the remote subnet, or routes have to be added manually, for instance, via the "Add-VpnConnectionRoute":https://docs.microsoft.com/en-us/powershell/module/vpnclient/Add-VpnConnectionRoute?view=win10-ps PowerShell cmdlet.
To tunnel all traffic via the VPN instead, split-tunneling has to be disabled explicitly, either by enabling the _Use default gateway on remote network_
setting described above or by using the following PowerShell command: @Set-VpnConnection "<Connection Name>" -SplitTunneling 0@
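
For example, assuming a connection named _Corp VPN_ (name and prefix are placeholders):

<pre>
# disable split-tunneling (tunnel everything); use $true to enable it
Set-VpnConnection -Name "Corp VPN" -SplitTunneling $false

# with split-tunneling enabled, route an additional subnet via the VPN
Add-VpnConnectionRoute -ConnectionName "Corp VPN" -DestinationPrefix "10.0.1.0/24"
</pre>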

h3. Split-Tunneling with IKEv1

IKEv1 does not provide narrowing of traffic selectors by default. That means that the traffic selector configuration usually has
to *match exactly* on both peers. To simplify things, the IKEv1 implementation in the [[charon]] daemon (available since [[5.0.0]])
does support *narrowing* of traffic selectors similar to how it is implemented for IKEv2. Unfortunately, this is not compatible
with many implementations by other vendors.

On the other hand, such clients may support the *Unity extensions* developed by Cisco. Since [[5.0.1]] the [[UnityPlugin|unity plugin]] provides
strongSwan gateways with a transparent way of assigning narrowed traffic selectors to clients that support these extensions (e.g.
racoon, as used in Apple products). For earlier releases the [[attrsql|attr-sql plugin]] provides the means to manually configure attributes
that enable split-tunneling for Unity-aware clients (since [[5.0.1]] such attributes can also be provided through the [[attrplugin|attr plugin]]).



h2. MTU/MSS issues

You might encounter MSS/MTU problems when tunneling traffic. These are typically caused by broken routers that drop
ICMP packets and thus break PMTUD(Path MTU Discovery). You can work around this by lowering the advertised TCP MSS with the @TCPMSS@
target in @iptables@.

Alternatively, if you control the router in question, fixing PMTUD may be advisable. To do so you need to permit the appropriate ICMP
traffic (type 3, destination unreachable, code 4, fragmentation needed - though all of type 3 is usually allowed).
In particular, pay attention to the source address of ICMP messages emitted by the VPN gateway, which will usually be
the primary IP address of the gateway's internal interface, *not* that of the endpoint experiencing the issue.
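
On a Linux router this could, for example, be done with a rule like the following (chains and further restrictions depend on the firewall setup):

<pre>
# allow ICMP "fragmentation needed" (type 3, code 4) through so PMTUD works
iptables -A FORWARD -p icmp --icmp-type fragmentation-needed -j ACCEPT
</pre>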

The value you set with the @TCPMSS@ target must account for any other overhead introduced by the tunneling protocols
in use (for instance, UDP encapsulation of ESP).
Read the man pages of @iptables@ and @iptables-extensions@ if there are any questions about its usage.

The _charon.plugins.kernel-netlink.mss_ and _charon.plugins.kernel-netlink.mtu_ options may be used, too, but the values set there apply
only to the routes that @kernel-netlink@ installs, and their exact impact on traffic and the behavior of the kernel is currently quite unclear.

Add the following iptables rules on the IKE responder to reduce the MSS (as noted above, the actual values depend on the overhead
imposed by the tunneling protocols and the MTU, so it might have to be lower than what's used in the example here):
<pre>
iptables -t mangle -A FORWARD -m policy --pol ipsec --dir in -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
iptables -t mangle -A FORWARD -m policy --pol ipsec --dir out -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
</pre>

Alternatively, you can add the same rules in @PREROUTING@/@POSTROUTING@ (also in the @mangle@ table).

Additionally, set @net.ipv4.ip_no_pmtu_disc@ on the server to @1@.

In newer kernels, the counter @XfrmOutStateModeError@ in @/proc/self/net/xfrm_stat@ is incremented if the kernel detects that a packet would be too large after encapsulation.