Issue #3355

Strongswan VTI tunnel traffic with Amazon VPC

Added by Edvinas Kaikaris 5 months ago. Updated 5 months ago.

Status:
Feedback
Priority:
Normal
Assignee:
-
Category:
configuration
Affected version:
5.7.2
Resolution:

Description

Hello, until now I have always used GRE route-based strongSwan tunnels, and everything worked fine. Now that I need to connect to an Amazon VPC, I must use IPsec-based tunnels, a.k.a. VTI.

And the problems start to arise. From the strongSwan side I managed to establish a full IPsec connection (Amazon AWS also says that the IPsec tunnel is UP):

strongswan statusall Tunnel1

Status of IKE charon daemon (strongSwan 5.7.2, Linux 3.10.0-1062.12.1.el7.x86_64, x86_64):
Connections:
     Tunnel1:  31.157.3.161...52.220.221.17  IKEv1, dpddelay=10s
     Tunnel1:   local:  [31.157.3.161] uses pre-shared key authentication
     Tunnel1:   remote: [52.220.221.17] uses pre-shared key authentication
     Tunnel1:   child:  31.157.3.161/32 === 52.220.221.17/32 TUNNEL, dpdaction=restart
Security Associations (2 up, 0 connecting):
     Tunnel1[1]: ESTABLISHED 20 minutes ago, 31.157.3.161[31.157.3.161]...52.220.221.17[52.220.221.17]
     Tunnel1[1]: IKEv1 SPIs: 6a20df30e559d036_i* 7334d7e3ac06481a_r, rekeying in 7 hours
     Tunnel1[1]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048_256
     Tunnel1{1}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c712357c_i 99183878_o
     Tunnel1{1}:  AES_CBC_128/HMAC_SHA2_256_128/MODP_2048_256, 0 bytes_i (0 pkts, 52s ago), 0 bytes_o, rekeying in 27 minutes
     Tunnel1{1}:   31.157.3.161/32 === 52.220.221.17/32

ipsec.conf

conn Tunnel1
 auto=start
 left=31.157.3.161
 leftid=31.157.3.161
 right=52.220.221.17
 type=tunnel
 authby=psk
 ikelifetime=28800s
 lifetime=3600s
 ike=aes128-sha1-modp1024
 esp=aes128-sha1-modp1024
 keyexchange=ikev1
 leftsubnet=31.157.3.161/32
 rightsubnet=52.220.221.17/32
 dpddelay=10s
 dpdtimeout=30s
 dpdaction=restart
 rekey=yes
 reauth=no
 closeaction=restart
 compress=no
 mobike=no

ip addr show vti1

vti1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1436 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 31.157.3.161 peer 52.220.221.17
    inet 169.254.13.210 peer 169.254.13.209/30 scope global vti1
       valid_lft forever preferred_lft forever

sysctl -w net.ipv4.conf.vti1.disable_xfrm=1
sysctl -w net.ipv4.conf.vti1.disable_policy=1

But when I try to ping the other side of the tunnel, I get:

prod [root@dc network-scripts]# ping 169.254.13.209 
PING 169.254.13.209 (169.254.13.209) 56(84) bytes of data.
From 169.254.13.210 icmp_seq=1 Destination Host Unreachable

I'm running a CentOS system, and it's strange that almost all the results on Google show VTI tunnels that are policy-based, with mark options. As I understand it, the MARK keys are only needed in a policy-based strongSwan configuration? What could be the further troubleshooting steps to create a VTI route-based tunnel with Amazon VPC?
Thanks

ping_bad.PNG (9.31 KB) — Edvinas Kaikaris, 09.03.2020 09:48

History

#1 Updated by Tobias Brunner 5 months ago

  • Category set to configuration
  • Status changed from New to Feedback

As I understand it, the MARK keys are only needed in a policy-based strongSwan configuration?

Not at all, see RouteBasedVPN.
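
For illustration, the route-based approach described there boils down to marking the tunnel's traffic and binding a VTI device to the same key. A minimal sketch, with an illustrative mark value, interface name, and script path (not taken from this issue):

```
conn vpc-tunnel
 mark=42
 installpolicy=yes
 leftsubnet=0.0.0.0/0
 rightsubnet=0.0.0.0/0
 leftupdown=/usr/local/libexec/vti-updown.sh

# in the updown script, create the VTI with matching keys:
# ip link add vti0 type vti local $PLUTO_ME remote $PLUTO_PEER ikey 42 okey 42
# ip link set vti0 up
```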

#2 Updated by Edvinas Kaikaris 5 months ago

Tobias Brunner wrote:

As I understand it, the MARK keys are only needed in a policy-based strongSwan configuration?

Not at all, see RouteBasedVPN.

Hello, I managed to set up a BGP session between the Linux server and the VPC. But when I try to ping a virtual instance in that Amazon VPC, it seems that the strongSwan server doesn't encapsulate the traffic:

prod [root@dcvpnl001prpitx /]# ping 169.254.13.209
PING 169.254.13.209 (169.254.13.209) 56(84) bytes of data.
64 bytes from 169.254.13.209: icmp_seq=1 ttl=254 time=188 ms

The host is in that VPC:

prod [root@dcvpnl001prpitx /]# ping 10.64.36.246
PING 10.64.36.246 (10.64.36.246) 56(84) bytes of data.
--- 10.64.36.246 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

prod [root@dcvpnl001prpitx /]# ip route get 10.64.36.246
10.64.36.246 via 169.254.13.209 dev vti1 src 169.254.13.210 
    cache

It seems that strongSwan doesn't even try to encapsulate the packets (I can see it here when pinging a node in the Amazon VPC):

strongswan statusall | grep pkts    

My config is as follows:

ipsec.conf

conn Tunnel1
 auto=start
 left=31.157.3.161
 leftid=31.157.3.161
 right=52.220.221.17
 rightid=52.220.221.17
 type=tunnel
 authby=psk
 ikelifetime=28800s
 lifetime=3600s
 ike=aes128-sha1-modp1024!
 esp=aes128-sha1-modp1024!
 keyexchange=ikev1
 dpddelay=10s
 dpdtimeout=30s
 dpdaction=restart
 rekey=yes
 reauth=no
 closeaction=restart
 compress=no
 mobike=no
 leftupdown=/tmp/vti.sh
 installpolicy=yes
 mark=100
 aggressive=no
 rightsubnet=0.0.0.0/0
 leftsubnet=0.0.0.0/0

vti.sh

#!/bin/bash
# bash required for the array expansions below
IP=$(which ip)
IPTABLES=$(which iptables)
PLUTO_MARK_OUT_ARR=(${PLUTO_MARK_OUT//// })
PLUTO_MARK_IN_ARR=(${PLUTO_MARK_IN//// })
VTI_INTERFACE=vti1
VTI_LOCALADDR=169.254.13.210/30
VTI_REMOTEADDR=169.254.13.209/30

case "${PLUTO_VERB}" in
    up-client)
        echo "DOING" >> /tmp/shit2.txt
        #$IP tunnel add ${VTI_INTERFACE} mode vti local ${PLUTO_ME} remote ${PLUTO_PEER} okey ${PLUTO_MARK_OUT_ARR[0]} ikey ${PLUTO_MARK_IN_ARR[0]}
        $IP link add ${VTI_INTERFACE} type vti local ${PLUTO_ME} remote ${PLUTO_PEER} okey ${PLUTO_MARK_OUT_ARR[0]} ikey ${PLUTO_MARK_IN_ARR[0]}
        sysctl -w net.ipv4.conf.${VTI_INTERFACE}.disable_policy=1
        sysctl -w net.ipv4.conf.${VTI_INTERFACE}.rp_filter=2 || sysctl -w net.ipv4.conf.${VTI_INTERFACE}.rp_filter=0
        $IP addr add ${VTI_LOCALADDR} remote ${VTI_REMOTEADDR} dev ${VTI_INTERFACE}
        $IP link set ${VTI_INTERFACE} up mtu 1436
        $IPTABLES -t mangle -I FORWARD -o ${VTI_INTERFACE} -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
        $IPTABLES -t mangle -I INPUT -p esp -s ${PLUTO_PEER} -d ${PLUTO_ME} -j MARK --set-xmark ${PLUTO_MARK_IN}
        $IP route flush table 220
        #/etc/init.d/bgpd reload || /etc/init.d/quagga force-reload bgpd
        ;;
    down-client)
        #$IP tunnel del ${VTI_INTERFACE}
        $IP link del ${VTI_INTERFACE}
        $IPTABLES -t mangle -D FORWARD -o ${VTI_INTERFACE} -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
        $IPTABLES -t mangle -D INPUT -p esp -s ${PLUTO_PEER} -d ${PLUTO_ME} -j MARK --set-xmark ${PLUTO_MARK_IN}
        ;;
esac

# Enable IPv4 forwarding
sysctl -w net.ipv4.ip_forward=1
#sysctl -w net.ipv4.conf.vti1.disable_xfrm=1
#sysctl -w net.ipv4.conf.vti1.disable_policy=1
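
As an aside on the `${PLUTO_MARK_OUT//// }` expansion used in the script: strongSwan hands the mark to the updown script as a `value/mask` string, and the bash replace-all expansion turns the `/` into a space so the unquoted expansion word-splits into a two-element array. A small standalone sketch (the value is illustrative, not read from a live daemon):

```shell
#!/bin/bash
# strongSwan exports marks to updown scripts as "value/mask",
# e.g. mark=100 in ipsec.conf arrives as "100/0xffffffff".
PLUTO_MARK_OUT="100/0xffffffff"   # illustrative value

# ${var//pattern/replacement}: replace every "/" with a space;
# the unquoted expansion then word-splits into an array.
PLUTO_MARK_OUT_ARR=(${PLUTO_MARK_OUT//// })

echo "okey=${PLUTO_MARK_OUT_ARR[0]} mask=${PLUTO_MARK_OUT_ARR[1]}"
# prints: okey=100 mask=0xffffffff
```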

prod [root@dcvpnl001prpitx /]# ip xfrm policy
src 0.0.0.0/0 dst 0.0.0.0/0 
        dir out priority 399999 ptype main 
        mark 0x64/0xffffffff
        tmpl src 31.157.3.161 dst 52.220.221.17
                proto esp spi 0x7842823f reqid 1 mode tunnel
src 0.0.0.0/0 dst 0.0.0.0/0 
        dir fwd priority 399999 ptype main 
        mark 0x64/0xffffffff
        tmpl src 52.220.221.17 dst 31.157.3.161
                proto esp reqid 1 mode tunnel
src 0.0.0.0/0 dst 0.0.0.0/0 
        dir in priority 399999 ptype main 
        mark 0x64/0xffffffff
        tmpl src 52.220.221.17 dst 31.157.3.161
                proto esp reqid 1 mode tunnel
src 0.0.0.0/0 dst 0.0.0.0/0 
        socket in priority 0 ptype main 
src 0.0.0.0/0 dst 0.0.0.0/0 
        socket out priority 0 ptype main 
src 0.0.0.0/0 dst 0.0.0.0/0 
        socket in priority 0 ptype main 
src 0.0.0.0/0 dst 0.0.0.0/0 
        socket out priority 0 ptype main 
src ::/0 dst ::/0 
        socket in priority 0 ptype main 
src ::/0 dst ::/0 
        socket out priority 0 ptype main 
src ::/0 dst ::/0 
        socket in priority 0 ptype main 
src ::/0 dst ::/0 
        socket out priority 0 ptype main
prod [root@dcvpnl001prpitx /]# ip xfrm state
src 31.157.3.161 dst 52.220.221.17
        proto esp spi 0x7842823f reqid 1 mode tunnel
        replay-window 0 flag af-unspec
        mark 0x64/0xffffffff
        auth-trunc hmac(sha1) 0x1f6b230d938542bd7b80b1481b06783fab1f4881 96
        enc cbc(aes) 0x2067609a189f8e1467c2e8cac62744ee
        encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
        anti-replay context: seq 0x0, oseq 0x172, bitmap 0x00000000
src 52.220.221.17 dst 31.157.3.161
        proto esp spi 0xc0f82071 reqid 1 mode tunnel
        replay-window 32 flag af-unspec
        auth-trunc hmac(sha1) 0xb2805380ac738d9b85fb15da6c9090f24225000e 96
        enc cbc(aes) 0x0e0b06120c6207fc6c342a1952814a51
        encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
        anti-replay context: seq 0xea, oseq 0x0, bitmap 0xffffffff
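
One thing worth cross-checking in output like the above: with `mark=100` (0x64), the out/fwd/in policies only match packets that actually carry fwmark 0x64. Outbound packets pick it up from the VTI's `okey`; inbound ESP only gets it from the iptables MARK rule installed by the updown script. A diagnostic sketch (commands need root; exact output format varies by iproute2/iptables version):

```shell
# marks on the installed policies
ip xfrm policy | grep mark
# ikey/okey bound to the VTI device
ip -d link show vti1
# is the inbound MARK rule actually installed?
iptables -t mangle -L INPUT -n -v
```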

I think it could be something with the traffic selectors. Any recommendations? Thanks

#3 Updated by Edvinas Kaikaris 5 months ago

Edvinas Kaikaris wrote:

The only problem left is that I can't originate traffic from the strongSwan box with any source IP other than the vti1 subnet 169.254.13.208/30. It does forward all the needed packets from other boxes to AWS, though. Please help :)

#4 Updated by Edvinas Kaikaris 5 months ago

Edvinas Kaikaris wrote:

Edvinas Kaikaris wrote:

The only problem left is that I can't originate traffic from the strongSwan box with any source IP other than the vti1 subnet 169.254.13.208/30. It does forward all the needed packets from other boxes to AWS, though. Please help :)

UPD:

What I noticed further:

Linux sends duplicated packets: one through the vti1 interface and the other one via another interface (even though the routing table says it is sent through vti1).

(attached screenshot: image.png)

ip route get 10.64.36.246
10.64.36.246 via 169.254.13.209 dev vti1 src 169.254.13.210
cache

dcvpnl001prpitx# sho ip route 10.64.36.246
Routing entry for 10.64.32.0/19
  Known via "bgp", distance 20, metric 100, best
  Last update 2d15h11m ago
  * 169.254.13.209, via vti1

Routing entry for 10.64.32.0/19
  Known via "ospf1", distance 110, metric 50, tag 100
  Last update 2d20h47m ago
    10.254.1.182, via p2p1.401
    10.254.1.180, via p2p2.400

It seems like the one that goes through vti1 is rejected ("no response found"). Could you elaborate on why this behaviour might occur?

Thanks
