Issue #3268

Traffic disruption -- policy-based VPN to AWS VPN service

Added by John Floroiu about 1 year ago. Updated about 1 year ago.

Status: Feedback
Priority: Normal
Assignee: -
Category: configuration
Affected version: 5.6.2
Resolution:
Description

Hi!

I am setting up a policy-based VPN between strongSwan and the AWS VPN service. AWS offers two VPN terminations on its side:

Status of IKE charon daemon (strongSwan 5.6.2, Linux 4.15.0-1052-aws, x86_64):
  uptime: 3 hours, since Nov 12 12:37:18 2019
  malloc: sbrk 1839104, mmap 0, used 1014272, free 824832
  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 6
  loaded plugins: charon aesni aes rc2 sha2 sha1 md4 md5 mgf1 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl fips-prf gmp agent xcbc hmac gcm attr kernel-netlink resolve socket-default connmark stroke updown eap-mschapv2 xauth-generic counters
Listening IP addresses:
  172.25.134.118
Connections:
      aws-31:  172.25.134.118...3.133.9.79  IKEv2
      aws-31:   local:  [3.10.27.218] uses pre-shared key authentication
      aws-31:   remote: [3.133.9.79] uses pre-shared key authentication
      aws-31:   child:  172.25.132.0/22 === 100.64.0.0/10 TUNNEL
      aws-32:  172.25.134.118...3.134.36.6  IKEv2
      aws-32:   local:  [3.10.27.218] uses pre-shared key authentication
      aws-32:   remote: [3.134.36.6] uses pre-shared key authentication
      aws-32:   child:  172.25.132.0/22 === 100.64.0.0/10 TUNNEL
Security Associations (2 up, 0 connecting):
      aws-32[8]: ESTABLISHED 35 minutes ago, 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6]
      aws-32[8]: IKEv2 SPIs: 8aa083f807338fca_i* 2dfefa7c4aeb99c5_r, pre-shared key reauthentication in 18 minutes
      aws-32[8]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-32{30}:  INSTALLED, TUNNEL, reqid 4, ESP in UDP SPIs: cef3383c_i af48ed47_o
      aws-32{30}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 0 bytes_i, 0 bytes_o, rekeying in 7 minutes
      aws-32{30}:   172.25.132.0/22 === 100.64.0.0/10
      aws-31[7]: ESTABLISHED 38 minutes ago, 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79]
      aws-31[7]: IKEv2 SPIs: 8b3dba66d96982a6_i* 45fafb373bf8c6c6_r, pre-shared key reauthentication in 16 minutes
      aws-31[7]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-31{29}:  INSTALLED, TUNNEL, reqid 4, ESP in UDP SPIs: c139e343_i 83ce5f2b_o
      aws-31{29}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 42000 bytes_i (500 pkts, 1s ago), 5964 bytes_o (71 pkts, 450s ago), rekeying in 5 minutes

I am running a ping from 100.64.1.1 to 172.25.134.118. I see the ping requests arriving at 172.25.134.118 and being decrypted, but 172.25.134.118 only sends ping responses intermittently.
The traffic flows at all times over the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] tunnel.
I was able to trace the gaps in the ping responses back to the times when the two SAs are rekeyed, as indicated below (look for "rekeying in 0 seconds"):

Traffic Starts
--------------
Security Associations (2 up, 0 connecting):
      aws-32[14]: ESTABLISHED 28 minutes ago, 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6]
      aws-32[14]: IKEv2 SPIs: b3e0aeb4ef497bf5_i* eb0479254b1fe3ef_r, pre-shared key reauthentication in 28 minutes
      aws-32[14]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-32{52}:  INSTALLED, TUNNEL, reqid 6, ESP in UDP SPIs: ccf4fc7b_i 3e83309e_o
      aws-32{52}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 0 bytes_i, 0 bytes_o, rekeying in 82 seconds
      aws-32{52}:   172.25.132.0/22 === 100.64.0.0/10
      aws-31[13]: ESTABLISHED 30 minutes ago, 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79]
      aws-31[13]: IKEv2 SPIs: fbf416597b025dac_i* 5da9bf2a73623436_r, pre-shared key reauthentication in 25 minutes
      aws-31[13]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-31{51}:  INSTALLED, TUNNEL, reqid 6, ESP in UDP SPIs: cc84cff9_i ab156f29_o
      aws-31{51}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 76692 bytes_i (913 pkts, 1s ago), 11172 bytes_o (133 pkts, 816s ago), rekeying in 0 seconds
      aws-31{51}:   172.25.132.0/22 === 100.64.0.0/10                                                                                         ^
                                                                                                                                              |
                                                                                                             traffic starts ------------------+
Traffic Stops
-------------
Security Associations (2 up, 0 connecting):
      aws-32[14]: ESTABLISHED 29 minutes ago, 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6]
      aws-32[14]: IKEv2 SPIs: b3e0aeb4ef497bf5_i* eb0479254b1fe3ef_r, pre-shared key reauthentication in 27 minutes
      aws-32[14]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-32{52}:  INSTALLED, TUNNEL, reqid 6, ESP in UDP SPIs: ccf4fc7b_i 3e83309e_o
      aws-32{52}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 0 bytes_i, 0 bytes_o (0 pkts, 2s ago), rekeying in 0 seconds
      aws-32{52}:   172.25.132.0/22 === 100.64.0.0/10                                                          ^
                                                                                                               |
                                                                             traffic stops --------------------+
      aws-31[13]: ESTABLISHED 32 minutes ago, 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79]
      aws-31[13]: IKEv2 SPIs: fbf416597b025dac_i* 5da9bf2a73623436_r, pre-shared key reauthentication in 24 minutes
      aws-31[13]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-31{53}:  INSTALLED, TUNNEL, reqid 6, ESP in UDP SPIs: c8572b72_i 44054996_o
      aws-31{53}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 6888 bytes_i (82 pkts, 0s ago), 6804 bytes_o (81 pkts, 0s ago), rekeying in 12 minutes
      aws-31{53}:   172.25.132.0/22 === 100.64.0.0/10
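One way to check this correlation is to pull each CHILD_SA's rekey countdown out of the `ipsec statusall` output. A minimal parsing sketch, run here against two sample lines copied from the output above (the extraction is illustrative, not a strongSwan tool):

```shell
# Extract "<child-sa> <rekey countdown>" pairs from captured status output.
status='      aws-32{52}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 0 bytes_i, 0 bytes_o, rekeying in 82 seconds
      aws-31{51}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 76692 bytes_i (913 pkts, 1s ago), 11172 bytes_o (133 pkts, 816s ago), rekeying in 0 seconds'

rekeys=$(printf '%s\n' "$status" | sed -n 's/^ *\([a-z0-9-]*{[0-9]*}\).*rekeying in \(.*\)/\1 \2/p')
printf '%s\n' "$rekeys"
```

On a live host the same pipeline can be fed from `ipsec statusall` directly to watch which SA is about to rekey.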

This seems to be confirmed in the kernel, where the IPsec policy is associated with whichever IPsec SA was most recently rekeyed, whereas the traffic appears to remain pinned to one specific SA.

root@ip-172-25-134-118:~# ip xfrm policy
src 172.25.132.0/22 dst 100.64.0.0/10
        dir out priority 383615
        tmpl src 172.25.134.118 dst 3.134.36.6                                                     <--- policy points to the latest rekeyed SA
                proto esp spi 0xfaa11f79 reqid 6 mode tunnel
src 100.64.0.0/10 dst 172.25.132.0/22
        dir fwd priority 383615
        tmpl src 3.134.36.6 dst 172.25.134.118
                proto esp reqid 6 mode tunnel
src 100.64.0.0/10 dst 172.25.132.0/22
        dir in priority 383615
        tmpl src 3.134.36.6 dst 172.25.134.118
                proto esp reqid 6 mode tunnel
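To see at a glance which peer the outbound policy currently points at, the `tmpl` line following `dir out` can be parsed. A minimal sketch against a captured dump (sample data copied from above; on the live host the input would come from `ip xfrm policy` directly):

```shell
# Print the tunnel endpoint in the template of the outbound policy.
dump='src 172.25.132.0/22 dst 100.64.0.0/10
        dir out priority 383615
        tmpl src 172.25.134.118 dst 3.134.36.6
                proto esp spi 0xfaa11f79 reqid 6 mode tunnel'

# On the "tmpl" line, field 5 is the remote tunnel endpoint.
peer=$(printf '%s\n' "$dump" | awk '/dir out/ {found=1} found && /tmpl/ {print $5; exit}')
echo "$peer"
```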

Rekeying happens now:

Security Associations (2 up, 0 connecting):
      aws-32[49]: ESTABLISHED 19 minutes ago, 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6]
      aws-32[49]: IKEv2 SPIs: 110d0156b42d5d74_i* b1421b30302f9caa_r, pre-shared key reauthentication in 35 minutes
      aws-32[49]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-32{188}:  INSTALLED, TUNNEL, reqid 6, ESP in UDP SPIs: cbc9d297_i faa11f79_o
      aws-32{188}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 0 bytes_i, 0 bytes_o, rekeying in 10 minutes
      aws-32{188}:   172.25.132.0/22 === 100.64.0.0/10
      aws-31[48]: ESTABLISHED 31 minutes ago, 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79]
      aws-31[48]: IKEv2 SPIs: 419a3a9bd4f3bcc2_i* ef6e8850e0573492_r, pre-shared key reauthentication in 24 minutes
      aws-31[48]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-31[48]: Tasks active: CHILD_REKEY
      aws-31{187}:  DELETING, TUNNEL, reqid 6
      aws-31{187}:   172.25.132.0/22 === 100.64.0.0/10
      aws-31{189}:  INSTALLED, TUNNEL, reqid 6, ESP in UDP SPIs: c058615a_i f1a778c9_o
      aws-31{189}:  AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 0 bytes_i, 0 bytes_o, rekeying in 14 minutes      <--- the other SA was just rekeyed
      aws-31{189}:   172.25.132.0/22 === 100.64.0.0/10
root@ip-172-25-134-118:~# ip xfrm policy
src 172.25.132.0/22 dst 100.64.0.0/10
        dir out priority 383615
        tmpl src 172.25.134.118 dst 3.133.9.79                                                                 <--- policy is updated to latest rekeyed SA
                proto esp spi 0xf1a778c9 reqid 6 mode tunnel
src 100.64.0.0/10 dst 172.25.132.0/22
        dir fwd priority 383615
        tmpl src 3.133.9.79 dst 172.25.134.118
                proto esp reqid 6 mode tunnel
src 100.64.0.0/10 dst 172.25.132.0/22
        dir in priority 383615
        tmpl src 3.133.9.79 dst 172.25.134.118
                proto esp reqid 6 mode tunnel    

I also noticed that both SAs use the same reqid, so I configured the tunnels to use distinct reqids, but in that case only one tunnel is brought up:

Connections:
      aws-31:  172.25.134.118...3.133.9.79  IKEv2
      aws-31:   local:  [3.10.27.218] uses pre-shared key authentication
      aws-31:   remote: [3.133.9.79] uses pre-shared key authentication
      aws-31:   child:  172.25.132.0/22 === 100.64.0.0/10 TUNNEL
      aws-32:  172.25.134.118...3.134.36.6  IKEv2
      aws-32:   local:  [3.10.27.218] uses pre-shared key authentication
      aws-32:   remote: [3.134.36.6] uses pre-shared key authentication
      aws-32:   child:  172.25.132.0/22 === 100.64.0.0/10 TUNNEL
Security Associations (1 up, 0 connecting):
      aws-31[4]: ESTABLISHED 2 minutes ago, 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79]
      aws-31[4]: IKEv2 SPIs: 0f83cf1763a1446d_i* 834de6a530b7cbfb_r, pre-shared key reauthentication in 52 minutes
      aws-31[4]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
      aws-31{10}:  INSTALLED, TUNNEL, reqid 31, ESP in UDP SPIs: c06ae396_i 24de747f_o
      aws-31{10}:  AES_CBC_256/HMAC_SHA2_256_128, 12096 bytes_i (144 pkts, 1s ago), 12096 bytes_o (144 pkts, 1s ago), rekeying in 11 minutes
      aws-31{10}:   172.25.132.0/22 === 100.64.0.0/10
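For reference, the distinct-reqid experiment corresponds to something like the following ipsec.conf fragment. This is a hypothetical reconstruction (the actual configuration is not attached to this issue); addresses are taken from the status output above:

```
conn aws-31
    keyexchange=ikev2
    authby=psk
    left=172.25.134.118
    leftid=3.10.27.218
    leftsubnet=172.25.132.0/22
    right=3.133.9.79
    rightsubnet=100.64.0.0/10
    reqid=31
    auto=start

conn aws-32
    also=aws-31
    right=3.134.36.6
    reqid=32
```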

Any suggestions would be greatly appreciated.

Thanks,
John

History

#1 Updated by Tobias Brunner about 1 year ago

  • Description updated (diff)
  • Category set to configuration
  • Status changed from New to Feedback

but 172.25.134.118 sends ping responses intermittently.

Do you mean it doesn't always send them, or that they just don't arrive every time? (Check traffic counters and maybe traffic captures.)

The traffic flows at all times over the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] tunnel.

There will not be any load balancing if that's what you mean. You have SAs with duplicate policies, so only one of the SAs will be used to send traffic (traffic may be received via both SAs, though).

I also noticed that both SAs use the same reqid, so I have configured the tunnels to use distinct reqid's, but in that case only one tunnel is brought up:

Only a single policy for the same traffic selectors can be installed in the kernel, so that will not work unless you set marks (but that requires marking the traffic).

#2 Updated by John Floroiu about 1 year ago

Hi and thanks for the feedback!

To summarize:
- The ping requests always arrive at the strongswan box and are decrypted. The ping responses are only sent out after the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] SA is rekeyed and until 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6] is rekeyed. And the process repeats.
- ip xfrm indicates that the policy is associated with the last rekeyed SA. On the other hand, the outgoing traffic seems to be pinned to the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] SA and dropped by the other SA.

You write: "Only a single policy for the same traffic selectors can be installed in the kernel". Fine, but why is the outgoing traffic encrypted by the policy pointing to the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] SA and not by the policy pointing to the 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6] SA, when both policies match the same TS?

By "will not work unless you set marks (but that requires marking the traffic)" do you mean using a route-based VPN (0.0.0.0/0 === 0.0.0.0/0 selectors and routing over vti's)? If yes, then yes, route-based VPNs do work.
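For completeness, the route-based setup referred to here typically looks like the following sketch (interface names, mark values, and route metrics are illustrative, not taken from this issue):

```
# One VTI per tunnel, keyed by a mark; the corresponding strongSwan
# connections would set mark=31 / mark=32 and 0.0.0.0/0 selectors.
ip link add vti31 type vti local 172.25.134.118 remote 3.133.9.79 key 31
ip link add vti32 type vti local 172.25.134.118 remote 3.134.36.6 key 32
ip link set vti31 up
ip link set vti32 up
sysctl -w net.ipv4.conf.vti31.disable_policy=1
sysctl -w net.ipv4.conf.vti32.disable_policy=1
# Steer traffic by routing, e.g. prefer one tunnel with a lower metric.
ip route add 100.64.0.0/10 dev vti31 metric 100
ip route add 100.64.0.0/10 dev vti32 metric 200
```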

Thanks,
John

#3 Updated by Tobias Brunner about 1 year ago

The ping responses are only sent out after the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] SA is rekeyed and until 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6] is rekeyed.

Are no responses actually sent? Or can the peer just not process them because they are sent via the "wrong" SA (which your statement afterwards seems to indicate)?

Fine, but why is the outgoing traffic encrypted by the policy pointing to the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] SA and not by the policy pointing to the 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6] SA when both policies are matching the same TS?

Not sure what your question aims at exactly. It's simply that the last one installed will be used (check the log). If the peer gets confused when there are multiple SAs, only configure/negotiate one.

By "will not work unless you set marks (but that requires marking the traffic)" do you mean using a route-based VPN (0.0.0.0/0 === 0.0.0.0/0 selectors and routing over vti's)?

No, marks have nothing to do with route-based VPNs (VTIs just also rely on them). You'd have to mark the traffic you want to tunnel through one or the other SA (using socket options or firewall rules - that's what VTIs basically do for you).
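As a sketch of the mark-based approach (hypothetical values, not a tested configuration): each connection gets its own mark, and a firewall rule decides which SA outbound traffic is tunneled through:

```
# /etc/ipsec.conf (fragment): bind each connection's policies and SAs
# to a distinct mark
conn aws-31
    mark=31
conn aws-32
    mark=32

# Mark locally generated traffic so it matches the aws-31 policy.
iptables -t mangle -A OUTPUT -s 172.25.132.0/22 -d 100.64.0.0/10 -j MARK --set-mark 31
```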

#4 Updated by John Floroiu about 1 year ago

Hi!

No responses are sent out from the strongswan host when the policy for 172.25.132.0/22 === 100.64.0.0/10 is associated to the 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6] SA (as indicated by ip xfrm policy).

Below is the tcpdump output on the strongswan host showing ping requests arriving and being decrypted (and the sequence repeats itself):

14:35:30.151206 0a:51:ad:0f:2d:4a > 0a:41:80:1c:b8:f4, ethertype IPv4 (0x0800), length 178: (tos 0x0, ttl 232, id 64326, offset 0, flags [none], proto UDP (17), length 164)
3.133.9.79.4500 > 172.25.134.118.4500: [no cksum] UDP-encap: ESP, length 136
14:35:30.151206 0a:51:ad:0f:2d:4a > 0a:41:80:1c:b8:f4, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 3577, offset 0, flags [DF], proto ICMP (1), length 84)
100.64.1.1 > 172.25.134.118: ICMP echo request, id 30408, seq 241, length 64

Ping responses are sent out from the strongswan host (only) when the policy for 172.25.132.0/22 === 100.64.0.0/10 is associated to the 172.25.134.118[3.10.27.218]...3.133.9.79[3.133.9.79] SA. (Remember the IPsec policy association to a SA changes with each SA rekeying, which yields an on-off repeating pattern.)

Below is the tcpdump output showing ping requests arriving and being decrypted, and ping responses being sent back. The ICMP echo responses themselves are not shown in the tcpdump output, but the last UDP-encap message in the capture is one of them, because they start arriving on the 100.64.1.1 host.

14:49:06.797508 0a:51:ad:0f:2d:4a > 0a:41:80:1c:b8:f4, ethertype IPv4 (0x0800), length 178: (tos 0x0, ttl 232, id 37537, offset 0, flags [none], proto UDP (17), length 164)
3.133.9.79.4500 > 172.25.134.118.4500: [no cksum] UDP-encap: ESP, length 136
14:49:06.797508 0a:51:ad:0f:2d:4a > 0a:41:80:1c:b8:f4, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 35235, offset 0, flags [DF], proto ICMP (1), length 84)
100.64.1.1 > 172.25.134.118: ICMP echo request, id 30408, seq 1039, length 64
14:49:06.797587 0a:41:80:1c:b8:f4 > 0a:51:ad:0f:2d:4a, ethertype IPv4 (0x0800), length 178: (tos 0x0, ttl 64, id 27355, offset 0, flags [none], proto UDP (17), length 164)
172.25.134.118.4500 > 3.133.9.79.4500: [no cksum] UDP-encap: ESP, length 136

Thanks,
John

#5 Updated by Tobias Brunner about 1 year ago

No responses are sent out from the strongswan host when the policy for 172.25.132.0/22 === 100.64.0.0/10 is associated to the 172.25.134.118[3.10.27.218]...3.134.36.6[3.134.36.6] SA (as indicated by ip xfrm policy).

They might get dropped by the firewall. `iptables-save`? Did you configure leftfirewall=yes? Any errors in /proc/net/xfrm_stat (if available) or ip -s xfrm state?

By the way, your account's email address seems to be invalid.

#6 Updated by John Floroiu about 1 year ago

Outputs attached:

root@ip-172-25-134-118:~# iptables-save
# Generated by iptables-save v1.6.1 on Fri Nov 15 15:36:25 2019
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Fri Nov 15 15:36:25 2019
# Generated by iptables-save v1.6.1 on Fri Nov 15 15:36:25 2019
*mangle
:PREROUTING ACCEPT [4789:574019]
:INPUT ACCEPT [4789:574019]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1400:260647]
:POSTROUTING ACCEPT [1400:260647]
COMMIT
# Completed on Fri Nov 15 15:36:25 2019
# Generated by iptables-save v1.6.1 on Fri Nov 15 15:36:25 2019
*filter
:INPUT ACCEPT [85150:146893182]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [63367:7774054]
COMMIT
# Completed on Fri Nov 15 15:36:25 2019

root@ip-172-25-134-118:~# cat /proc/net/xfrm_stat
XfrmInError 0
XfrmInBufferError 0
XfrmInHdrError 0
XfrmInNoStates 632
XfrmInStateProtoError 0
XfrmInStateModeError 0
XfrmInStateSeqError 0
XfrmInStateExpired 0
XfrmInStateMismatch 0
XfrmInStateInvalid 0
XfrmInTmplMismatch 19759
XfrmInNoPols 0
XfrmInPolBlock 0
XfrmInPolError 0
XfrmOutError 0
XfrmOutBundleGenError 0
XfrmOutBundleCheckError 0
XfrmOutNoStates 0
XfrmOutStateProtoError 0
XfrmOutStateModeError 0
XfrmOutStateSeqError 0
XfrmOutStateExpired 0
XfrmOutPolBlock 0
XfrmOutPolDead 0
XfrmOutPolError 0
XfrmFwdHdrError 0
XfrmOutStateInvalid 0
XfrmAcquireError 13

root@ip-172-25-134-118:~# ip -s xfrm state
src 172.25.134.118 dst 3.133.9.79
proto esp spi 0xcacce0e4(3402424548) reqid 2(0x00000002) mode tunnel
replay-window 0 seq 0x00000000 flag af-unspec (0x00100000)
auth-trunc hmac(sha256) 0x8faa3266fa65462f48b9b8f2ccc00f18392e92e6e0a4aaeb62cae41dad47503f (256 bits) 128
enc cbc(aes) 0x7fe4abd3a885804980d68e8584aa4b3b32239360b9e3ffdb0e692fe340a5639d (256 bits)
encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
anti-replay context: seq 0x0, oseq 0xce, bitmap 0x00000000
lifetime config:
limit: soft (INF)(bytes), hard (INF)(bytes)
limit: soft (INF)(packets), hard (INF)(packets)
expire add: soft 948(sec), hard 1200(sec)
expire use: soft 0(sec), hard 0(sec)
lifetime current:
17304(bytes), 206(packets)
add 2019-11-15 15:32:45 use 2019-11-15 15:36:18
stats:
replay-window 0 replay 0 failed 0
src 3.133.9.79 dst 172.25.134.118
proto esp spi 0xc56fd2b2(3312439986) reqid 2(0x00000002) mode tunnel
replay-window 32 seq 0x00000000 flag af-unspec (0x00100000)
auth-trunc hmac(sha256) 0xafc7e7dca1819c77b3ae968dca68e511587b0a74168ca7b089b94f7e48adc671 (256 bits) 128
enc cbc(aes) 0x9226f5385d6ec89048e2cdfe0209c6be0b31d18ec18fbdef0cd028dadb5da7b3 (256 bits)
encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
anti-replay context: seq 0xce, oseq 0x0, bitmap 0xffffffff
lifetime config:
limit: soft (INF)(bytes), hard (INF)(bytes)
limit: soft (INF)(packets), hard (INF)(packets)
expire add: soft 967(sec), hard 1200(sec)
expire use: soft 0(sec), hard 0(sec)
lifetime current:
17304(bytes), 206(packets)
add 2019-11-15 15:32:45 use 2019-11-15 15:36:18
stats:
replay-window 0 replay 0 failed 0
src 172.25.134.118 dst 3.134.36.6
proto esp spi 0xfdfafbba(4261084090) reqid 2(0x00000002) mode tunnel
replay-window 0 seq 0x00000000 flag af-unspec (0x00100000)
auth-trunc hmac(sha256) 0xc69378d23b27200627c6222092ed210885cade4710039b5cd23256d861ae6f7e (256 bits) 128
enc cbc(aes) 0xd7df8945f97c7a967f61746fcb2756fe2d3adcf33f44020f43890145c0c569bc (256 bits)
encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
lifetime config:
limit: soft (INF)(bytes), hard (INF)(bytes)
limit: soft (INF)(packets), hard (INF)(packets)
expire add: soft 992(sec), hard 1200(sec)
expire use: soft 0(sec), hard 0(sec)
lifetime current:
0(bytes), 0(packets)
add 2019-11-15 15:32:36 use -
stats:
replay-window 0 replay 0 failed 0
src 3.134.36.6 dst 172.25.134.118
proto esp spi 0xc1df247b(3252626555) reqid 2(0x00000002) mode tunnel
replay-window 32 seq 0x00000000 flag af-unspec (0x00100000)
auth-trunc hmac(sha256) 0x865614bec2eb192e25d6a92f1a37116c95066b2491610d054a8291ac1c9df8d8 (256 bits) 128
enc cbc(aes) 0x620127fa257e3433e07a93cf4e10ede79839b1db55bb5cb7d9dc4351c279f682 (256 bits)
encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
lifetime config:
limit: soft (INF)(bytes), hard (INF)(bytes)
limit: soft (INF)(packets), hard (INF)(packets)
expire add: soft 873(sec), hard 1200(sec)
expire use: soft 0(sec), hard 0(sec)
lifetime current:
0(bytes), 0(packets)
add 2019-11-15 15:32:36 use -
stats:
replay-window 0 replay 0 failed 0

#7 Updated by Tobias Brunner about 1 year ago

XfrmInTmplMismatch 19759

Since the IP addresses of the currently "active" SA are in the template of the inbound policy, receiving packets from the other SA will result in such template mismatches and dropped packets (it's interesting that you see the packet in tcpdump before it gets dropped).
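The counter can be tracked directly to confirm this; a minimal sketch that pulls it out of a captured /proc/net/xfrm_stat snapshot (sample value from comment #6; on a live host, read the file twice while pinging and compare):

```shell
# Extract the XfrmInTmplMismatch counter from an xfrm_stat snapshot.
snapshot='XfrmInNoStates 632
XfrmInTmplMismatch 19759
XfrmInNoPols 0'

mismatches=$(printf '%s\n' "$snapshot" | awk '$1 == "XfrmInTmplMismatch" {print $2}')
echo "$mismatches"
```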
