IP Forwarding sysctl causes kernel trace at `dev_disable_lro`

My app needs to enable IP forwarding. This used to work until a few days ago, but now it leads to a kernel trace.
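
For reference, the trigger is just the standard sysctl toggle (the same command that shows up in the app logs further down); nothing unusual about the invocation:

/ # sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1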

[    0.081973] ------------[ cut here ]------------
[    0.081976] netdevice: eth0: failed to disable LRO!
[    0.082010] WARNING: CPU: 0 PID: 517 at net/core/dev.c:1722 dev_disable_lro+0xfc/0x120
[    0.082021] Modules linked in:
[    0.082022] CPU: 0 PID: 517 Comm: sysctl Not tainted 5.12.2 #1
[    0.082025] RIP: 0010:dev_disable_lro+0xfc/0x120
[    0.082027] Code: d6 81 74 14 be 25 00 00 00 4c 89 e7 e8 0d ba e9 ff 48 85 c0 4d 0f 44 ec 48 89 da 4c 89 ee 48 c7 c7 20 95 d8 81 e8 d4 bc aa ff <0f> 0b e9 24 ff ff ff 84 c0 48 c7 c3 29 01 d6 81 74 ba 3c 01 48 c7
[    0.082028] RSP: 0018:ffffc900000ebd30 EFLAGS: 00010282
[    0.082030] RAX: 0000000000000000 RBX: ffffffff81d34f8b RCX: c0000000fffeffff
[    0.082031] RDX: ffffc900000ebae8 RSI: 00000000fffeffff RDI: ffffffff82769f0c
[    0.082032] RBP: ffffc900000ebd48 R08: 0000000000000003 R09: 0000000000000001
[    0.082033] R10: 0000000000000000 R11: ffffc900000ebae0 R12: ffff888003250000
[    0.082034] R13: ffff888003250000 R14: ffffffff82573dd0 R15: 0000000000000001
[    0.082036] FS:  00007f40ca388b48(0000) GS:ffff88800f800000(0000) knlGS:0000000000000000
[    0.082039] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.082040] CR2: 0000561b28f8f5c7 CR3: 00000000041a0000 CR4: 0000000000350eb0
[    0.082041] Call Trace:
[    0.082046]  devinet_sysctl_forward+0x1bd/0x1f0
[    0.082057]  proc_sys_call_handler+0x150/0x210
[    0.082065]  proc_sys_write+0xe/0x10
[    0.082067]  new_sync_write+0x110/0x1b0
[    0.082069]  vfs_write+0x15d/0x240
[    0.082071]  ksys_write+0x59/0xd0
[    0.082073]  __x64_sys_write+0x15/0x20
[    0.082074]  do_syscall_64+0x37/0x50
[    0.082079]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[    0.082084] RIP: 0033:0x7f40ca3473ad
[    0.082085] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
[    0.082087] RSP: 002b:00007ffd92ae8af8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[    0.082088] RAX: ffffffffffffffda RBX: 00007f40ca388b48 RCX: 00007f40ca3473ad
[    0.082089] RDX: 0000000000000001 RSI: 00007f40ca389cb4 RDI: 0000000000000003
[    0.082090] RBP: 00007f40ca389cb4 R08: 0000000000000000 R09: 0000000000000000
[    0.082091] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
[    0.082092] R13: 00007f40ca388b7c R14: 00007f40ca388b48 R15: 0000000000000000
[    0.082094] ---[ end trace 46eb25ffb47e1f45 ]---

IP forwarding is observed to be enabled after this, but the kernel does not in fact forward packets despite incoming traffic over WireGuard; the FORWARD chain counters below never move off zero.

/ # cat /proc/sys/net/ipv4/ip_forward
1

/ # iptables -vL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
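
(For completeness, the per-interface forwarding flags can be read the same way, e.g. the two below; I'm not reproducing their output here.)

/ # cat /proc/sys/net/ipv4/conf/all/forwarding
/ # cat /proc/sys/net/ipv4/conf/eth0/forwarding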

Huh! The kernel hasn’t changed! Can you get us the app name somehow so we can see where it got scheduled?

app name: wg
vm id: a27d05d1
was scheduled in sjc.

These are the last logs, with some fanfare from NATS.

2021-11-09T19:01:58.103 app[a27d05d1] sjc [info] + sysctl -w net.ipv4.ip_forward=1
2021-11-09T19:01:58.132 app[a27d05d1] sjc [info] [#] ip link set mtu 1340 up dev wg0
2021-11-09T19:12:03.800 app[a27d05d1] sjc [info] Reaped child process with pid: 678, exit code: 0
nats: Permissions Violation for Subscription to logs.wg.*.* on connection [243]

We’re digging in now!

Not quite sure whether there was a manual intervention or something else changed, but this VM started forwarding packets again.

I restarted the VM to see if the kernel trace was gone as well. It's still there, but at least the functionality is restored.

No. I’m actually sitting here bisecting Firecracker tag-by-tag to see if I can find a release at which this broke, which I guess you just saved me the trouble of doing!

What’s the instance / region where this worked?

Another thing you could check for me is whether large-receive-offload is still locked on, with `ethtool -k eth0`. (For that matter: though it’s working now, do you still get the kernel LRO trace?)
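
If it is locked on, I’d expect output along these lines (trimmed to the relevant line; the "[fixed]" means the driver won’t let the flag be cleared, which is what `dev_disable_lro` is complaining about when forwarding gets flipped on):

/ # ethtool -k eth0 | grep large-receive-offload
large-receive-offload: on [fixed]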

machine details
alloc id: a27d05d1-3a96-353d-6745-8669bc311c84
region: sjc

Large-receive-offload is on and fixed. As you can guess, I’m still getting the LRO-related trace.

I don’t know how it managed to get out of the non-forwarding state back to normal without even restarting the instance, though.

I have some reason to believe the trace doesn’t mean anything, and is just an annoying thing we have to deal with in this current configuration: as far as I can tell, the path in the trace is just the forwarding sysctl trying to switch LRO off on each device, and the WARN fires because the driver has LRO fixed on, even though the setting itself still lands. If you get an instance into a fucky state again that I can actually look at, let me know and I’ll tinker more.
