While building my home lab, I ran into a bit of trouble setting up jumbo frames (MTU 9000), which are supposed to be faster than the standard frame size of MTU 1500. To set them up, I changed the MTU on both the ESXi host and the Synology DS1513+. The steps involved are pretty simple and straightforward.
In the vSphere Client, click on the ESXi host and then the Configuration tab on the right side. Within the Configuration tab, select Networking on the left side and a switch diagram shows up. Click the “Properties…” link at the top right, and the vSwitch0 Properties dialog box pops up. Select the port group used for NFS (which I share with the management network); the “Management Network Properties” dialog box shows up, where you can change the MTU to 9000 in the “NIC Settings” group. Equally important is to open the vSwitch Properties dialog box and change the MTU in the “Advanced Properties” section, which I did not do the first time.
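The port group step can also be done from the ESXi command line instead of the GUI. This is a sketch, not the exact steps I took; it assumes the NFS traffic goes over the default VMkernel interface vmk0, so adjust the name for your setup:

```shell
# Raise the MTU on the VMkernel interface carrying NFS traffic
# (vmk0 is an assumed interface name; list yours with
#  "esxcli network ip interface list")
esxcli network ip interface set --interface-name=vmk0 --mtu=9000
```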
On the Synology side, changing the jumbo frame setting is even easier. After logging into the Synology DiskStation Manager, open the Control Panel and click the Network icon in the System section. In the Control Panel – Network dialog box, select the “Network Interface” tab and locate the NIC, with a name like LAN 1, LAN 2, or Bond 1 for an aggregated link. Once a NIC is selected, find the Jumbo Frame option in the content pane. By default, the choice is “Disable Jumbo Frame, the MTU value is 1500.” To change it, click the drop-down list and pick the last entry, 9000 MTU. Don’t forget to save by clicking the “Apply” button.
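If SSH is enabled on the DiskStation, you can double-check that the setting actually took effect from the shell. A small sketch, assuming eth0 is the interface behind LAN 1 (interface names and available tools vary across DSM versions):

```shell
# Print the effective MTU of the NIC (eth0 is an assumed interface name)
ip link show eth0 | grep -o 'mtu [0-9]*'
```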
After setting this up, I didn’t see any performance change; if anything, it was a bit slower than before. To dig into the problem, I found a VMware KB article, Troubleshooting disk latency when using Jumbo Frames with iSCSI or NFS datastores, which provides very good direction on troubleshooting.
The first thing is to log into the ESXi host via SSH (you have to enable it first, as it’s off by default after installation). Then, try vmkping with a jumbo-sized payload:
# vmkping -s 8784 -d 192.168.1.8
The 8784 is 9000 minus 216, the assumed header size. It did not work. To be sure, I also tried 9000 instead of 8784, and it still did not work. That means jumbo frames are not working.
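The payload arithmetic is easy to sanity-check. The 216-byte header figure is the one assumed above (many guides instead subtract 28 bytes for the IP and ICMP headers, giving 8972); either way the payload is the MTU minus the header overhead:

```shell
# vmkping payload size = target MTU minus assumed header overhead
mtu=9000
header=216
echo $((mtu - header))
```

Running this prints 8784, the value passed to -s above.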
There are multiple components on the data path, and any of them could cause the problem. On the ESXi side, the following two commands show the MTU settings on the NICs and the virtual switch port groups:
# esxcfg-nics -l
# esxcfg-vswitch -l
The first command showed that my NIC’s MTU was 1500. The second command showed that my port group was set to MTU 9000, but the virtual switch itself was at 1500.
It was now clear that the NIC was not configured correctly. The problem is that there is no command to configure the NIC MTU directly. It turns out the NIC follows the virtual switch setting: if you change the virtual switch MTU, the physical NIC attached to it follows.
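Because the uplink follows the vSwitch, raising the vSwitch MTU is the one change needed. A sketch from the ESXi shell (vSwitch0 is the assumed switch name):

```shell
# Raising the vSwitch MTU also pushes the new MTU down to its physical uplinks
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Verify: the uplink NIC should now list MTU 9000 as well
esxcfg-nics -l
```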
Once the cause was clear, the solution was easy. But while changing the virtual switch MTU from the vSphere Client, I got this error message:
Call "HostNetworkSystem.UpdateVirtualSwitch" for object "networkSystem-123" on vCenter Server "vCSA" failed. Operation failed, diagnostics report: Unable to set MTU to 9000 the following uplinks refused the MTU setting: vmnic0
It turns out that the physical NIC in one of my servers does not support jumbo frames. There is not much I can do about it short of replacing the NIC or using another server, whose NIC does work fine. A bit of further reading on Synology’s forum suggests the performance gain from jumbo frames with ESXi and Synology is really marginal, so I reverted the jumbo frame settings back to normal.