July 3, 2012 at 7:04 pm #43394
Currently running a Lan-to-Lan configuration with ZS on each end.
Each server has the same hardware specs (Sandy Bridge CPU and Intel NICs), with ESXi 5.0 embedded. ZS runs as a VM on each end with active profiles. One server is in the data center with public IPs on a GigE interface; the other is at a remote site with bonded T1s (4.5mbps/4.5mbps) and a VDSL connection (40mbps/5mbps).
Right now we’re not running with load balancing, as I need to implement some sort of adaptive load balancing first (a 50/50 split won’t offer full throughput on links this mismatched).
The problem I’m having is with the VDSL side. When I run traffic through ZS over the T1s, everything performs great; not a single hiccup. Over the VDSL connection, however, it acts up intermittently. At first it performed great without any issues, but 1-2 hours later it went crazy and wouldn’t allow more than 1mbps downstream, with packet loss around 5%. I tried changing settings, and at one point I hardcoded all of my interfaces (including the ESXi interfaces) to 100 Full to rule out duplex issues. When I did that it started working again, so I thought that was it. 1-2 hours later it happened again: packet loss and severely limited downstream throughput.
Firewall rules are at their defaults, so everything should be allowed through, and given we don’t have this issue on the T1 connection through ZS, I wouldn’t think it’s a firewall issue.
The only key difference between the two connections is that the VDSL connection terminates on a router running PPPoE with an MTU of 1480. Part of me thinks it may be an MTU issue, though it’s strange that the problem isn’t 100% consistent. It’s not the router itself, because I can bypass Zeroshell and not have a single problem. There are no errors on the interfaces, with the exception of the ZS VPN that handles the VDSL connection:
“read UDPv4 [EMSGSIZE Path-MTU=1480]: Message too long (code=90)”
That error seems to correlate with the moment it all starts to go to hell and throughput becomes limited.
Should I be adjusting the MTU settings within Zeroshell?
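For a sense of why the tunnel might be choking, the VPN payload has to fit inside the 1480-byte path MTU after OpenVPN’s own encapsulation is added. A rough budget sketch follows; the overhead figures are assumptions on my part (the real OpenVPN overhead depends on cipher and HMAC settings), not measured values:

```shell
# Rough payload budget for an OpenVPN UDP tunnel over a 1480-byte PPPoE path.
# Assumed overheads (approximate, not measured):
#   20 bytes outer IPv4 header + 8 bytes UDP header,
#   plus roughly 40-50 bytes of OpenVPN framing/crypto overhead.
PATH_MTU=1480
IP_UDP_OVERHEAD=28
OPENVPN_OVERHEAD=45   # rough midpoint; varies with cipher/HMAC settings
PAYLOAD=$((PATH_MTU - IP_UDP_OVERHEAD - OPENVPN_OVERHEAD))
echo "safe tunnel payload is roughly ${PAYLOAD} bytes"
```

Which is why values around 1400 keep coming up as suggested settings: they leave a little headroom under the 1480-byte limit.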
Currently using the ZS beta16 ISO.
We simply have each connection configured within Zeroshell with VPNs, bonded (currently with fault tolerance only). Static routes are implemented on each end. Not using Netbalancer.

July 5, 2012 at 2:40 pm #52378
Tested it some more, but with load balancing enabled.
It’s almost like clockwork: it performs great for about 1.5-2 hours and then degrades. I’m seeing this error on the VPN interface at the site interfacing with the DSL circuit:
read UDPv4 [EMSGSIZE Path-MTU=1480]: Message too long (code=90)
This error is thrown and shortly after the performance drops for the DSL connection within Zeroshell.
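One generic way to confirm the actual usable path MTU toward the far end is a don’t-fragment ping probe (standard Linux ping, nothing Zeroshell-specific; the far-end address is a placeholder):

```shell
# Build a don't-fragment ping that exactly fills a candidate MTU.
# ICMP payload = MTU - 20 (IPv4 header) - 8 (ICMP header).
TARGET_MTU=1480
PAYLOAD=$((TARGET_MTU - 28))
# "-M do" sets the don't-fragment bit on Linux ping; if the probe fails
# with "Message too long", the real path MTU is smaller than TARGET_MTU.
echo "ping -M do -s ${PAYLOAD} FAR_END_IP"
```

Stepping the payload size down until the probe succeeds reveals the largest packet the DSL path will carry unfragmented.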
Any advice on MTU settings within Zeroshell? Should I be adjusting the MTU value on the Ethernet interface that’s connected to the DSL router? Everything is set to 1500 by default.

July 5, 2012 at 2:53 pm #52379
I think I’ve managed to isolate the issue.
There is info out there regarding OpenVPN and this particular error. The fix appears to be adding the mssfix and fragment directives to the OpenVPN configuration file.
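For reference, here is how those two directives would appear in an OpenVPN config file (1400 being the value commonly suggested for a 1480-byte path MTU):

```
mssfix 1400
fragment 1400
```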
Unfortunately there doesn’t appear to be a configuration file within Zeroshell. Is there a way to implement this change with the command line parameter on the VPN itself?
EDIT: Found my answer. I added “--mssfix 1400” to the command line parameters for this particular VPN interface. I’ll test this today and see if it fixes it. If not, I’ll add in the fragment directive as well and hope that does it. Stay tuned.

July 7, 2012 at 4:15 am #52380
No luck. It still seems to be an MTU issue, but I’m not sure what else I can adjust within Zeroshell to make this work properly.