Reply To: Date of a new release



@atheling wrote:

I’m not sure how one could implement hashing in iptables….

I’ve been doing some searches on Linux routing and load balancing since my last post. Seems like the “Linux Virtual Server” people have been addressing this issue and maybe the kernel mods needed are available for the kernel Zeroshell needs.

One starting point for reading is the LVS wiki.

I read the LVS wiki, and even though LVS does the opposite of what we are doing, it looks like the “Connection Scheduling Algorithms” could prove helpful in our case.
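The LVS idea that carries over to our case is source hashing: hash the client address to deterministically pick one of N links, so the same client always goes out the same interface. A minimal sketch of the concept in shell (the function name and gateway count are made up for illustration; `cksum` stands in for whatever hash the kernel scheduler would use):

```shell
#!/bin/sh
# Sketch of source-hash link selection: the same source IP always
# maps to the same link index, so all its TCP sessions share a route.
pick_link() {
    ip="$1"      # client source address
    nlinks="$2"  # number of WAN links to balance across
    # cksum gives a stable 32-bit checksum of the address string
    h=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    echo $(( h % nlinks ))
}

pick_link 192.168.1.10 2   # always prints the same index for this IP
```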
@atheling wrote:

I think you only really want to use this layer-4 logic on the HTTP and HTTPS protocols. SSH, SMTP, FTP, etc. are all fine with successive TCP sessions taking different routes. It is only the “stateless” multiple-TCP-session protocols like HTTP/S that have an issue.

Yup, you are right. I had some issues with OpenVPN and the static routes, so I created two pairs of Netbalancer rules for destination ports 80 and 443.
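For anyone who wants to do something similar by hand, the usual Linux approach is to mark port 80/443 traffic in the mangle table and steer the mark to a fixed WAN through a policy-routing rule. This is only a hedged sketch of that general technique, not a dump of what Netbalancer actually installs; the table name and gateway address are placeholders:

```shell
# Mark new HTTP/S connections so they can be pinned to one WAN link.
# Table name "wan1" and gateway 192.0.2.1 are illustrative placeholders.
iptables -t mangle -A PREROUTING -p tcp --dport 80  -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j MARK --set-mark 1

# Policy routing: anything carrying fwmark 1 uses the wan1 table.
ip rule add fwmark 1 table wan1
ip route add default via 192.0.2.1 table wan1
```

All other protocols keep going through the normal balanced default route, so downloads over multiple sessions still spread across both links.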
@atheling wrote:

And for those people who are trying to speed up downloads there are benefits to having different TCP sessions take different interfaces. So you probably don’t want to break that.

This is a drawback, but it does not break the functionality, just as with HTTP/S. As far as I know this is not the expected behaviour anyway: with two or more links you can only take advantage of the full bandwidth when downloading from multiple sources. Otherwise you would have to use bonding.
@atheling wrote:

By the way, the routing module does have a “route cache” that attempts to channel all traffic to one interface once it has decided on the first packet which interface to use. But looking at it in action it seems to have a very short life. A second or two maybe. (I haven’t found any description of the exact life time of cache entries nor have I found a way to modify its behavior.) For HTTP/S I think you would want to cache the entries for several minutes.

That would be a very nice workaround. I will try to find a way to raise this value to a more convenient number.
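On the 2.6-era kernels Zeroshell runs, the IPv4 route cache exposes some knobs under `/proc/sys/net/ipv4/route/`. I am not certain these control exactly the lifetime @atheling observed, so treat this as a place to start experimenting rather than a confirmed fix; the values below are illustrative:

```shell
# Inspect the current route-cache garbage-collection settings
cat /proc/sys/net/ipv4/route/gc_timeout
cat /proc/sys/net/ipv4/route/gc_interval

# Try raising gc_timeout (seconds) so cached routing decisions
# survive longer between packets of the same flow
echo 300 > /proc/sys/net/ipv4/route/gc_timeout
```

Note also that the cache is periodically flushed wholesale (the `secret_interval` sysctl on those kernels), so even long-lived entries get dropped from time to time.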