VPN bonding using a non-Zeroshell server



    #43308

    houkouonchi
    Member

    I set this up because I don’t really have access to my datacenter Zeroshell box anymore and wanted to get it going on my generic dedicated/colo server that does other stuff as well.

    These are all completely generic commands that work on any distro; some of this can also be set up through your distro’s network configuration files.

    First off, create the bond interface. For most people this would just be:


    modprobe bonding mode=0
    ifrename -i bond0 -n BOND00

    (done)
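
    If you want the module options to stick across reboots, one common approach (the file name is arbitrary and distros differ on where they look) is a modprobe.d snippet, roughly:


    # /etc/modprobe.d/bonding.conf -- picked up the next time the module loads
    options bonding mode=0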

    In my case I already had a bonded interface on my server (802.3ad) via multiple gigabit links to the switch on a Cisco port-channel. Just on the off chance someone has this setup, here is what I did in my init scripts to get a bond0 (generic device for other uses) and a BOND00 device:


    modprobe bonding mode=4 xmit_hash_policy=layer3+4 miimon=100 max_bonds=2
    echo 0 > /sys/class/net/bond1/bonding/mode
    ifrename -i bond1 -n BOND00

    I also like to keep the Zeroshell naming scheme for my interfaces so I know they come from Zeroshell, as this server already has a ton of tun adapters from other VPNs, plus bond0 and eth devices =P

    Next, create the VPN interfaces. I am using Zeroshell’s naming scheme, so VPN00 and VPN01:


    tunctl -t VPN00
    tunctl -t VPN01

    I do this because I don’t want OpenVPN to dynamically create them every time it’s started/stopped, which would require removing them from / re-adding them to the bonded interface each time.
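
    If your distro doesn’t ship tunctl, reasonably recent iproute2 can create the same persistent tap devices itself; roughly:


    ip tuntap add dev VPN00 mode tap
    ip tuntap add dev VPN01 mode tap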

    Make the OpenVPN config files on the server:


    # cat /etc/openvpn/fiosa.conf
    ping 1
    ping-restart 11
    float
    port XXXXX
    proto udp
    local YYYYY
    up /bin/vpn_up.sh
    down /bin/vpn_down.sh
    dev VPN00
    dev-type tap
    cipher none
    secret /etc/openvpn/fiosa.key

    XXXXX is the server-side port and YYYYY is the local IP. I have encryption disabled on mine to lower CPU usage and latency (slightly), but you may not want cipher none.

    The second config file is the same except for a different port, the VPN01 device, and fiosb.key.
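
    For reference, a sketch of what that second file would look like (I’m assuming fiosb.conf as the name, and ZZZZZ is my placeholder for the second port; everything else mirrors the first config):


    # cat /etc/openvpn/fiosb.conf
    ping 1
    ping-restart 11
    float
    port ZZZZZ
    proto udp
    local YYYYY
    up /bin/vpn_up.sh
    down /bin/vpn_down.sh
    dev VPN01
    dev-type tap
    cipher none
    secret /etc/openvpn/fiosb.key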

    I created the key with:


    openvpn --genkey --secret /etc/openvpn/fiosa.key

    Then I ran the following, which gives me the key all on one line so it can be pasted into the PSK field when setting up the VPN on the Zeroshell client (using the pre-shared key method for auth):


    grep -v -e '#' -e Open /etc/openvpn/fiosa.key | tr -d '\n'; echo

    My simple up/down scripts, which set the interface state so that if one of the VPN links goes down the bonded link still works correctly without packet loss:


    # cat /bin/vpn_up.sh
    #!/bin/sh
    # OpenVPN passes: dev tun_mtu link_mtu ... ; we only need the first two
    DEV=$1
    MTU=$2
    /sbin/ip link set "$DEV" up promisc on mtu "$MTU"

    # cat /bin/vpn_down.sh
    #!/bin/sh
    # Mark the tap down so the bonding driver stops using this slave
    DEV=$1
    MTU=$2
    /sbin/ip link set "$DEV" down promisc on mtu "$MTU"
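
    One thing that’s easy to miss: OpenVPN runs these as external programs, so they have to be executable, e.g.:


    chmod 755 /bin/vpn_up.sh /bin/vpn_down.sh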

    Add the VPN links to the bonded link:


    ifenslave BOND00 VPN00 VPN01
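
    If you want to confirm both taps actually got enslaved (and that the bond is in round-robin mode), the bonding driver reports its state under /proc; something like:


    cat /proc/net/bonding/BOND00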

    Assign IP address to bonded link:


    ifconfig BOND00 172.31.1.1 netmask 255.255.255.0

    (Just what I used in this case; this could likely also be done with your distro’s network interface config files, but I’m sticking to the actual commands that work anywhere.)
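
    If you prefer iproute2 over ifconfig, the equivalent should be roughly:


    ip addr add 172.31.1.1/24 dev BOND00
    ip link set BOND00 up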

    On the Zeroshell side I gave BOND00 172.31.1.2.


    Enable forwarding on the server side and disable rp_filter (as I like traceroutes to show the hops):


    echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
    echo 1 > /proc/sys/net/ipv4/ip_forward
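
    As with everything else here, these don’t survive a reboot on their own; if you want them to, the usual place on most distros is /etc/sysctl.conf, roughly:


    # /etc/sysctl.conf (or a file under /etc/sysctl.d/), applied at boot
    net.ipv4.ip_forward = 1
    net.ipv4.conf.all.rp_filter = 0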

    Enable NAT on the server:


    iptables -t nat -A POSTROUTING -o bond0 -s 172.31.1.0/24 -j MASQUERADE

    My main interface on this server is already bond0, but in most other cases yours would be eth1 or eth0.
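
    To sanity-check that the rule is in place (and watch its packet counters once traffic starts flowing over the tunnel), something like:


    iptables -t nat -v -n -L POSTROUTING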

    Set up static routes and/or the net balancer as usual on the Zeroshell side.

    And done!

    This isn’t meant to be a complete howto, just something rough to get someone going who needs to accomplish the same thing I did.

    Also it’s amazing how much crap I have set up on my colo’d server lol… A lot of freaking interfaces =P


    # ip addr
    1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth5: mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:25:90:00:13:c3 brd ff:ff:ff:ff:ff:ff
    3: eth4: mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:25:90:00:13:c3 brd ff:ff:ff:ff:ff:ff
    4: eth3: mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:25:90:00:13:c3 brd ff:ff:ff:ff:ff:ff
    5: eth2: mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:25:90:00:13:c3 brd ff:ff:ff:ff:ff:ff
    6: eth0: mtu 1500 qdisc mq qlen 1000
    link/ether 00:30:48:de:ee:4a brd ff:ff:ff:ff:ff:ff
    inet 10.176.18.21/20 brd 10.176.31.255 scope global eth0
    inet6 fe80::230:48ff:fede:ee4a/64 scope link
    valid_lft forever preferred_lft forever
    7: eth1: mtu 1500 qdisc mq qlen 1000
    link/ether 00:30:48:de:ee:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fede:ee4b/64 scope link
    valid_lft forever preferred_lft forever
    8: teql0: mtu 1500 qdisc noop qlen 100
    link/void
    9: sit0: mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
    10: ip6tnl0: mtu 1460 qdisc noop
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
    11: bond0: mtu 1500 qdisc noqueue
    link/ether 00:25:90:00:13:c3 brd ff:ff:ff:ff:ff:ff
    inet 208.97.140.21/24 brd 208.97.140.255 scope global bond0
    inet 208.97.138.21/24 brd 208.97.138.255 scope global bond0:0
    inet 208.97.143.21/24 brd 208.97.143.255 scope global bond0:1
    inet 208.97.141.254/24 brd 208.97.141.255 scope global bond0:2
    inet 208.97.140.22/24 brd 208.97.140.255 scope global secondary bond0:3
    inet6 fe80::225:90ff:fe00:13c3/64 scope link
    valid_lft forever preferred_lft forever
    12: BOND00: mtu 1500 qdisc noqueue
    link/ether 9a:0d:48:5b:6b:9b brd ff:ff:ff:ff:ff:ff
    inet 172.31.1.1/24 brd 172.31.1.255 scope global BOND00
    inet6 fe80::200b:e7ff:fea5:f259/64 scope link
    valid_lft forever preferred_lft forever
    13: eth1.100@eth1: mtu 1500 qdisc noqueue
    link/ether 00:30:48:de:ee:4b brd ff:ff:ff:ff:ff:ff
    inet 208.97.141.21/24 brd 208.97.141.255 scope global eth1.100
    inet 66.33.216.221/24 brd 66.33.216.255 scope global eth1.100:0
    inet 69.163.188.21/25 brd 69.163.188.127 scope global eth1.100:1
    inet 208.97.141.22/24 brd 208.97.141.255 scope global secondary eth1.100:2
    inet6 2607:f298:1:100:feed:face:beef:d00d/64 scope global
    valid_lft forever preferred_lft forever
    inet6 fe80::230:48ff:fede:ee4b/64 scope link
    valid_lft forever preferred_lft forever
    14: eth1.105@eth1: mtu 1500 qdisc noqueue
    link/ether 00:30:48:de:ee:4b brd ff:ff:ff:ff:ff:ff
    inet 75.119.199.211/25 brd 75.119.199.255 scope global eth1.105
    inet6 fe80::230:48ff:fede:ee4b/64 scope link
    valid_lft forever preferred_lft forever
    15: tun5: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.5.1 peer 172.16.5.2/32 scope global tun5
    16: tun271: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.170.2 peer 172.16.170.1/32 scope global tun271
    17: tun6: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.6.1 peer 172.16.6.2/32 scope global tun6
    18: tun273: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.20.1.2 peer 172.20.1.1/32 scope global tun273
    19: tun8: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.8.1 peer 172.16.8.2/32 scope global tun8
    22: tun2: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.2.1 peer 172.16.2.2/32 scope global tun2
    23: tun256: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 192.168.0.1 peer 192.168.0.2/32 scope global tun256
    24: tun4: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.4.1 peer 172.16.4.2/32 scope global tun4
    25: tun272: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.17.1.2 peer 172.17.1.1/32 scope global tun272
    26: tun12: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.12.1 peer 172.16.12.2/32 scope global tun12
    27: tun7: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.7.1 peer 172.16.7.2/32 scope global tun7
    28: tun11: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.11.1 peer 172.16.11.2/32 scope global tun11
    29: tun257: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 192.168.169.9 peer 192.168.169.10/32 scope global tun257
    30: tun258: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 192.168.200.1 peer 192.168.200.2/32 scope global tun258
    31: tun10: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.10.1 peer 172.16.10.2/32 scope global tun10
    32: tun1: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.1.1 peer 172.16.1.2/32 scope global tun1
    33: tun3: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.3.1 peer 172.16.3.2/32 scope global tun3
    34: tun259: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 192.168.169.1 peer 192.168.169.2/32 scope global tun259
    35: tun260: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 192.168.199.1 peer 192.168.199.2/32 scope global tun260
    36: tun9: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.9.1 peer 172.16.9.2/32 scope global tun9
    37: tun270: mtu 1500 qdisc pfifo_fast qlen 100
    link/[65534]
    inet 172.16.51.2 peer 172.16.51.1/32 scope global tun270
    38: vboxnet0: mtu 1500 qdisc noop qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    97: VPN00: mtu 1500 qdisc pfifo_fast master BOND00 qlen 100
    link/ether 9a:0d:48:5b:6b:9b brd ff:ff:ff:ff:ff:ff
    98: VPN01: mtu 1500 qdisc pfifo_fast master BOND00 qlen 100
    link/ether 9a:0d:48:5b:6b:9b brd ff:ff:ff:ff:ff:ff
    #52269

    wonderaug
    Member

    Hi,

    Your walkthrough is really solid, and though I got it working I had some trip-ups 😉 along the way. Here are some of my issues:

    – My distro is Ubuntu (Lucid 10.04), so ifrename doesn’t work

    – I cannot start my VPNs until I remove the up/down scripts that you created; the log files I created said something about --script-security

    – Once I reboot, I lose my bond interface and even the virtual interface that I created for my secondary IP address. This is the most worrying part for me because it means I have to run modprobe every time I reboot

    I’m still trying to get NAT working, but I think I’ll figure that out soon. For now I’m running my infrastructure in a virtual environment. I would appreciate your help with the challenges I encountered. I think this is a more feasible solution than running Zeroshell in a datacenter. Like I said before, solid walkthrough!

    #52270

    houkouonchi
    Member

    I will get things working with the newer version of OpenVPN. My server had an older version and it worked OK, but I am seeing similar issues with a newer version.

    All my commands were pretty much the “works on any Linux distro” variety. You should look at your distro’s mechanism for automatically loading modules at boot (it depends on the distro) to get things set up at boot, or write your own init script (a rough sketch follows below). Right now I am having some trouble on my setup now that I have enabled jumbo frames on the VPN (about to open another thread about it), because that was the only way I could get decent bonding performance. This was not needed when I was bonding two 35/35 connections (for 70/70), but I am now bonding two 155/75 connections (for 310/150), which does need it.
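
    Something like the following (an untested sketch; it just strings together the commands from my first post, so adjust names, ports and IPs to taste) could be dropped into rc.local or your own init script to bring it all up at boot:


    #!/bin/sh
    # rough boot-time sketch of the steps from the first post -- adjust for your distro

    modprobe bonding mode=0 miimon=100     # miimon so link state gets monitored
    ifrename -i bond0 -n BOND00            # or: ip link set bond0 name BOND00

    # persistent taps so openvpn restarts don't touch the bond membership
    tunctl -t VPN00                        # or: ip tuntap add dev VPN00 mode tap
    tunctl -t VPN01                        # or: ip tuntap add dev VPN01 mode tap

    ifenslave BOND00 VPN00 VPN01
    ifconfig BOND00 172.31.1.1 netmask 255.255.255.0

    echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -o bond0 -s 172.31.1.0/24 -j MASQUERADE

    /etc/init.d/openvpn start              # however your distro starts openvpn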

    #52271

    houkouonchi
    Member

    @wonderaug wrote:

    Hi,

    Your walkthrough is really solid, and though I got it working I had some trip-ups 😉 along the way. Here are some of my issues:

    – My distro is Ubuntu (Lucid 10.04), so ifrename doesn’t work

    – I cannot start my VPNs until I remove the up/down scripts that you created; the log files I created said something about --script-security

    – Once I reboot, I lose my bond interface and even the virtual interface that I created for my secondary IP address. This is the most worrying part for me because it means I have to run modprobe every time I reboot

    I’m still trying to get NAT working, but I think I’ll figure that out soon. For now I’m running my infrastructure in a virtual environment. I would appreciate your help with the challenges I encountered. I think this is a more feasible solution than running Zeroshell in a datacenter. Like I said before, solid walkthrough!

    OK, so you just need to add:

    script-security 3

    (or 2 should work) in the config file on the server.
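
    For example, the relevant lines in /etc/openvpn/fiosa.conf (and the second config) would end up looking roughly like:


    script-security 2
    up /bin/vpn_up.sh
    down /bin/vpn_down.sh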

    Also, don’t set only the mode; you should set the updelay as well. So not just:

    echo 0 > /sys/class/net/bond0/bonding/mode

    but also:

    echo 7000 > /sys/class/net/bond0/bonding/updelay

    Or if you renamed to BOND00 like me:

    echo 7000 > /sys/class/net/BOND00/bonding/updelay
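
    One gotcha to be aware of: as far as I know the driver refuses a mode change once the bond is up or has slaves, and updelay is ignored unless miimon is enabled, so the sysfs writes want to happen right after loading the module and before ifenslave; roughly:


    modprobe bonding
    echo 0 > /sys/class/net/bond0/bonding/mode        # balance-rr
    echo 100 > /sys/class/net/bond0/bonding/miimon    # updelay has no effect without miimon
    echo 7000 > /sys/class/net/bond0/bonding/updelay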

