ZS – Inodes go to 0 on /dev/ram2

  • #43459

    walibix
    Member

    Hi

Sorry for my pretty bad English.

Here is our problem: we deployed ZS (1.0beta16) as a captive portal on an open WiFi network with more than 150 simultaneous users.

Everything went fine at first, but at around 70 users we hit two massive problems (see the commands sketched after the list):

– First: the inode count (df -i) on /dev/ram2 drops to 0, so ZS can't create any more files, and the zscp process crashes for lack of space on /dev/ram2 => massive disconnections.

=> How can we enlarge /dev/ram2? The box already has 2 GB of RAM, so it should just be a ramdrive parameter, but where??

– Second: maybe linked? We get conntrack buffer-size alerts, but we configured conntrack_max to 1M+ and have "only" 5k connections.
We don't see conntrack errors before the inodes drop to 0, but we do see them afterwards, even after deleting some files in /tmp; that's why we say maybe it's linked.

=> Why are there so many little files in /tmp???
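
A minimal sketch of the corresponding checks, assuming a standard Linux procfs layout (on older kernels the conntrack keys live under net.ipv4.netfilter instead of net.netfilter):

    # inode usage on the ramdisk; IFree at 0 means no new files
    # can be created even if free blocks remain
    df -i /dev/ram2

    # tracked connections vs. the configured ceiling
    cat /proc/sys/net/netfilter/nf_conntrack_count
    cat /proc/sys/net/netfilter/nf_conntrack_max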

    Thx

    #52484

    imported_fulvio
    Participant

The / filesystem is prepared and mounted in the initrd image. I'll fix this issue in the next release.
What is your Authenticator Validity set to in the captive portal gateway configuration? You should increase it to mitigate the issue.

    Regards
    Fulvio

    #52485

    walibix
    Member

Authenticator Validity: 2 minutes

For /dev/ram2, we've got a "trick" at the moment, in a PreBoot script:

    mount -t tmpfs -o size=1024M tmpfs /tmp

Inodes jump from 5k to 220k, hoping that zscp (or something else) eventually deletes some files. This command works with 1.0beta16 and 2.0RC1.
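
To verify the remount took effect (standard commands, nothing ZeroShell-specific):

    # /tmp should now be listed as a tmpfs mount
    mount | grep ' /tmp '
    # and the inode pool should be much larger
    df -i /tmp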

We're looking into the conntrack issue now; the established-connection timeout of 432000 seconds is much too high, and we have already lowered it to 600, hoping this error is linked to the inode trouble.
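
For reference, that timeout is a sysctl; the key below is the nf_conntrack name used on recent kernels (older ones use an ip_conntrack variant):

    # kernel default is 432000 s (5 days); 600 s is the value mentioned above
    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=600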

Thanks for the help

    #52486

    imported_fulvio
    Participant

Please let me know if your workaround solves the issue definitively, so I can use the same solution in the next release.

    Regards
    Fulvio

    #52487

    walibix
    Member

If we stay at the current rate, we'll have the answer by Monday afternoon.

    #52488

    walibix
    Member

    Hi

Mid-day report:

=> temporary inode fix OK; with the popup interval at 5 minutes, 4k inodes used at most
=> with inodes OK, no conntrack errors at the moment
=> 100+ users logged in

Before, the best we had reached was about 77 users.

    #52489

    walibix
    Member

End of day:

More than 130 users connected.

No problems: no conntrack errors, no inode trouble (with 220k inodes available it would be surprising 😈).

We observe that inode usage jumps when the server is under load; since our users arrive in waves, that may have amplified the inode issue.

    #52490

    imported_fulvio
    Participant

Thanks a lot for this test. It's very important for improving the stability of the captive portal. The next release will include /tmp as tmpfs.
    Regards
    Fulvio
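
For reference, mounting /tmp as tmpfs at boot is typically a one-line fstab entry along these lines (sizes illustrative, not the actual release configuration):

    tmpfs  /tmp  tmpfs  size=1024M  0  0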
