October 5, 2012 at 7:30 pm #43459
Sorry for my English.
Here is our problem: we deployed ZS (1.0beta16) as a captive portal on an open WiFi network with more than 150 simultaneous users.
Everything went fine until we hit two massive problems at around 70 users:
– First: inodes (df -i) on /dev/ram2 drop to 0, so ZS can't create any more files and the zscp process crashes for lack of space on /dev/ram2 => massive disconnections.
=> How can we expand /dev/ram2? The box already has 2 GB of RAM, so it should just be a ramdrive parameter, but where is it set??
– Second (maybe linked?): conntrack buffer-size alerts, even though we configured conntrack_max above 1M and have "only" 5k connections.
We don't see conntrack errors before the inodes dive to 0, but we do see them afterwards, even after deleting some files in /tmp; that's why we think the two may be linked.
=> Why do we have so many little files in /tmp???
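For anyone hitting the same wall, this is roughly how we watch it happen (a diagnostic sketch; /tmp is where our /dev/ram2 is mounted, adjust the path for your setup):

```shell
# Show free inodes on the filesystem backing /tmp (the IFree column)
df -i /tmp
# Count the small files accumulating in /tmp
find /tmp -xdev -type f | wc -l
```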
Thx
October 6, 2012 at 6:36 am #52484
The / filesystem is prepared and mounted in the initrd image. I’ll fix this issue in the next release.
What is your Authenticator Validity set to in the captive portal gateway configuration? You should increase it to mitigate the issue.
Fulvio
October 6, 2012 at 8:10 am #52485
Authenticator Validity: 2 minutes
For /dev/ram2, we have a workaround for the moment:
PreBoot script:
mount -t tmpfs -o size=1024M tmpfs /tmp
Inodes jump from 5k to 220k, which should hold until zscp (or something else) deletes some files. This command works with both 1.0beta16 and 2.0RC1.
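If anyone wants the inode limit pinned explicitly rather than left at the tmpfs default, the same trick can take the nr_inodes mount option (a boot-script sketch, not tested beyond our setup; 1024M and 200k are our values, tune them to your RAM):

```shell
# PreBoot script: mount a tmpfs over /tmp with an explicit inode budget.
# size= caps the space; nr_inodes= caps the number of files (by default
# tmpfs sets nr_inodes from physical RAM, so pinning it makes the limit
# predictable across boxes).
mount -t tmpfs -o size=1024M,nr_inodes=200k tmpfs /tmp
```

This needs root and replaces whatever was on /tmp, so it belongs in the PreBoot script, not an interactive shell.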
We're looking into the conntrack issue now. The 432000-second established timeout is much too high; we've already lowered it to 600, hoping that error is linked to the inode issue.
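For reference, these are the sysctl knobs involved, with the values mentioned above (a config sketch; on older 2.6 kernels the prefix is net.ipv4.netfilter.ip_conntrack_* instead of net.netfilter.nf_conntrack_*):

```shell
# Raise the connection-tracking table size (1M+ entries) and shorten
# the established-TCP timeout from its 432000 s default to 600 s.
sysctl -w net.netfilter.nf_conntrack_max=1048576
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=600
```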
Thx for the help
October 6, 2012 at 11:33 am #52486
Please let me know if your workaround solves the issue definitively, so I can use the same solution in the next release.
Fulvio
October 6, 2012 at 2:44 pm #52487
If the current rate holds, we'll have the answer by Monday afternoon.
October 8, 2012 at 10:43 am #52488
Mid-day report:
=> The temporary inode fix is OK: with a 5-minute popup, 4k inodes max in use.
=> With inodes OK, no conntrack errors so far.
=> 100+ users logged in.
Before, the best we reached was about 77 users.
October 8, 2012 at 4:12 pm #52489
End of day:
More than 130 users connected.
No problems: no conntrack errors, no inode trouble (with 220k inodes, that would be surprising 😈).
We observed that inode usage jumps when the server is under load; since our users arrive in waves, that may have amplified the inode issue.
October 8, 2012 at 5:16 pm #52490
Thanks a lot for this test. It's very important for improving the stability of the captive portal. The next release will mount /tmp as tmpfs.