PatrickB

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 30 total)
  • in reply to: [FIXED] the (monitoring) message spooler not sending emails #54563

    PatrickB
    Member

I confirm the fix is integrated in release 3.8.1, so I could remove my mod from the PostBoot script.

    The messages are sent as needed.

In addition I did some surgery to enlarge the “Profiles” partition from 1.1 GB to 2 GB, and as a result it is only 52% in use now.

I did it before upgrading ZS, and the 52% usage stayed stable across the operation. So I hope the AutoUpdate will not colonize the new area…

Actually I’d like to find some documentation about how space is managed there and what triggers the purging of items. I only have 3 add-ons and hardly any logs, so getting filled up until Samba crashed was very annoying 👿

    Thank you in advance.
    Best regards.

    in reply to: [FIXED] the (monitoring) message spooler not sending emails #54562

    PatrickB
    Member

    Hello.

I asked several people; nobody could explain to me why the trailing underscore causes the error “value too great for base”.

Anyway, removing it leaves only digits, which does the job.

Forcing base 10 in the arithmetic expansion $(( )) would only be required if the first digit could be a zero, because a leading zero means octal and the numbers contain the digits 8 and 9.

If there can never be a leading zero, then there is no need to force base 10.
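To make the octal trap concrete, a minimal bash sketch (the timestamp value is just an example):

```shell
#!/bin/bash
# Bash treats a number with a leading zero as octal in arithmetic, and octal
# has no digits 8 or 9 -- hence "value too great for base" on padded values.
TS="0949653234"              # a timestamp left-padded with a zero
# echo $(( TS + 0 ))         # would abort: value too great for base
echo $(( 10#$TS + 0 ))       # forcing base 10 works: prints 949653234
```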

Actually fixing this bug has a small side effect: now my pair of Alix boxes spams me with DISK FULL alerts because the profile is crowded with downloaded modules (several MB each), and I’m looking for how to purge that safely:

    /DB/_DB.002/var/register/system/AutoUpdate/pkgs/…

Also I could see that the partitions don’t use the whole CompactFlash card, so I expect to enlarge this one.

    😈

    Thank you for your work.
    Have a nice day.

    in reply to: [FIXED] the (monitoring) message spooler not sending emails #54560

    PatrickB
    Member

The spooler is the script spoolerd, and it is not running: it actually aborts when I try to run it manually.

    The previously queued file was named:
    1_1508439602_Emergency_b031d51f_DISKFULL

…and the email was properly sent.

    Another one is named:

    7_949653234_Info_18c8f180_STARTED

    …and causes this error:

    ./spoolerd: line 28: 949653234_: value too great for base (error token is “949653234_”)

  MESSAGES="`ls -d * 2>/dev/null`"
  for M in $MESSAGES ; do
    TS="${M:2:10}"
    SEVERITY="`echo $M | awk -F_ '{print $3}'`"
    SUBJECT="`cat $M/Subject 2>/dev/null`"
    TYPE="`cat $M/Type 2>/dev/null`"
    MRECIPIENT="`cat $M/Recipient 2>/dev/null`"
    if [ "$TYPE" = Recipient ] ; then
      TYPE="`cat $CONFIG/Recipients/$MRECIPIENT/Type`"
    fi
    ID="`echo $M | awk -F_ '{print $4}'`"
    EVENT="`echo $M | awk -F_ '{print $5}'`"
    NOW=`date +%s`
    if [ $((NOW-TS)) -gt $MAXAGE ] ; then    # <-- line 28: the failing test
      $SCRIPTS/alerts_logger "$ID" "$EVENT ($SEVERITY): message expired."

I guess the timestamp is assumed to always use 10 digits; a shorter 9-digit timestamp makes ${M:2:10} capture the following underscore, unless a change is made to left-pad the timestamp with a zero.

But if doing so, the test must force base 10 to avoid another surprise with octal faults at 8 and 9 o’clock:
if [ $(( $NOW - 10#$TS )) -gt $MAXAGE ] ; then

Actually in this case it can be fixed in a simpler way, by just removing the potential trailing underscore:

  MESSAGES="`ls -d * 2>/dev/null`"
  for M in $MESSAGES ; do
    TS="${M:2:10}"
    TS="${TS%%_}"    # <-- added: strip a potential trailing underscore
    SEVERITY="`echo $M | awk -F_ '{print $3}'`"
    ...

Until this is integrated, it is necessary to replace spoolerd with a fixed copy from the PostBoot script, then run alerts_start again, since the original has already crashed.

But this fixes the issue: ps -A shows spoolerd running and the emails are sent. 8)
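For the record, here is the fixed parsing exercised against both message names from this thread, as a self-contained sketch (no real queue directory needed):

```shell
#!/bin/bash
# The 10-digit timestamp parses cleanly; the 9-digit one drags in a "_"
# that the added ${TS%%_} now strips, so the arithmetic no longer aborts.
for M in 1_1508439602_Emergency_b031d51f_DISKFULL \
         7_949653234_Info_18c8f180_STARTED ; do
    TS="${M:2:10}"
    TS="${TS%%_}"                          # strip a potential trailing "_"
    SEVERITY="`echo $M | awk -F_ '{print $3}'`"
    NOW=`date +%s`
    AGE=$(( NOW - TS ))                    # safe now: TS is pure digits
    echo "$M -> TS=$TS SEVERITY=$SEVERITY"
done
```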

    Hope it helps, Best regards.

    in reply to: [FIXED] the (monitoring) message spooler not sending emails #54559

    PatrickB
    Member

I can get all the currently spooled messages sent without rebooting, for instance by changing the monitoring parameter “max age” and saving. Saving must be restarting the spooler service specifically.

The SMTP server runs full-time on another machine on the LAN, without SSL, and actually, as I said, it works fine whenever the spooler decides to flush its queue. Same with the test message feature.

Since everything is on the LAN side there is no firewall restriction between the ZS and the SMTP server. Rest assured I tested all of that; otherwise it would never work at all.

The issue is that the spooler queues indefinitely until it is restarted; this is why I’d like to find out how the spooler works and what makes it flush its queue.

If I understand you well, the spooler would be an infinite sleep-and-do loop in a script? Why not… In that case it means the criterion for entering the message-sending branch is never met.
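Just a guess at the shape of such a daemon (the function name and queue path are my invention for the sketch, not the real spoolerd):

```shell
#!/bin/sh
# Hypothetical sleep-and-do loop: if the "send" criterion inside
# process_queue is never met, messages pile up until a restart --
# which would match the behaviour observed here.
QUEUE="${QUEUE:-/tmp/demo-spool}"

process_queue() {
    for M in "$QUEUE"/*_* ; do
        [ -e "$M" ] || continue
        echo "examining `basename $M`"   # real code: send or expire here
    done
}

# The daemon itself would be something like:
#   while : ; do process_queue ; sleep 60 ; done

# One iteration, for demonstration:
mkdir -p "$QUEUE"
touch "$QUEUE/7_949653234_Info_18c8f180_STARTED"
process_queue
```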

    OK I will try to locate such a script within the others.

    Thanks, Best regards.

    in reply to: [FIXED] the (monitoring) message spooler not sending emails #54557

    PatrickB
    Member

    Thanks but…

As I explained, sending the spooled messages works fine when the spooler restarts (notably at reboot).

The problem is that those same spooled messages are never sent before that.

Actually there is nothing related in the crontab. I had a look at crontabgen and commented out the removal of /tmp/crontab: there is code to generate the events, i.e. spool them; OK, this part works…

So I’m looking for clues about how the spooler is built and how to make it flush periodically.

    Thanks, Best regards.


    PatrickB
    Member

    Hi iulyb.

    Sorry for the delay…

I downloaded and explored your code. It shows how to interface cleanly with kerbynet, session key and all, and how to plug new pages into the existing GUI.

    It is certainly much better and cleaner than what I’m trying to do with an extra path in the URL.

I will try to redo my work this way ASAP and report back.

    The first goal is to have a panel with buttons to trigger a Wake on Lan on the machines of the LAN.
Maybe also an active/sleeping status; I’m not sure it is useful…

    I need it because it is safer than a kind of “Wake on Internet”, and the feature is necessary with a VPN.

    Thanks & Best regards.


    PatrickB
    Member

Bad news: the 2 soft links are not part of Zeroshell in v3.6.0.

    I will have to patch this one again.

    in reply to: Import users from csv file #53243

    PatrickB
    Member

    Hello.

This may help you: it is not as simple as CSV, but you should be able to generate the right data format from your CSV and inject it into the LDAP this way.

❗ Take care to make a backup first, as I do, just in case…

    https://www.zeroshell.org/forum/viewtopic.php?t=5060

    Best regards.


    PatrickB
    Member

This is a temporary patch, since any regular update of Zeroshell’s core will overwrite it.

    https://www.zeroshell.org/forum/viewtopic.php?t=5593

I will start experimenting with the solutions using that framework.


    PatrickB
    Member

    Hello.

    Back on this topic.

The next challenge is to create a small Web interface on the LAN to display the state of the stations and send them directives like Wake-on-LAN, shutdown, etc.
Naturally it must be served by the LAN master, that is: the ZS box (an Alix in my case).

    Looking at the insides, we have:

    root@janus1> ll /cdrom/usr/local/apache2/htdocs
    total 3.5K
    lr-xr-xr-x 1 root root 43 Mar 3 06:19 bwd -> /var/register/system/bandwidthd/work/htdocs
    lr-xr-xr-x 1 root root 46 Mar 3 06:19 cp_image -> /var/register/system/cp/Auth/Custom/Image/File
    -r--r--r-- 1 root root 1.1K Mar 3 06:15 default.css
    lr-xr-xr-x 1 root root 12 Mar 3 06:19 img.tpl -> /tmp/img.tpl
    -r-xr-xr-x 1 root root 867 Mar 3 06:15 index.html
    dr-xr-xr-x 2 root root 2.0K Mar 3 06:15 kerbynet
    -r--r--r-- 1 root root 26 Mar 3 06:15 robots.txt

    root@janus1> ll /cdrom/usr/local/apache2/cgi-bin
    total 731K
    -r--r--r-- 1 root root 1.0K Mar 3 06:15 .rnd
    -r-xr-xr-x 1 root root 1.9M Mar 3 06:15 kerbynet

All of that is read-only.
It is possible to create a file /tmp/img.tpl/index.html and it will show at URL https:///img.tpl/
Then what command can it trigger?

None, of course! 😆
Looking at the URLs of various pages of the ZS GUI, I tried variants of this kind of thing:

    https:///cgi-bin/kerbynet?Section=NoAuthREQ&Action=Render&Object=TestingScript&ScriptName=test

    …but Mr kerbynet refuses to cooperate πŸ˜› and I understand that…

What to do, then?

The custom (set of) pages must be located in a subfolder of htdocs/ and must invoke a different CGI script, written under the full responsibility of whoever does that, of course.

Actually they should live somewhere else, typically under /opt, and be linked into htdocs/ and cgi-bin/ with “optional, in case of” links like the existing img.tpl/.

The challenge then is to modify the ISO 9660 image and reinsert it onto the CF card so these 2 links appear there 😡

    The request

Logically the request is for such a new feature: 2 “in case of” soft links, set up this way (the word ‘tools’ looks generic enough to fit any need):

    /cdrom/usr/local/apache2/htdocs/tools -> /opt/webtools/htdocs

    /cdrom/usr/local/apache2/cgi-bin/tools -> /opt/webtools/cgi-bin

Nothing more: the user will be responsible for creating (or not) the 2 target directories and, if he does, for securing whatever he puts inside.
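What using the 2 links would look like, once present. A sketch under a scratch $ROOT so it can be tried without a real ZS box (on the box itself, drop ROOT; the /opt/webtools paths are the ones from the request above):

```shell
#!/bin/sh
# Demonstrate the requested layout under a scratch prefix.
ROOT="${ROOT:-/tmp/zs-demo}"
mkdir -p "$ROOT/opt/webtools/htdocs" "$ROOT/opt/webtools/cgi-bin"
mkdir -p "$ROOT/cdrom/usr/local/apache2/htdocs" \
         "$ROOT/cdrom/usr/local/apache2/cgi-bin"

# The two "in case of" soft links the request asks for:
ln -sfn "$ROOT/opt/webtools/htdocs"  "$ROOT/cdrom/usr/local/apache2/htdocs/tools"
ln -sfn "$ROOT/opt/webtools/cgi-bin" "$ROOT/cdrom/usr/local/apache2/cgi-bin/tools"

# A trivial CGI the user would create and secure himself:
cat > "$ROOT/opt/webtools/cgi-bin/status" <<'EOF'
#!/bin/sh
echo "Content-Type: text/plain"
echo
echo "webtools alive"
EOF
chmod +x "$ROOT/opt/webtools/cgi-bin/status"

echo "tools links in place"
```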

Your thoughts? Any idea for doing it another way?

    Thanks, Best regards.

    in reply to: Need more control on the Local CA parameters #53745

    PatrickB
    Member

    Hello.

I can confirm that I could take control of the SAN of the host certificate by adjusting x509_createDefaultCert this way:

    At first I copied the script to some /opt/mods and updated the PostBoot script to replace the original one with it at every reboot (like other mods).

Then I made this change inside (simple hardcoding; I could also read the names from a file):

    ...
    openssl req -new -batch -newkey rsa:$NBIT -nodes -out /tmp/x509default.req -keyout /tmp/x509default.key -days $DAYS -sha512 -subj "/OU=Hosts/CN=$HOSTNAME"
    TMP=/tmp/x509_extfile_defaulthost
    cat $SSLDIR/extensions > $TMP
    echo -n "subjectAltName = DNS:`hostname`" >> $TMP
    # No I don't want the IP in the certificate:
# find /var/register/system/net/interfaces/ -name IP -type f -exec awk '{printf(", IP:%s",$0)}' {} \; >> $TMP
    # Instead I want some more names:
    echo -n ", DNS:janus.mydomain.lan, DNS:lan.mydomain.org" >> $TMP
    echo >> $TMP
    openssl ca -batch -days $DAYS -in /tmp/x509default.req -out /tmp/x509default.cert -extfile $TMP -extensions host
    ...
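A quick way to check the result, independent of ZS: generate a throwaway self-signed certificate carrying the same kind of SAN and dump it back. This assumes OpenSSL 1.1.1 or newer for -addext, and the DNS names are of course mine:

```shell
#!/bin/sh
# Disposable cert with the wanted SAN, then confirm the names landed in it.
# Not part of the mod itself; purely a verification sketch.
T=`mktemp -d`
openssl req -x509 -newkey rsa:2048 -nodes -sha512 -days 1 \
    -subj "/OU=Hosts/CN=janus" \
    -addext "subjectAltName = DNS:janus, DNS:janus.mydomain.lan, DNS:lan.mydomain.org" \
    -keyout "$T/key.pem" -out "$T/cert.pem" 2>/dev/null
openssl x509 -in "$T/cert.pem" -noout -text | grep -A1 "Subject Alternative Name"
rm -rf "$T"
```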

I use XCA on a separate machine to generate all my certificates and their keys, notably the intermediate CA (and its key) for the ZS. The master CA (which signed the intermediate CA) is just imported as a trusted certificate, without its ultra-precious key of course! 😛

    When importing the intermediate CA, the ZS process regenerates the host certificate and thanks to the change, it has the SAN I need. After a reboot, all is clean 8)

So far I have not tried all the certificate-renewal scenarios from the ZS GUI. I hope there are no other code paths likely to bring back the original pattern 👿

If there are, a more aggressive solution 😈 would be to abandon certificate management from the GUI entirely and code a tool script this way:
https://www.zeroshell.org/forum/viewtopic.php?t=5061
…to import all the certificates from outside and force them into the right places inside ZS. I hope I will escape that 😡

NB: While doing such things, the browser on the computer used to access the ZS GUI may become very annoying with certificate errors, especially Firefox (it is just doing its job), and may even forbid you to complete the operation 😈
Internet Explorer is a bit more permissive: it screams but always offers an option to bypass… On Firefox, you may have to purge the local certificate database: the file cert8.db in the profile, plus the cache. After that you just have to reimport your personal certificates, added CAs etc.; nothing fatal.

    Best regards.

    in reply to: Need more control on the Local CA parameters #53744

    PatrickB
    Member

    Good morning & Happy New Year !

Yes, there are people tuning their ZS early on New Year’s Day 😆

Thanks Garfield, it helped me find some of the items from my initial questions.

    * The source data for the CA appear to be there:

root@janus2> ll /var/register/system/ssl/ca
total 32K
-rw-r--r-- 1 root root 6 Sep 28 2004 StateOrProvince
-rw-r--r-- 1 root root 3 Jan 29 2015 WebExport
-rw-r--r-- 1 root root 4 Nov 19 2004 countryName
-rw-r--r-- 1 root root 3 Jan 29 2015 days
-rw-r--r-- 1 root root 4 Jan 29 2015 keysize
-rw-r--r-- 1 root root 9 Sep 28 2004 localityName
-rw-r--r-- 1 root root 5 Sep 28 2004 organizationName
-rw-r--r-- 1 root root 7 Sep 28 2004 organizationalUnitName

As you can see, most of them were not updated when I installed my own LocalCA; this is a bit dirty, but not critical…

    * What is used from the CA info:

    The ‘kerbynet.cgi/scripts/x509_createAdminCert’ and ‘…/x509_createDefaultCert’ both use only:

    NBIT=`cat $REGISTER/system/ssl/ca/keysize 2>/dev/null`
    DAYS=`cat $REGISTER/system/ssl/ca/days 2>/dev/null`
    [ -z "$NBIT" ] && NBIT=1024
    [ -z "$DAYS" ] && DAYS=365

    * How the SAN is built (by ‘…/x509_createDefaultCert’):

    echo -n "subjectAltName = DNS:`hostname`" >> $TMP
find /var/register/system/net/interfaces/ -name IP -type f -exec awk '{printf(", IP:%s",$0)}' {} \; >> $TMP

This is awful because:
– the files “…/net/interfaces/…/IP” appear to have kept obsolete IP addresses,
– this can disclose my LAN-side IPs in certificates made for the WAN side,
– this will not let me specify a wanted name for a given certificate.

    How to change that ?

It is always the problem with GUIs: you must either hardcode or write a complex editor for data most people won’t care about, so it is often a painful job for nothing…

The simplest solution is always declarative: a template file in a known place, with stern warnings, edited either by hand or with a GUI file editor…

For the set of IP addresses, I understand the need to fetch them from the system, but it should be done only once, when the network structure changes, and the admin should be able to deliberately copy and filter the result.
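To make the declarative idea concrete, a sketch (the template file name and location are my choice, not anything ZS has today; the demo builds its template in a scratch directory):

```shell
#!/bin/sh
# Hypothetical: the cert script would read its SAN from an admin-owned
# template instead of rescanning the interfaces at every generation.
# Real location could be e.g. /opt/mods/san.tpl -- an assumption.
T=`mktemp -d`
SAN_TPL="$T/san.tpl"
echo "DNS:janus, DNS:janus.mydomain.lan, DNS:lan.mydomain.org" > "$SAN_TPL"

if [ -r "$SAN_TPL" ] ; then
    SAN=`cat "$SAN_TPL"`
else
    SAN="DNS:`hostname`"     # fallback: current behaviour, minus the IPs
fi
echo "subjectAltName = $SAN"
rm -rf "$T"
```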

    OK, now I have found where to hack to have clean certificates.

    More ideas for usability

Using openssl directly is tricky, but there is a nice piece of software named XCA that lets anybody a bit aware of the principles manage all his SSL data safely and cleanly.

Wouldn’t it be simpler to allow importing all the SSL items into ZS this way? Currently ZS insists on having its own CA and making its host and admin certificates by itself… with the issues above.

    And since I have twin ZS systems, you imagine that it means 2 CAs πŸ™‚

This leads me back to this question; I’d like your thoughts:
    https://www.zeroshell.org/forum/viewtopic.php?t=4904

    Thanks, Best regards.

    in reply to: Need more control on the Local CA parameters #53742

    PatrickB
    Member

    Hello.

I tried again but could not find it.

    In /etc/ssl/openssl.cnf the directives for SAN are all commented:

    # This stuff is for subjectAltName and issuerAltname.
    # Import the email address.
    # subjectAltName=email:copy
    # An alternative to produce certificates that aren't
    # deprecated according to PKIX.
    # subjectAltName=email:move

    There is no section alt_names…

But the generated host certificates have a SAN with IPs, some of which were removed ages ago and no longer exist in the DNS zone:

    X509v3 extensions:
    X509v3 Extended Key Usage:
    TLS Web Server Authentication, TLS Web Client Authentication
    X509v3 Subject Alternative Name:
    DNS:zzzzz.domain.tld, IP Address:192.168.yyy.2, IP Address:192.168.xxx.1, IP Address:192.168.xxx.2, IP Address:192.168.xxx.4, IP...

Did somebody find where and how it does that? Of course it works, but this is messy.

    Thanks, Best regards.

    in reply to: import file LDIF #53495

    PatrickB
    Member

    Hello.

    If you are ready to try using a shell, you may find interesting elements there:
    https://www.zeroshell.org/forum/viewtopic.php?t=5060

At least it will help you diagnose the issue.

    Best regards.

    in reply to: Support for Alix LEDs #46622

    PatrickB
    Member

    Hello.

    I downloaded the program and the source file at:
    http://linux.1wt.eu/alix/util/alixled/

    Very bad surprise 😯

Apparently the code was “simplified” compared to the how-to provided 3 posts above!

All it is able to do now is blink the first LED at 1 Hz or 10 Hz, depending on whether any argument is present.

    Then I explored the parent directories and downloaded the archives there:
    http://linux.1wt.eu/alix/util/

    There is a compiled program alix-leds and its source file.

This one can address the 3 LEDs and use them in various ways:
    – blink slow or fast,
    – show the CPU load or the disk activity,
    – show the state of the network interfaces…

Unfortunately the network state shown is just up/down, not flashing with the activity, so it is almost perfectly useless, unless you have a process randomly touching your NICs and need to monitor how you are configured 😆

I could only use it to turn an LED on or off, by pointing it at the state of a network interface that is always up, or one that does not exist 😆

And after that, it is necessary to kill the process, because there is not even an option to set an LED and exit 😥

Working GPIO support on the Alix will be really welcome 🙁

Viewing 15 posts - 1 through 15 (of 30 total)