Zeroshell beta 16 on HP ProLiant DL380 G3?


This topic contains 2 replies, has 0 voices, and was last updated by  rharrison 7 years, 2 months ago.

Viewing 4 posts - 1 through 4 (of 4 total)
  • #43171

    rharrison
    Member

    Hi,

    I’ve been trying to install Zeroshell to the hard drive on my server and have been encountering several problems that I hope someone else can shed some light on.

    I have 2 SCSI drives configured in RAID 0, so I booted from the live CD.

    Now when I run fdisk -l, only my USB stick shows up, as /dev/sda1.

    I can, however, run fdisk /udev/cciss/c0d0.

    I used options o and w, mounted my USB stick, and copied the files as in the installation docs. Then I went back to fdisk /udev/cciss/c0d0 and set the first partition active.

    Now when booting I get the error:

    Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(0,0)

    So I’ve gone back to the live CD, mounted the first partition I just created, looked at device.map in the grub folder, and changed it to /udev/cciss/c0d0.

    I also tried editing grub.conf to include root=/udev/cciss/c0d0p1. I think the fstab on the 2nd partition I created needs to be edited as well, but I can’t see how to do that properly. Still the kernel panic.

    Does anyone have any ideas?

    #52021

    orlroc
    Member

    I have the same problem… same server, and I can’t use the hard disk. It only creates the partitions (giving an error, but creating them anyway) and then won’t let me do anything with them. Not even create profiles.

    Any help?

    Thanks
    orlroc

    #52022
    #52023

    honmanm
    Member

    Adapting the above instructions on a DL360G4p

    Edit #2!

    (I think this will be the same as a DL380 – it’s all about the Smart Array controller and its drivers). And here I’m using ZS beta 16.

    I went a slightly different route from the instructions in the last post.

    First, the system was booted from a ZS CD, with the CF image and ps_initrd.sh files on a FAT32 filesystem on a USB device.

    The CD has the Smart Array (cciss) drivers, but they are only loaded after bootup. The logical volumes on the array can be accessed as /udev/cciss/c0d0 etc. Once the system boots from the array, these device nodes are not created, because the cciss driver has to be loaded very early in the boot process. There may be some kind of penguin-foo that would allow the udev devices to be used throughout.

    I had intended to just copy the CF/IDE/SATA image onto the Smart Array LV, then tweak the initrd following the instructions referenced above – BUT unfortunately the “rootfs” RAMdisk also needs to be changed, which means that one must follow an approach like the USB key preparation (http://www.zeroshell.net/eng/forum/viewtopic.php?t=2971).

    Working in the shell opened from the Zeroshell console menu…

    first, create device files for the Smart Array driver (actually this is optional when booted from the ZS CD, as the device nodes exist under /udev).

    mkdir /dev/cciss
    mknod /dev/cciss/c0d0 b 104 0
    mknod /dev/cciss/c0d0p1 b 104 1
    mknod /dev/cciss/c0d0p2 b 104 2
    mknod /dev/cciss/c0d0p3 b 104 3
    mknod /dev/cciss/c0d0p4 b 104 4
    mknod /dev/cciss/c0d0p5 b 104 5
    mknod /dev/cciss/c0d0p6 b 104 6
    mknod /dev/cciss/c0d0p7 b 104 7

    Now copy the image onto the RAID array, e.g. if the USB device was mounted as /mnt/usbdisk:

    dd if=/mnt/usbdisk/ZeroShell-1.0.beta16-CompactFlash-IDE-USB-SATA-1GB.img.gz of=/dev/cciss/c0d0 bs=1M

    and mount the boot partition, also copying in ps_initrd.sh:

    mkdir /mnt/boot
    mount /dev/cciss/c0d0p1 /mnt/boot
    cp /mnt/usbdisk/ps_initrd.sh /mnt/boot

    Now open the initrd

    cd /mnt/boot
    ./ps_initrd.sh initrd.gz open
    cd initrd.gz-image

    Create cciss device files in the initrd’s dev directory, as above.
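    The eight mknod calls from earlier can be condensed into a loop; this is a sketch assuming you are still sitting in the initrd.gz-image directory, so the nodes land in its dev/ rather than the live system’s:

    ```shell
    # create the cciss block device nodes (major number 104) inside the initrd's dev/
    mkdir -p dev/cciss
    mknod dev/cciss/c0d0 b 104 0
    for N in 1 2 3 4 5 6 7 ; do
        mknod dev/cciss/c0d0p$N b 104 $N
    done
    ```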

    Create directories for drivers, and copy in the cciss driver

    mkdir -p lib/raid/chipsets
    mkdir -p lib/raid/deps
    cp /cdrom/modules/2.6.25.20/kernel/drivers/block/cciss.ko lib/raid/chipsets

    Edit linuxrc
    add the two new directories to the list to be used for loading modules

    for M in /lib/usb/deps/* /lib/sata/deps/* /lib/raid/deps/* ; do
        /sbin/insmod $M 2>/dev/null
    done
    for M in /lib/usb/host/* /lib/sata/chipsets/* /lib/raid/chipsets/* ; do
        /sbin/insmod $M 2>/dev/null
    done

    and add the following scan of the cciss devices just after the loop which probes the USB and SATA devices

    echo "Probing [$I] RAID Array devices to startup the system ..."
    for N in 0 1 2 3 ; do
        if mount /dev/cciss/c0d${N}p2 /cdrom 2>/dev/null ; then
            DEV=cciss/c0d${N}p
            CDROM=$DEV
        fi
    done

    In the other mount commands in linuxrc, remove the mount type specification (-t iso9660) so that the “cd” partition can be either a genuine iso9660 file system or any other type.
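    One way to make that edit non-interactively (a sketch — check the exact spelling of the mount lines in your linuxrc first, and note this assumes a sed with -i support on the live system):

    ```shell
    # drop the explicit iso9660 type from the mount commands in linuxrc,
    # so the "cd" partition may also be ext3
    sed -i 's/ -t iso9660//g' linuxrc
    ```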

    Tidy up

    cd /mnt/boot
    ./ps_initrd.sh initrd.gz close

    Now we need to create a new root FS with cciss device nodes in /dev,
    change the file system type for /cdrom, and doctor the storage-detection script to scan /dev rather than /udev.

    The problem here is that the iso9660 CD file system (partition 2) is compressed and an ext3 partition needs to be about 500MB in size to contain the same data as the 174MB CD file system.

    So backup the “CD” and Profiles partitions to the USB device:

    mkdir /mnt/cd
    mkdir /mnt/profiles
    mount /dev/cciss/c0d0p2 /mnt/cd
    mount /dev/cciss/c0d0p3 /mnt/profiles
    cd /mnt/cd
    tar cf /mnt/usbdisk/cd.tar .
    cd /mnt/profiles
    tar cf /mnt/usbdisk/profiles.tar .
    umount /mnt/cd
    umount /mnt/profiles

    then use fdisk to repartition the Smart Array LV (this example has 1GB for the ‘CD’ partition and 3GB for Profiles). Note that fdisk is run on the whole LV, /dev/cciss/c0d0, not on a partition:

    fdisk /dev/cciss/c0d0
    d 3        (delete partition 3)
    d 2        (delete partition 2)
    n p 2 +1GB (new primary partition 2 for the ‘CD’ data)
    n p 3 +3GB (new primary partition 3 for Profiles)
    w          (write the table and exit)

    If after this the kernel still uses the old partition table you will need to reboot from CD (and recreate the cciss device nodes) before proceeding.
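    Before resorting to a reboot, it may be worth asking the kernel to re-read the table in place; blockdev is usually present on a Linux live system, though whether the re-read succeeds on the cciss driver of that era is not guaranteed:

    ```shell
    # try an in-place re-read of the partition table; fall back to a CD reboot
    blockdev --rereadpt /dev/cciss/c0d0 || echo "re-read failed - reboot from the ZS CD"
    ```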

    Create the new file systems and restore the backed-up data (note that the profiles volume needs to be labelled)

    mkfs -t ext3 /dev/cciss/c0d0p2
    mkfs -t ext3 /dev/cciss/c0d0p3
    tune2fs -L Profiles /dev/cciss/c0d0p3
    mount /dev/cciss/c0d0p2 /mnt/cd
    mount /dev/cciss/c0d0p3 /mnt/profiles
    cd /mnt/cd
    tar xfBp /mnt/usbdisk/cd.tar
    cd /mnt/profiles
    tar xf /mnt/usbdisk/profiles.tar

    Now mount the rootfs using ps_initrd

    cd /mnt/cd/isolinux
    /mnt/usbdisk/ps_initrd.sh rootfs open

    Create cciss device nodes again, in the dev directory within rootfs-image.

    Edit the storage scanning script (/mnt/cd/isolinux/rootfs-image/root/kerbynet.cgi/scripts/storage) and change all “/udev” references to “/dev”.
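    A quick way to do that substitution in one go, assuming a sed with -i support on the live system:

    ```shell
    # point the storage-detection script at /dev instead of /udev
    sed -i 's|/udev|/dev|g' /mnt/cd/isolinux/rootfs-image/root/kerbynet.cgi/scripts/storage
    ```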

    Edit the fstab file (/mnt/cd/isolinux/rootfs-image/etc/fstab) and change the file system type of /cdrom from iso9660 to ext3. Also change the mount option from ro to rw if desired.
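    Something along these lines should cover the fstab change (again a sketch — inspect the file first, since the exact layout of the /cdrom line may differ):

    ```shell
    # change the /cdrom entry's file system type from iso9660 to ext3
    sed -i '/cdrom/ s/iso9660/ext3/' /mnt/cd/isolinux/rootfs-image/etc/fstab
    ```

    Changing ro to rw on the same line, if desired, can be done the same way.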

    Close the rootfs, saving changes

    cd /mnt/cd/isolinux
    /mnt/usbdisk/ps_initrd.sh rootfs close

    Reboot the system from disk.

    I think that’s the lot – there was a fair amount of hacking needed to find out how it all works so there may be missing and/or unnecessary steps there.

