Forum Replies Created
Ahh – looks like a typo in the original instructions – there is no scripts directory.
Instead of the copy statement:
cp -a /Database/vmware-tools/scripts/etc/* /etc
It should be:
cp -a /Database/vmware-tools/startup/etc/* /etc
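Not part of the original post, but a guarded version of that copy would have surfaced the scripts/ vs. startup/ typo immediately instead of failing partway through; a minimal sketch, using the paths from the posts above:

```shell
# Sketch: same copy as above, but fail loudly if the source path is wrong
# (a wrong path here is exactly the scripts/ vs. startup/ typo).
SRC=/Database/vmware-tools/startup/etc
if [ -d "$SRC" ]; then
    cp -a "$SRC"/* /etc
    copied=yes
else
    echo "warning: $SRC does not exist; check the path" >&2
    copied=no
fi
```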
Mine is working now, though I did have to change the line that starts VMware Tools from:
In my case, I’m using VMware Workstation 7 and its version of VMware Tools… It’s not quite working…
First, the script that kicks off vmware-tools is in rc.d on mine, not init.d.
Next, after a reboot, it isn’t running. When I manually execute run.sh, I get this:
cp: cannot stat '/Database/vmware-tools/scripts/etc/*': No such file or directory
Warning: Unable to find VMware Tools's main database /etc/vmware-tools/locations
I’m pretty sure this is exactly what they are talking about. I just ran across this thread because I too need to return custom attributes to indicate whether the user has ReadOnly or ReadWrite access to the device. The device in this instance is a Cisco Wireless LAN Controller, which requires a return attribute before it will let users log in…
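For what it’s worth, on a stock FreeRADIUS setup the WLC’s management access level can be steered with the Service-Type reply attribute (Administrative-User for read-write, NAS-Prompt-User for read-only). A hypothetical users-file sketch — the usernames and passwords here are made up:

```
# hypothetical entries in the FreeRADIUS "users" file
wlc-admin   Cleartext-Password := "changeme"
            Service-Type = Administrative-User

wlc-viewer  Cleartext-Password := "changeme"
            Service-Type = NAS-Prompt-User
```

Whether ZeroShell exposes this reply attribute in its interface is a separate question, but this is the attribute the WLC keys off.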
Thanks for the tip on how to make this change more permanent in my installation.
I agree with the idea of a transparent proxy, in theory. In practice, however, it hasn’t worked out that well for me. There were a few problems when I tried to use a transparent proxy a year or so ago (with Squid running under pfSense). The only one I can specifically recall is that iTunes sporadically had trouble downloading paid content. I could disable the transparent proxy and iTunes would immediately start working after having failed for hours.
That worked perfectly. My iPhone is now connected to my local WiFi network with EAP-TLS.
Thanks! That worked great.
Are you planning to include a WebGUI switch for this in future versions? Generally, I expect most people would rather have the transparent feature enabled, but maybe there are enough oddballs like me out there that this would be a decent feature to include.
As it is, I expect this to stop working the next time I reboot ZeroShell, correct? (I’m running the VMware edition of beta 10)
Sounds good – It uses pkcs12 (though I think it may also be able to use .pem).
For a long-term fix, it would be nice to have the user’s password be the passphrase the cert is protected with by default (selectable via a checkbox). I don’t know if the passwords are stored in a recoverable form, though, so that may not be possible (if you are keeping passwords one-way hashed).
I had never thought about it until now, but from a security standpoint, having certs exported with passphrases makes sense. That way if they are distributed insecurely (like via email) and fall into the wrong hands, they aren’t compromised. Of course, if the sender also includes the passphrase in the email, it wouldn’t matter. 🙂 Unfortunately, I’ve seen things like that happen in my work environment.
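For anyone wanting to do this export by hand, OpenSSL can wrap a cert and key into a passphrase-protected PKCS#12 bundle; a self-contained sketch (the file names and passphrase are made up, and the self-signed cert is generated purely so the example runs on its own):

```shell
# Throwaway self-signed cert, just so the example is self-contained:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-user" -keyout user.key -out user.crt 2>/dev/null

# Bundle the cert and key; -passout sets the export passphrase:
openssl pkcs12 -export -in user.crt -inkey user.key \
    -out user.p12 -passout pass:s3cret

# Importing the bundle (e.g. on a phone) requires the passphrase:
openssl pkcs12 -info -in user.p12 -passin pass:s3cret -noout
```

If the bundle falls into the wrong hands without the passphrase, the private key inside stays protected — which is the point made above.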
At my place of work, we have multiple vendors that each seem to use their own hardware and client. We’ve found that you have to experiment with some of the clients to get them to work. With some, the username@domain seems to be the only way to get it to work, but with others the username alone seems to suffice.
I can understand the reasoning behind that, and, truthfully, multiple admins alone wouldn’t solve one issue that we have.
Is there any way to add/edit/delete users other than through the web interface?
The reason I’m asking this is because two different groups of people at my workplace are responsible for different portions of zeroshell. Our security group is responsible for adding/editing/deleting users. Our network operations department is responsible for the access points and troubleshooting and typically has the user set their own password at the time of the installation (after the user has been added by the security department).
Network Operations really does not want the Security department to have access to anything but the user add/edit/delete portion, as they should not need access to anything else.
This has only become a problem now that we’ve actually switched everyone over to WPA Enterprise security using Zeroshell (which was done entirely by the Network Operations group) and it’s about time to hand things over to the Security department. We’d like to provide them with the cleanest, easiest way to do just the portion of the job that they require. If we can accomplish this through another user interface, that would be fine. It would probably also solve the potential issue of someone in the Security group logging into Zeroshell for user maintenance and knocking someone in the Network Operations group out.
Any ideas on what we could do to accomplish our goal?
Is this a feature for the future?
Ok – I tested out TTLS with my wife’s Mac the other night… She’s been using it ever since without complaint. I only changed the checkbox from PEAP to TTLS and unchecked the MSCHAPv2 box (in the TTLS properties) so it would use PAP, as suggested by the university site you pointed to above. I’m guessing it would probably work with MSCHAP too, but I haven’t tested it yet. Of course, I didn’t have any certificate issues, since the certificate was already trusted from when I used PEAP.
At any rate, we’ve just started converting clients over to WPA2 Enterprise at my work today, using PEAP to minimize what has to be installed on the client machine. So far, so good.
Well, we figured the problem out. I wish I could say it was some obscure setting on the AP, or a problem with the test client machines, but the cause actually turned out to be much simpler…
Network Operations is evaluating a product called “Air Defense”, which they thought was running in a passive mode, only listening and reporting on what it heard, like an IDS. Today, though, we discovered that the Air Defense sensors were actively terminating the connections of these “rogue” access points. Turn off the nearby Air Defense sensor, and suddenly the wireless connections on our test units were rock solid using WPA Enterprise security. The area where we were testing my home configuration didn’t have a sensor nearby, so it worked great there; but as soon as I changed the IP addressing to match our production environment and moved the equipment to where it would actually operate, a sensor would see our “unauthorized traffic” and knock us off every time.
We still haven’t completely sorted this issue, but I’m starting to think it has something to do with interference. In a different office a good distance away from the rest of the APs, we can get WPA authentication via Zeroshell and the Linksys WRT54GS, and it stays up just fine.
Does anyone have suggestions for good replacement APs that have worked well in a decent-sized environment with WPA Enterprise authentication?
One other unusual thing about this setup: I’m using public IP Addresses on zeroshell and the access points.
I don’t really see how this could cause this behavior, but today I loaded my home config on the zeroshell machine at work and we created a lab environment that pretty much matched my home, as far as IP Addressing goes. It worked fine with our test hardware (laptop, AP, and Zeroshell server).
We then moved it to our production network, changed the IP Addressing of the zeroshell machine and the AP to public IPs and modified the Access Point section in Zeroshell to use the new (public) IP address of the AP (keeping a simple test shared secret) and pointed the AP to the new (public) IP of the zeroshell machine. Logging in via wireless would result in a connection for only a few seconds. I could actually get a few pings from the wireless client to the zeroshell server, then it would drop off.
After this failure, I tried this test:
1. I enabled the DHCP server on the AP
2. I attached zeroshell directly to one of the Ethernet ports on the AP
3. I enabled the NTP server on zeroshell
4. I configured the AP to use zeroshell as its NTP source
5. I disconnected the link to the production network
In this scenario, the production Ethernet switches are out of the loop, as is all Internet connectivity. In this case, the laptop would connect, grab an IP Address from the AP, get a few pings and replies from the wireless client to zeroshell, then lose connectivity.
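To put numbers on that “a few pings, then nothing” pattern, a timestamped probe loop makes it easy to see exactly when connectivity dies and whether it ever comes back; a rough sketch (the target address is a stand-in — point it at the wireless client or the zeroshell box):

```shell
# Sketch: timestamped reachability log. TARGET is hypothetical –
# set it to the wireless client's (or zeroshell's) address.
TARGET=${TARGET:-127.0.0.1}
for i in 1 2 3 4 5; do
    if ping -c 1 -W 2 "$TARGET" >/dev/null 2>&1; then
        echo "$(date +%T) reply from $TARGET"
    else
        echo "$(date +%T) NO reply from $TARGET"
    fi
    sleep 1
done
```

Run on the laptop during an authentication attempt, the timestamps of the first “NO reply” line can then be matched against the radiusd and 802.1X logs.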
So, at this point, I’ve eliminated hardware (since it worked fine with my original configuration using private IP Addresses, and the production hardware is not in use). Very minimal config changes were made to move the config to public IP Addresses. I don’t see how private vs. public addresses would cause this problem…
Why isn’t this working??? Is anyone else using public IPs on their zeroshell box and their APs?
I’m back at work and verified that I am seeing those messages in the 802.1X log that match up with the radiusd log.