In theory, the Zeroshell software itself is not really going to be your problem.
If you are handling a million users, network topology and how you deploy across the OSI layers become far more important. I highly doubt that serving a million users with any single solution, however scalable, is really practical.
At the university I worked at, we handled 50,000 users with multiple hardware and software solutions. We had multiple OC lines with hardware deep packet inspection, and network registration and layer 2 were handled by an array of NetBSD servers.
Layer 3 was handled by managed switches, around 5-6 per building.
With a little more information I might be able to give you a few ideas, but right now your question is a little too vague. Is this set-up in a relatively localized area, or spread over a distance with point-to-point fiber? What topology are you planning to deploy? Is deep packet inspection important? Do you need centralization, or is decentralization more important?
Let us assume you are deploying 1,000 nodes with 1,000 users per node, interconnected by 10 Gb OTN/Ethernet in a tree/mesh/hybrid topology. You would then need roughly 2 servers per node. It would not be too hard to deploy Zeroshell at each node, but for interconnecting the 1,000 nodes you are going to want something with more flexibility, such as BSD or Linux servers.
Zeroshell is fabulous for SOHO and small/medium business solutions (on the order of a /16 subnet), but you are really talking about WAN/ISP scale, and that is going to bring all the complexity that goes along with it.
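To make the scale mismatch concrete, here is some back-of-the-envelope arithmetic (my own numbers based on the hypothetical 1,000-node scenario above, not a hard limit of Zeroshell):

```python
# Usable IPv4 hosts in a /16: 2^(32-16) minus network and broadcast addresses.
hosts_per_16 = 2 ** (32 - 16) - 2

users = 1_000_000
nodes = 1_000
users_per_node = users // nodes

# How many /16-sized networks would a million users need?
networks_needed = users / hosts_per_16

print(f"Usable hosts in a /16:   {hosts_per_16}")        # 65534
print(f"Users per node:          {users_per_node}")      # 1000
print(f"/16 networks for 1M:     {networks_needed:.1f}") # ~15.3
```

So a single /16 covers only about 65,000 hosts; a million users would need roughly sixteen of them (or one very large aggregate), while each individual node would fit comfortably in a /22. That is why the per-node deployment is easy and the interconnect is the hard part.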