The death of your controller: It was the network engineer in the data center with 802.11ac APs
In my previous post about gigabit speeds in Wi-Fi, I started a discussion about the next wave in wireless LANs as embodied in the forthcoming 802.11ac and 802.11ad standards. Both are interesting, but for different reasons. 802.11ac is explicitly backwards-compatible with existing 802.11a/b/g/n networks, and so it is more evolutionary. (Yes, this does mean that at some point, we’ll start lumping 802.11n with everything else as “legacy.”)
Just as with 802.11n, a good chunk of the excitement comes from having More! Speed! Higher data rates are supported by both existing and novel techniques. In the “old hat” column of familiar techniques, 802.11ac will use OFDM, wider radio channels, and single-user MIMO. Advances in both chip manufacturing and processing power have also made it possible to use denser modulation (256-QAM instead of the “mere” 64-QAM in 802.11n) and more aggressive error-correction codes that carry fewer redundant bits.
Channels continue their expansion. 802.11n began by expanding legacy channels from 20 MHz to 40 MHz; 802.11ac adds 80 MHz and 160 MHz channels. The wider the channel, the more throughput is available, though at the cost of reducing the number of non-overlapping channels that can be used. For a concrete example of channel widths, consider the unlicensed 5 GHz spectrum, which consists of the three Unlicensed National Information Infrastructure (U-NII) bands in the US (5.15–5.25, 5.25–5.35, and 5.725–5.825 GHz), plus a worldwide harmonized band that is available in many more countries (5.47–5.725 GHz). If a wireless LAN uses 80 MHz channels, there are only three to five channels available for 802.11ac:
| Channel width | U-NII channels (USA) | Worldwide harmonized channels | Total channels |
|---|---|---|---|
| 80 MHz | 3 | 2 | 5 |
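As a quick sanity check on those counts, here's a short Python sketch (my own illustration, not from the standard) that tests which of the defined 80 MHz channel center frequencies fit inside the band edges quoted above. It relies on the usual IEEE numbering rule, center frequency = 5000 + 5 × channel number (in MHz), and the six 80 MHz channel numbers 42, 58, 106, 122, 138, and 155:

```python
# Count how many 80 MHz 802.11ac channels fit inside each band.
# Band edges are the ones quoted in the text above, in MHz.
US_UNII_BANDS = [(5150, 5250), (5250, 5350), (5725, 5825)]
WORLDWIDE_BAND = [(5470, 5725)]
VHT80_CHANNELS = [42, 58, 106, 122, 138, 155]  # 80 MHz channel numbers

def channels_fitting(bands, channels, width=80):
    """Return the channels whose full width lies inside some band."""
    half = width / 2
    fits = []
    for ch in channels:
        center = 5000 + 5 * ch  # IEEE rule: 5000 + 5 * channel number
        if any(lo <= center - half and center + half <= hi for lo, hi in bands):
            fits.append(ch)
    return fits

us = channels_fitting(US_UNII_BANDS, VHT80_CHANNELS)
world = channels_fitting(WORLDWIDE_BAND, VHT80_CHANNELS)
print(us, world)             # [42, 58, 155] [106, 122]
print(len(us) + len(world))  # 5
```

Note that channel 138 (5650–5730 MHz) pokes just past the 5725 MHz band edge, which is why the worldwide band yields two 80 MHz channels rather than three.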
Increasing the amount of spectrum consumed by a channel increases speed, but the real gains in total LAN throughput come from improving efficiency (the number of user bits moved per megahertz).
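To put a number on that efficiency gain, here's a small sketch using the published single-spatial-stream, short-guard-interval rates (150 Mbps for 802.11n MCS 7 in 40 MHz; 433.3 Mbps for 802.11ac MCS 9 in 80 MHz) to compute bits per second per hertz:

```python
# Rough spectral-efficiency comparison: user bits moved per hertz,
# using published single-stream short-guard-interval data rates.
rates = {
    "802.11n, MCS 7, 40 MHz":  (150.0, 40),   # (Mbps, MHz)
    "802.11ac, MCS 9, 80 MHz": (433.3, 80),
}
for name, (mbps, mhz) in rates.items():
    print(f"{name}: {mbps / mhz:.2f} bits/s/Hz")
# 802.11n, MCS 7, 40 MHz: 3.75 bits/s/Hz
# 802.11ac, MCS 9, 80 MHz: 5.42 bits/s/Hz
```

The roughly 45% jump per hertz is the contribution of 256-QAM and the lower-overhead coding; the rest of 802.11ac's headline speed comes from simply consuming more spectrum.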
One promising technique under consideration to improve efficiency is “multi-user” MIMO (MU-MIMO). Existing “single-user” MIMO systems like 802.11n use multiple radios to “steer” a transmission toward a single client or, in some cases, to become selectively deaf to one. Multi-user MIMO uses the same steering techniques, but rather than steering a single transmission to a single client, it can steer data streams to multiple clients at the same time. If two client devices lie in different directions from the AP, MU-MIMO gives you “spatial reuse”: because the transmissions go into different areas, they don’t interfere. Perhaps the best analogy for MU-MIMO is an Ethernet switch, which increases throughput by reducing contention between clients.
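The switch analogy can be made concrete with a toy throughput model. The per-client rate and the four-stream limit below are illustrative assumptions of mine, not figures from the 802.11ac draft:

```python
# Toy model for the "Ethernet switch" analogy: a single-user AP serves
# one client at a time, while a MU-MIMO AP can serve up to `streams`
# spatially separated clients concurrently.
def aggregate_throughput(clients, per_client_mbps, streams=1):
    concurrent = min(clients, streams)  # clients served simultaneously
    return per_client_mbps * concurrent

print(aggregate_throughput(8, 200, streams=1))  # single-user MIMO: 200
print(aggregate_throughput(8, 200, streams=4))  # MU-MIMO: 800
```

Like a switch replacing a hub, the win comes from parallelism, not from making any one link faster.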
For its part, 802.11ad has a different, and much more familiar, method of increasing efficiency. Free-space path loss (FSPL) is roughly proportional to the square of the frequency, so moving from 5 GHz to 60 GHz dramatically reduces range. Shorter range means more APs, but it is also an advantage when designing a network for high-density environments with large numbers of APs, because neighboring cells interfere with each other less.
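How dramatic is the reduction? The standard free-space formula is FSPL = 20·log10(4πdf/c), and because the frequency term is separable, the 60 GHz penalty relative to 5 GHz is the same at any distance: 20·log10(12) ≈ 21.6 dB. A quick sketch:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

loss_5 = fspl_db(10, 5e9)    # ~66.4 dB at 10 meters
loss_60 = fspl_db(10, 60e9)  # ~88.0 dB at 10 meters
print(round(loss_60 - loss_5, 1))  # 21.6 dB, independent of distance
```

An extra 21.6 dB of path loss is a factor of more than 100 in received power, which is why 60 GHz cells are so small.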
Your controller is dead, Jim. I’m a network engineer, not a miracle worker!
The VHT study work is one of those quiet moments of transition where the world begins to change. At first, the change is slow and beneath the surface, but opposing it is like opposing plate tectonics. 802.11n was the first nail in the controller’s coffin, because building a controller that worked at 802.11n data rates was extremely difficult, but possible. With 802.11ac and 802.11ad, it’s time to start preparing for the controller’s funeral.
Centralized processing simply does not work in a Gigabit world, especially if you have the high spatial reuse of MU-MIMO or 60 GHz operation.
Back in 2003, it was possible to purchase a security chip that provided 1 Gbps of encrypted throughput, which was enormous compared to the 54 Mbps 802.11a/g networks widely used at the time. Flash forward to 2007: security processors had more than doubled to 2.5 Gbps, but wireless LANs were running draft-N at 135 Mbps. By 2010, dual-band 802.11n devices were up to 300 Mbps, and 2011 should bring new 802.11n devices that run at 450 Mbps.
With the potential for gigabit draft-11ac devices, there is no way to build a cryptographic processor fast enough to touch every frame flowing through the network. Raw speed is only the start of the problem. Some experts within the 802.11 working group are not sure that the existing cryptographic algorithm, CCMP, can keep up with 1 Gbps of throughput. It might be necessary to define a new cipher to protect 802.11ac frames, and none of the security processors commonly used in controllers would have hardware support for it.
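A back-of-envelope calculation (my own arithmetic, using a full-size 1500-byte frame as a rough stand-in) shows how tight the per-frame budget gets at gigabit speeds:

```python
# Per-frame time budget for a centralized crypto engine at line rate.
line_rate_bps = 1e9   # 1 Gbps of 802.11ac traffic through the controller
frame_bytes = 1500    # assume full-size data frames for simplicity

frames_per_sec = line_rate_bps / (frame_bytes * 8)
budget_us = 1e6 / frames_per_sec
print(round(frames_per_sec))  # ~83333 frames every second
print(round(budget_us, 1))    # ~12.0 microseconds per frame
```

Twelve microseconds to receive, decrypt, inspect, and forward each frame, at a single choke point, for every AP on the network. Smaller frames make the budget tighter still.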
The solution? Stop having a controller that touches every encrypted frame in the tsunami of data on your wireless LAN, farm out the security processing to individual APs, and take advantage of the natural parallelism of the network topology. The inexorable momentum of higher speeds is why the rest of the industry is adopting pieces of Aerohive’s architecture. If you’re buying a network, the question is whether you want the original distributed architecture or a knock-off.
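The scaling argument can be sketched in a few lines. The capacity figures below are illustrative assumptions (the controller number echoes the 2007-era security processors mentioned earlier), not vendor specifications:

```python
# Why distributed crypto scales: aggregate capacity grows with the AP
# count, while a central security processor is a fixed ceiling.
controller_crypto_gbps = 2.5  # one central security processor (assumed)
ap_crypto_gbps = 1.0          # per-AP encryption capacity (assumed)

for ap_count in (10, 50, 100):
    distributed = ap_count * ap_crypto_gbps
    print(f"{ap_count} APs: {distributed} Gbps distributed "
          f"vs {controller_crypto_gbps} Gbps centralized")
```

However fast the central box gets, the distributed total grows linearly with every AP you add.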