Sunday, November 20, 2011

Wi-Fi Overhead, Part 2: Solutions to Overhead

On 05.02.11, In cwnp_wifi_blog, By Marcus Burton

This is the second article in a two-part discussion about WLAN overhead. Part 1 (Sources of Overhead) demonstrated that there are many sources of overhead on Wi-Fi networks. Much of the overhead is required for successful protocol operation, but that reality doesn't make it suck less. In fact, protocol overhead usually causes at least a 50% decrease in actual network throughput when compared with theoretical signaling rates. Ouch. Despite the painful reality of network overhead, there are a handful of important network design steps and configuration settings that can reduce overhead and optimize network performance. When you're forced to concede over 50% of the capacity to protocol overhead, you should fight to keep everything else. Here's how.

Interference

Interference is an inevitable part of a half-duplex wireless medium. However, there are two primary ways to reduce overhead caused by interference:

First, remove non-Wi-Fi transmitters whenever possible (I feel the need to say "duh!"). Many sources of non-802.11 interference degrade network performance but may not halt it altogether, and you may have interferers that you don't know about.

Second, reduce WLAN interference by planning and controlling your contention domains. This is a really HUGE topic that is fully explored elsewhere. The best way to decrease "busy medium" time (i.e. overhead) caused by other Wi-Fi devices is to increase the number of contention domains and create better separation between existing contention domains. You can increase the number of contention domains (within limits!) by being smart about AP placement, channel reuse, transmit power settings, antenna selection and RF shaping practices, and client device use. Remember that it's not just about adding more APs.
Find an acceptable balance that allows RF separation and provides enough capacity for each user/application. In addition to normal contention overhead, WLAN interference also causes CRC errors and retries. We'll talk about that in a minute.

Controlling the Necessary Functions

Most sources of overhead can't be eliminated. For example, interframe spaces, random backoff, PHY signaling, and MAC headers will always be there. These sources of overhead are a necessary part of the protocol. Accept their existence. But don't fall into helpless resignation; here are a few ways to reduce the impact of those necessary functions:

An extended interframe space (EIFS) is observed when STAs receive frames with CRC errors. Better separation of contention domains (less harmful interference) will minimize the number of EIFS deferrals.

In addition to using slow PHY rates for the MAC header and payload, 802.11b stations—AND backward-compatible BSSs with 802.11b rates enabled—also use considerably longer PHY signaling (PHY preamble and header). We'll talk about compatibility issues later when we address protection mechanisms, but the primary solution is to get rid of 802.11b.

One easy way to reduce MAC (and PHY) overhead is to eliminate unnecessary beacon streams. Instead of separating WLAN services with separate SSIDs, use dynamic user policy assignment.

Short Guard Intervals

802.11n offered us a number of additional overhead-reducing technologies. In Part 1 of this blog series, I mentioned the default 800 ns guard interval. 802.11n allows an optional 400 ns guard interval, which boosts theoretical rates by about 10%. However, avoid using short guard intervals in environments with high reflectivity (e.g. warehouses, manufacturing, industrial environments).

Frame Aggregation and Block Acknowledgments

802.11n also makes much better use of frame aggregation and block acknowledgments (they were introduced with 802.11e, but not widely used).
Where early WLAN operators made frames smaller (fragmentation) to avoid collisions, the higher PHY rates of modern networks allow for much larger wireless frames, which drastically improve efficiency. Packing more upper-layer data into each frame is the quintessential example of overhead reduction. If each aggregated frame were transmitted independently, we'd see much higher overhead from interframe spaces, backoffs, PHY signaling, and MAC headers. When aggregation is used, block acknowledgments are as well. Block acks add to the efficiency improvement by using an ack bitmap to indicate successful reception of multiple frames instead of transmitting an individual ack for each received frame. Suck on that, overhead! In most cases, enabling frame aggregation will produce significant capacity improvements. If there's a configuration option, opt for A-MPDU instead of A-MSDU; it's more efficient.

Protection Mechanisms

I mentioned previously that older technologies require more time for PHY signaling. In addition to being slow themselves, legacy stations also hold back the more efficient stations by requiring them to protect their own data transmissions. If you analyze the frame formats of protection frames (like RTS and CTS), you'll notice that they are actually very small at the MAC layer. However, if you also look at the PHY layer formats (something you should do if you're working toward CWAP), you'll notice that a legacy RTS/CTS exchange takes a considerable amount of time by virtue of the legacy PHY preamble and PLCP header. You also have to factor in an additional one or two SIFS. That's why protection mechanisms are not cool. As before, the solution is to get rid of legacy (particularly 802.11b and earlier) clients if your business case allows for it. If you can't get rid of them, the next best thing may be to use airtime fairness mechanisms to slant the odds in favor of newer technologies.
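To put some rough numbers on the protection penalty, here's a back-of-the-envelope sketch in Python. It uses the standard 2.4 GHz timing values (144 µs long DSSS preamble plus 48 µs PLCP header, 10 µs SIFS, 20-byte RTS, 14-byte CTS) and assumes the protection frames go out at 1 Mbps; treat it as an estimate, not a capture-accurate model.

```python
# Airtime cost of a legacy RTS/CTS protection exchange at a DSSS rate,
# compared with the bare OFDM PHY signaling it is protecting.

DSSS_PREAMBLE_US = 144 + 48   # long preamble + PLCP header
OFDM_PREAMBLE_US = 16 + 4     # OFDM preamble + SIGNAL field
SIFS_US = 10                  # 2.4 GHz SIFS

def dsss_airtime_us(frame_bytes, rate_mbps=1):
    """Airtime of a DSSS frame: fixed PHY signaling plus the MAC bits."""
    return DSSS_PREAMBLE_US + (frame_bytes * 8) / rate_mbps

rts = dsss_airtime_us(20)   # RTS is a 20-byte MAC frame
cts = dsss_airtime_us(14)   # CTS is a 14-byte MAC frame
protection = rts + SIFS_US + cts + SIFS_US

print(f"RTS: {rts:.0f} us, CTS: {cts:.0f} us")      # 352 us and 304 us
print(f"Full protection exchange: {protection:.0f} us")  # ~676 us
print(f"OFDM preamble alone: {OFDM_PREAMBLE_US} us")
```

Roughly 676 µs of protection airtime before a single data bit moves, versus 20 µs of OFDM signaling for a non-protected frame. That's why getting rid of 802.11b matters.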
A caveat worth considering: if you are seeing a high number of collisions on your WLAN and they are causing a noticeable performance impact, it may actually be worthwhile to enable RTS/CTS or CTS-to-Self even if you don't have legacy clients. The shorter protection frames will reserve the medium, allowing the larger data frame to follow with a lower likelihood of collision. I know, adding overhead to reduce overhead sounds crazy.

CRC Errors and Retries

Speaking of collisions, retries are a major source of overhead on many networks. Retries generally result from reception errors caused by interference, but there are a number of other causes. The rotten thing about retransmissions is that the first (failed) attempt already used up some airtime, the second attempt requires a longer backoff period, and retries often cause rate shifting (switching from a higher to a lower data rate) to improve reliability. After deploying a network and verifying its performance, you should identify a retry baseline. The goal for retries is (loosely) less than 10%, but as always, the environment and applications should dictate what is acceptable. Retries can be reduced by improving the signal-to-noise ratio (SNR) and reducing interference. Those two design goals bring us full circle back to controlling our RF contention domains with proper AP placement, channel reuse, antenna selection, and power output settings.

Data Rates

Finally, data rate support is a hot topic in WLAN design. We've already discussed 802.11b, and we know that it is bad for our networks. If you must keep 802.11b stations, consider disabling support for 1 and 2 Mbps. When a low data rate is mandatory for the BSS, a lot of airtime is used up by management traffic sent at low rates—these frames must be "receivable" by all stations. Disabling 1 and 2 Mbps is very common. If you don't support 802.11b at all, you may even be able to disable support for 6 (maybe 12 as well) Mbps, leaving 12 (or 18) Mbps as the lowest rate.
I would only do this in a very high-density deployment. In theory, you hope that your stations never use the low rates—because you designed for 24 Mbps and better, remember? In practice, you just can't control the RF domain with the same exactness as you'd like, and lower rates are useful for reliability. Most environments will be just fine with all OFDM rates enabled; removing the legacy rates and their accompanying PHY signaling is the most important step.

Final Comments and Suggestions (FCS)

When you know that overhead typically accounts for more than a 50% capacity loss, protecting the remaining capacity seems much more important. There are a lot of sources of overhead, and many of those sources can be kept at bay by designing your network properly and enabling the right features for your environment. Of course, we could talk the overhead topic to death, but not everyone needs to squeeze out every last drop of capacity. Let the applications dictate your design priorities, but don't let overhead take a big bite out of your wireless capacity. After all, capacity is limited.

At a high level, you can identify network overhead problems by comparing your signaling rates with your actual performance. Look for an unusually high number of 802.11 management and control frames compared with data frames. In the same way, look at your utilization statistics to see if your lowest rates are using a disproportionate amount of airtime. Also, keep an eye on your retries and CRC errors.

As always, thanks for reading! Feel free to share more tips about identifying and controlling WLAN overhead.
Wi-Fi Overhead, Part 1: Sources of Overhead

On 04.27.11, In cwnp_wifi_blog, By Marcus Burton

Radio communication requires overhead. Network protocols require overhead. Unfortunately, wireless network protocols like Wi-Fi are loaded with overhead. Some amount of overhead is necessary for effective communications and interoperability; however, there are also times when overhead is unnecessary. Proper network design and deployment can minimize this overhead and improve network performance. This article kicks off a two-part post that will identify the sources of overhead on WLANs (part 1) and then provide some recommendations for reducing it (part 2).

Interference — In a very generic sense, all sources of interference (non-802.11 and 802.11) create overhead. 802.11 devices must perform a clear channel assessment (CCA) to determine whether the wireless medium is busy or idle. Whenever the medium is busy, Wi-Fi stations twiddle their thumbs and wait. Non-802.11 sources of RF energy (above a certain threshold) can thus be thought of as a source of overhead.

Interframe Spaces — Before every Wi-Fi transmission, there is an idle period on the medium called an interframe space (IFS). There are multiple interframe space lengths; the purpose of an IFS is to regulate conversation flow and provide priority for certain types of transmissions.

Random Backoff — When 802.11 devices are contending for access to the wireless medium, they use a backoff algorithm that randomizes access to the medium. This process ultimately reduces collisions and is one way to achieve QoS prioritization. The random backoff time represents a number of "slots" (periods of time) during which the wireless medium must be idle.

PHY Signaling — Radio communication peers must perform some type of synchronization for reliable reception of frames. The PHY preamble is a series of bits used to perform this function.
The PLCP header follows the preamble and communicates the attributes of the following frame to the receiver so that the receiver knows how to process the data. In other words, the preamble and PLCP header are collectively the same as if a person were to say: "OK, I'm getting ready to say something. It's going to take me 20 seconds to say it and I'm going to say it pretty quickly, so listen up. Here goes."

MAC Header — A complex protocol like Wi-Fi requires that stations coordinate their operation. The MAC header is used to coordinate supported (or unsupported) features and functions. It is necessary, but it is still overhead. In data frames, the MAC header is so small (and usually transmitted at a high data rate) that it barely qualifies as overhead. In other frames, such as 802.11n beacons, the "overhead" designation is much more applicable.

Guard Intervals — Between 802.11 symbols, there must be a quiet period, called a guard interval, on the medium to allow the previous symbol to "settle." Without this quiet period, a symbol may interfere with the previous symbol (inter-symbol interference). The normal guard interval for Wi-Fi is 800 ns.

Acknowledgments — Since wireless communication is inherently unreliable and lossy, many frame types require acknowledgment. Acks are overhead in and of themselves, and they also require an additional interframe space (i.e. SIFS).

Fragmentation (and small payloads) — While frames with small payloads are not an actual source of overhead, they are often an inefficient use of the medium, making the overhead problem more apparent. A similar problem exists when organizations enable frame fragmentation in an attempt to reduce collisions (smaller frames are less likely to experience interference) in a noisy environment. Fragmentation is not used often in today's networks because it rarely produces a benefit. Instead, it usually adds overhead.
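As an aside, the guard interval translates directly into airtime efficiency. A quick sketch in Python, using the standard 3.2 µs useful OFDM symbol duration and the optional 400 ns short guard interval that 802.11n adds:

```python
# OFDM symbol efficiency with the normal (800 ns) vs. short (400 ns)
# guard interval. The useful part of each symbol is fixed at 3.2 us;
# the guard interval is pure settling time.

DATA_US = 3.2

def symbol_efficiency(gi_us):
    """Fraction of each symbol that carries data rather than guard time."""
    return DATA_US / (DATA_US + gi_us)

long_gi = symbol_efficiency(0.8)    # 3.2 / 4.0 = 80% useful airtime
short_gi = symbol_efficiency(0.4)   # 3.2 / 3.6 ~ 88.9% useful airtime

# Switching to the short GI shortens every symbol, raising the data
# rate by the ratio of the symbol durations:
boost = 4.0 / 3.6 - 1
print(f"Rate boost from short GI: {boost:.1%}")   # ~11%
```

That ~11% is where the commonly quoted "about 10% faster with short GI" figure comes from.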
Protection Mechanisms — Incompatible PHY formats (such as 802.11b and 802.11g) require protection for proper coexistence. Protection is usually achieved via an RTS/CTS exchange or a CTS-to-Self frame prior to the transmission of data. There are other types of protection (for 802.11n) that may be used as well. In addition to using these frame exchanges for protection, there are other times (such as when there are hidden nodes) when they may be enabled to attempt to improve overall network health.

Retransmissions — When a transmitted frame is not received properly by the intended recipient (or not acknowledged), a retransmission may be required. Transmitting a frame more than once is an obvious, and significant, source of overhead. When the frame is queued for retransmission, other sources of overhead (such as random backoff, IFS, etc.) are duplicated as well. Retransmissions represent one of the most problematic sources of overhead on our networks, but they are also one type of overhead that we can influence (with proper design).

Shall we go on? — If we wanted to extend the article, we could break down the upper layers of the protocol stack as well, looking at the overhead inherent in each protocol. However, from the perspective of Wi-Fi (a Layer 1 and 2 protocol), the Layer 3-7 data is considered the payload. Complete reductionists might say that the application data is the only real payload that is not "overhead," and I'd tend to agree. Since we're focused only on Wi-Fi here, I won't take it to that extreme.

Final Comments and Suggestions (FCS)

The Wi-Fi protocol is bloated with overhead. I don't fault the engineers who designed the protocol for that; any radio communication protocol will require some overhead. Broad WLAN adoption and diverse use cases have made the 802.11 protocols very successful, but that success requires a lot of engineering complexity. Complexity requires coordination, and that coordination usually shows up as overhead.
Network engineers should understand the sources of overhead on their networks and, within the limitations of their use case, seek to reduce overhead when possible.
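To see how quickly the per-frame costs described above stack up, here's a rough best-case airtime budget in Python for a single 1500-byte frame on 802.11a/g at 54 Mbps. It assumes no contention from other stations, no retries, and no protection, and uses the standard OFDM timing values (9 µs slot, 16 µs SIFS, 4 µs symbols, CWmin of 15); real networks do worse.

```python
import math

# Best-case airtime budget for one 1500-byte frame at 54 Mbps:
# DIFS + average backoff + PHY signaling + data symbols + SIFS + ACK.

SLOT, SIFS = 9, 16
DIFS = SIFS + 2 * SLOT            # 34 us
AVG_BACKOFF = 7.5 * SLOT          # CWmin = 15 -> mean of 7.5 idle slots
PREAMBLE = 20                     # OFDM preamble + SIGNAL field, in us

def ofdm_airtime(mpdu_bytes, rate_mbps):
    """PHY signaling plus 4 us data symbols for one OFDM frame."""
    bits = 16 + 6 + 8 * mpdu_bytes               # SERVICE + tail + MPDU
    symbols = math.ceil(bits / (rate_mbps * 4))  # bits per 4 us symbol
    return PREAMBLE + 4 * symbols

data = ofdm_airtime(1500 + 28, 54)   # payload plus MAC header and FCS
ack = ofdm_airtime(14, 24)           # 14-byte ACK at a mandatory rate
total = DIFS + AVG_BACKOFF + data + SIFS + ack

throughput = 1500 * 8 / total        # bits per us = Mbps
print(f"Airtime per frame: {total:.1f} us")          # ~393.5 us
print(f"Effective throughput: {throughput:.1f} Mbps")  # ~30.5 Mbps
```

Even in this idealized case, only about 30 of the 54 Mbps survive the fixed costs; add contention, management traffic, and retries, and the "at least 50% loss" figure from this series is easy to believe. It also shows exactly what frame aggregation amortizes: everything in the budget except the data symbols themselves.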

Saturday, November 19, 2011

The approaching wave of wireless - Jerome Henry

So you thought that 802.11n was the ultimate protocol, allowing 300 Mbps, maybe up to 450 Mbps? If you are not working in wireless yet, now is the right time to think about switching careers and getting a few certifications in the 802.11 wireless field: new protocols are coming that will change the game for a long time and make 802.11 THE protocol you want to be an expert on. So get some Cisco wireless training and prepare for the storm to come.

802.11ac is the first big one. This amendment is planned for the end of 2012 and will push wireless speeds in the 5 GHz band beyond the 1 Gbps bar. It also brings very clever enhancements. For example, about 70% of cell traffic flows from the AP to the clients. Knowing that, 802.11ac has mechanisms whereby the AP could use a 160 MHz wide channel and allocate sub-sections of this mega channel to groups of clients, allowing several clients to communicate with the AP at the same time! When 802.11ac comes out, new APs with up to 8 antennas will appear on the market, and your favorite wireless hardware vendor will have a few golden years ahead replacing the old 802.11n APs!

802.11ad does not make a lot of noise, but it is an important amendment, also to be released by the end of 2012. 802.11ad brings the 802.11 protocol to the 57-66 GHz band. Why there? Because this is the range of frequencies your home devices will use to communicate. With 802.11ad, your TVs, hi-fi system, speakers, and any other electronic device in your home will be able to communicate and exchange data. This way, you will be able to watch a movie on one TV, hear the sound wirelessly through your HD speakers, then move the movie to another TV seamlessly. This may look like a geeky accessory, but soon you will see booming demand for 802.11 professionals to install, maintain, and troubleshoot home systems.

802.11ah brings 802.11 below 1 GHz, into the many unlicensed bands available at these low frequencies.
This allows 802.11 to be used at longer range: sending 802.11 signals along highways to provide information to traveling users, or communicating over several miles from a single antenna. Throughput will not be very high (100 kbps is the target), but there are countless businesses that employ regional staff and need to stay in touch with them and send them data. Here again, a big demand for 802.11 professionals will appear as these systems are sold and deployed… worldwide. 802.11ah should come in 2014.

802.11af is a great scavenger amendment. TV signals were analog and became digital; at the same time, they changed frequency. There is a rich collection of low frequencies abandoned by analog TV that are available to whoever wants them… and 802.11af is there to take them! 802.11af should be published some time in 2014. Its exact scope is still changing, but every day sees new possible applications, from internet for rural areas, to monitoring sensors reporting to central stations, to nationwide alert systems… and all that using the good old 802.11 protocol.

All this is exciting! 802.11ac will be the first wave driving sales and demand, and the amendments that follow will deepen the need for wireless expertise. So start your journey and join us in the 802.11 world!

Why am I here??

The primary reason for not creating a blog was: "There is already so much information around; why waste time adding to the pool when you still have so much to discover in the wireless ocean?"

But the reason I have started is that I am pursuing a long-term career in wireless networking, and after 2 years in the core 802.11 wireless field, I definitely have something useful to contribute to the most promising technology of the future.
It's a privileged position to be among the first to help people with Cisco wireless equipment.
I do have first-hand information that I have learned as the protege of greats like Aaron Leonard and Jerome Henry, but most of my posts will be the crucial links and information I have found around while learning the most interesting technology in the world.

Last but not least, I have learned to touch type and can punch in those long stories faster now. :)

I hope it's going to be fun and helpful to everyone around.