Posted: Sat Jun 10, 2006 12:38 Post subject: Why is a long preamble the default and what does auto do?
I read up on what exactly the preamble in (the physical layer of) wifi is. I found this:
In an 802.11 network, the radio transmitter adds a 144-bit preamble to each packet, including 128 bits that the receiver uses to synchronize the receiver with the transmitter and a 16-bit start-of-frame field. This is followed by a 48-bit header that contains information about the data transfer speed, the length of the data contained in the packet, and an error-checking sequence. This header is called the PHY preamble because it controls the Physical layer of the communications link. Because the header specifies the speed of the data that follows it, the preamble and the header are always transmitted at 1 Mbps.
Therefore, even if a network link is operating at the full 11 Mbps, the effective data transfer speed is considerably slower. In practice, the best you can expect is about 85 percent of the nominal speed. And, of course, the other types of overhead in the data packets reduce the actual speed even more.
That 144-bit preamble is a holdover from the older and slower DSSS systems, and it has stayed in the specification to ensure that 802.11b devices will still be compatible with the older standards, but it really doesn’t accomplish anything useful. So there’s an optional alternative that uses a shorter, 72-bit preamble. In a short preamble, the synchronization field has 56 bits combined with the same 16-bit start-of-frame field used in long preambles. The 72-bit preamble is not compatible with old 802.11 hardware, but that doesn’t matter as long as all the nodes in a network can recognize the short preamble format. In all other respects, a short preamble works just as well as a long one. It takes the network a maximum of 192 microseconds to handle a long preamble, compared to 96 microseconds for a short preamble.
In other words, the short preamble cuts the overhead on each packet in half. This makes a significant difference to the actual data throughput, especially for things like streaming audio and video and voice-over-Internet services. Some manufacturers use the long preamble as the default, and others use the short preamble. It’s usually possible to change the preamble length in the configuration software for network adapters and access points.
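Those 192 µs and 96 µs figures can be reproduced from the bit counts in the quote, plus one detail the quote doesn't state (an assumption on my part, from the 802.11b spec): with a short preamble the 48-bit PLCP header is sent at 2 Mbps instead of 1 Mbps. A quick sketch:

```python
# Per-frame PHY overhead for 802.11b long vs. short preambles.
# Bit counts come from the quoted text; the 2 Mbps short-preamble
# header rate is an assumption taken from the 802.11b spec.

def preamble_time_us(sync_bits, sfd_bits, header_bits, header_mbps):
    """Time spent on sync + start-of-frame + PLCP header, in microseconds."""
    preamble_us = (sync_bits + sfd_bits) / 1.0  # preamble always at 1 Mbps
    header_us = header_bits / header_mbps
    return preamble_us + header_us

long_us = preamble_time_us(128, 16, 48, 1.0)   # 144 + 48 = 192 us
short_us = preamble_time_us(56, 16, 48, 2.0)   # 72 + 24  =  96 us
print(long_us, short_us)                       # 192.0 96.0
```

So the "cut in half" claim falls out of the arithmetic: half the sync bits, and a header sent at double the rate.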
If I read all of this correctly, long preambles are obsolete and only needed if you are in an environment that requires you to do everything possible to ensure maximum stability of the connection. This would not be the case in most situations.
Also, I ran some tests myself. This is in an environment where connections are not easy: the distance is quite large, on one end I use a 10 dBi antenna and 75 mW output, and on the clients just the small stock rubber antennas.
With the default settings, connections were good: high speeds and almost no errors (the indicator reads 100% correct). This is with ACK set to 0; otherwise I get a lot more errors.
Now that I have set preamble to short, nothing has changed, meaning that my connections are still rock stable. When reading the information, I must assume that the overhead on the connection is reduced now, so that my effective transfer rate has gone up.
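For a rough idea of what the halved overhead buys, here is a back-of-the-envelope efficiency estimate for a 1500-byte frame at 11 Mbps. It ignores MAC headers, ACKs and inter-frame spacing (my simplification, not something measured in these tests), so treat the numbers as optimistic upper bounds:

```python
# Rough PHY-level efficiency: payload airtime divided by payload
# airtime plus preamble/header overhead. Hypothetical best case that
# ignores MAC overhead, ACKs and inter-frame spacing.

def efficiency(payload_bytes, rate_mbps, overhead_us):
    payload_us = payload_bytes * 8 / rate_mbps
    return payload_us / (payload_us + overhead_us)

for name, overhead_us in (("long", 192), ("short", 96)):
    print(name, round(efficiency(1500, 11.0, overhead_us), 3))
# long  -> ~0.85 of the nominal rate (matches the "85 percent" quoted above)
# short -> ~0.92
```

The gain per frame is only a few percent at full packet size; it matters more for the small, frequent packets of VoIP and streaming, where the fixed preamble is a larger fraction of each frame.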
Seems like a win-win situation: nothing lost, speed gained.
Should therefore not the default setting in DD-WRT be short preamble instead of long?
IIRC, this short preamble feature is part of the 11g standard, so it is only guaranteed to work with G devices.
It's not part of the 11b standard, though, so on B hardware it is implemented only on a good-will basis, independent of how old your hardware actually is.
IIRC too, there have been firmware releases that defaulted the preamble to "short" or "auto", and this caused lots of trouble afterwards (the setting is well hidden, so an average user wouldn't immediately find it and identify it as the root of the problems she sees).
Thus I vote for keeping "long" as the standard, and leaving it to the user to change her setup, provided she understands what the change will result in.
The same applies to overclocking - it might be safe for most, but as long as it causes trouble for at least one user (located in the tropical belt or Central Australia), better to stick with the standard of 200/216 - and the same goes for TCP connection counts.
Well, as for failsafe defaults, I don't know. I think there is a big difference between overclocking for default and this.
Overclocking as a default setting would be ridiculous. But as far as I can tell, most if not all B devices and all G devices support short preamble. So for those few who would run into trouble, we could always make an entry in the help, wiki, etc.
Until just recently, I didn't know what the difference was myself.
I vote to keep as the default the method that works with 90% of the cards out there.
It took me a long time to get into Linux because I had problems with my wireless cards. If I had flashed DD-WRT and it hadn't worked with my old wireless card (RT8180 chipset, dunno if it would work or not), I would have been burned and probably flashed back to Linksys, writing off DD-WRT as a "that would have been nice" and never trying it again.
It needs maximum compatibility out of the box.