Monday, October 19, 2009

White Space Networking with Wi-Fi like Connectivity

This paper describes a MAC protocol for Wi-Fi-like whitespace networking ("WhiteFi"). The primary challenge is to choose, and keep adjusting, channels so as to avoid interfering with the existing users ('incumbents') of the spectrum. To achieve this, a WhiteFi node includes both a scanner driving a software-defined radio (used for access point detection and for detecting incumbents on channels) and a Wi-Fi-like radio.

To assign channels, the authors measure the utilization of each 6MHz frequency range and the number of active access points on each. Based on this, each node estimates the share of bandwidth it can expect and sends these measurements to the access point. The AP chooses a channel by aggregating these measurements (after excluding channels on which any end host, or the AP itself, has seen an incumbent).
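To make the selection step concrete, here is a minimal sketch (mine, not the paper's exact metric) of how an AP might aggregate client reports, assuming each client reports a per-channel estimate of its expected airtime share plus the set of channels on which it saw an incumbent:

    # Hypothetical sketch of WhiteFi-style channel selection at the AP (my
    # simplification, not the paper's exact metric). Assumes each client reports,
    # per 6 MHz channel, the share of airtime it expects (estimated from measured
    # utilization and the number of active APs), plus the channels on which it
    # detected an incumbent.

    def choose_channel(client_reports, ap_incumbents):
        """client_reports: list of (shares, incumbents) pairs, where shares maps
        channel -> expected share and incumbents is a set of blocked channels.
        ap_incumbents: channels on which the AP's scanner saw an incumbent."""
        # A channel is unusable if any end host or the AP saw an incumbent on it.
        blocked = set(ap_incumbents)
        for _, incumbents in client_reports:
            blocked |= incumbents

        candidates = set()
        for shares, _ in client_reports:
            candidates |= set(shares)
        candidates -= blocked

        def aggregate(channel):
            # Sum the share each client expects to get on this channel.
            return sum(shares.get(channel, 0.0) for shares, _ in client_reports)

        return max(candidates, key=aggregate, default=None)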

To allow devices to quickly switch channels when an incumbent is detected, the AP designates a backup channel: the AP listens to it with its scanner radio, and clients fall back to it during quiet periods.

The authors evaluated their active channel detection, AP discovery, and channel switching speed on an (apparently two-node) prototype deployment. They evaluated their channel selection algorithm in simulation and showed that it appeared to perform close to the optimum in the presence of simulated background traffic and spatial variation.

Disappointingly, the authors don't appear to evaluate the effect their channel switching will have on end users. Their only experiment on the real testbed touching this issue (Fig. 14) shows what appears to be a serious throughput penalty when a channel switch must occur, and from the evaluation it is not clear what effect that would likely have on transport protocols.

Wednesday, October 14, 2009

XORs in The Air: Practical Wireless Network Coding

This paper presents a routing and MAC scheme (COPE) that attempts to increase wireless throughput by using a coding technique to send multiple packets at once. When a router has a set of packets to forward to multiple destinations, and each destination already has all but one packet in the set, the router can send the XOR of all the packets in the set and each destination can decode its missing packet. To learn which packets their neighbors hold, routers rely on periodic reception reports and on guessing from overheard packets and the delivery probabilities tracked by routing.
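The coding step itself is just XOR; here is a minimal sketch of encoding and decoding under the paper's invariant that every destination already holds all but one of the coded packets (padding packets to a common length is my own simplification):

    # Minimal sketch of the COPE-style XOR coding step. It assumes the router has
    # confirmed (via reception reports or guessing) that every destination already
    # holds all but one of the packets being coded; padding to a common length is
    # my own simplification.

    def xor_packets(packets):
        """XOR a list of packets (bytes) into one coded packet."""
        length = max(len(p) for p in packets)
        coded = bytearray(length)
        for p in packets:
            for i, byte in enumerate(p):
                coded[i] ^= byte
        return bytes(coded)

    def decode(coded, known_packets):
        """A destination recovers its one missing packet by XORing out the rest."""
        return xor_packets([coded] + list(known_packets))

    # One transmission serves three next hops: each can cancel the two packets it
    # already has and recover the third.
    p1, p2, p3 = b"alpha", b"bravo", b"charlie"
    coded = xor_packets([p1, p2, p3])
    assert decode(coded, [p2, p3]).rstrip(b"\x00") == p1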

The authors evaluated COPE in an in-building ad-hoc 802.11a testbed. With UDP flows between random pairs of hosts (with flow sizes corresponding to those from an Internet file-size sample), they observed approximately double the effective throughput at high levels of network utilization and very little improvement at low levels of network utilization.

With TCP flows in the same testbed, performance was considerably worse. The authors attributed this to TCP not recovering from collision losses caused by hidden terminals, losses that MAC-layer retries often could not repair; they achieved roughly a 40% improvement when they artificially eliminated hidden terminals (by placing them within carrier-sense range of all other nodes but artificially restricting the offered paths) and sufficiently loaded the network. (The authors went to some extra effort to make TCP flows work well, for example by avoiding packet reordering, though they did not evaluate the effect of this improvement.)

Though random pair connection patterns ordinarily seem like a good way to stress the network, in this case they likely make the coding scheme look better than it would be in many practical situations. COPE performs especially well when flows cross each other on the network, but many common applications are asymmetric (e.g., web-browsing-like patterns would have most traffic originating from a small number of gateways, with relatively little traffic in the opposite direction to code against).

ExOR: Opportunistic Multi-Hop Routing for Wireless Networks

ExOR is a routing protocol that attempts to exploit 'lucky' deliveries of packets across multiple lossy (and hopefully uncorrelated) hops from each router on the path. Each batch of packets is broadcast with an ordered list of next hops. When a router receives a packet, it waits an amount of time determined by its position in the priority list and the sending rate it observes, then forwards packets it did not hear forwarded by nodes higher in the priority list. The nodes on the ordered list are sorted by each node's ETX to the destination and pruned to those that are used often enough in a simulation of ExOR. ExOR only attempts to deliver 90% of the packets in a batch, on the assumption that the last packets in a batch will be more expensive to send due to a higher likelihood of collisions (since there is not enough information to schedule their transmissions well). The remaining packets are expected to be delivered by traditional hop-by-hop routing (though the authors do not appear to have done this in their evaluation).
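Here is a rough sketch (my own illustration, not the paper's exact rules) of the per-node forwarding decision, assuming each node has a priority-ordered forwarder list and a 'batch map' recording the highest-priority node known to hold each packet:

    # Rough sketch (my own illustration) of the per-node forwarding decision in
    # ExOR. Assumes the batch header carries a priority-ordered forwarder list
    # (lowest ETX to the destination first) and a "batch map" naming, for each
    # packet, the highest-priority node known to have received it. The deferral
    # timer below is a placeholder, not the paper's actual schedule.

    def packets_to_forward(my_id, forwarder_list, received, batch_map):
        """received: set of packet ids this node heard in the batch.
        batch_map: packet id -> highest-priority node believed to hold it."""
        my_rank = forwarder_list.index(my_id)
        to_send = set()
        for pkt in received:
            holder = batch_map.get(pkt)
            # Send only if no node higher in the priority list is known to have it.
            if holder is None or forwarder_list.index(holder) >= my_rank:
                to_send.add(pkt)
        return to_send

    def deferral_delay(my_rank, est_slot_time):
        # Wait long enough for all higher-priority forwarders to transmit first
        # (ExOR estimates this from overheard traffic; this formula is a stand-in).
        return my_rank * est_slot_time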

The authors evaluated ExOR on RoofNet, comparing its throughput to RoofNet's traditional routing (presumably with the ETX metric). They sent 10% more traffic when testing ExOR to compensate for its reduced delivery rate. Even accounting for this extra traffic, the authors found substantially better throughput than traditional routing for multi-hop paths, and especially for the routes that were poorest under traditional routing.

Based on the evaluation, it appears that this protocol targets bulk transfers. The authors did not evaluate latency, and they appear to expect users to run a transport protocol other than TCP, since this technique will certainly introduce jitter, out-of-order delivery, and/or losses. The authors implemented a web proxy that uses ExOR only for transfers of 100KB or more, but unfortunately the paper does not evaluate that proxy's actual overhead.

Wednesday, October 7, 2009

A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols

There are several choices to be made in constructing ad-hoc routing protocols, several of which are explored by the protocols in this paper:
  • route construction time — on demand (AODV, DSR, TORA) or always (DSDV)
  • loop avoidance mechanism — sequence numbers (DSDV, AODV; sketched below), source routing (DSR), topological ordering plus dropping packets during link reversals, when temporary routing loops can occur (TORA)
  • route storage — next hop table (DSDV, AODV), topological ordering (“height”) (TORA), at source/on data packets (DSR)
Additionally, these algorithms need to choose various timeouts, rates, and delays (to avoid accidental synchronization) for link-state and route-discovery messages, as well as route propagation policies.
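For concreteness, the sketch below shows the sequence-number rule behind the DSDV-style loop avoidance listed above (field names are mine, and DSDV's damping of fluctuating routes is omitted):

    # Tiny sketch of the sequence-number rule behind DSDV-style loop avoidance
    # (AODV uses per-destination sequence numbers similarly). Field names are
    # mine, and DSDV's damping of fluctuating routes is omitted.

    def should_update(current, advertised):
        """current/advertised: route entries for one destination, as dicts with
        'seq' (destination sequence number) and 'metric' (hop count)."""
        if current is None:
            return True
        if advertised["seq"] > current["seq"]:
            return True                              # strictly fresher route wins
        if advertised["seq"] == current["seq"]:
            return advertised["metric"] < current["metric"]  # same freshness, shorter path
        return False                                 # stale routes can reintroduce loops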

The authors of this paper implemented DSDV (with two different route propagation policies), AODV (with explicit beacons and with on-send loss detection), DSR, and TORA in a detailed simulation of an 802.11 network with RTS/CTS. They chose several simulation scenarios in which nodes move randomly in the simulated space with varying degrees of mobility (both in speed and in how often they move). They used several constant-rate packet sources (among 10, 20, or 30 random pairs of hosts) with packet sizes small enough to usually avoid congestion losses. Given these scenarios, they measured the delivery rate, routing overhead (in bytes and packets), and route length versus the optimal path.

They found that routing protocol efficiency depended greatly on the degree of mobility: faster, more frequent movement resulted in substantially larger overheads for all protocols but DSDV. Except for TORA at high packet rates (where it suffered a routing-protocol congestion collapse) and DSDV under high mobility (where its periodic updates were not fast enough), performance was good overall: more than 95% of packets were delivered and most packets took the shortest route. In terms of routing overhead, the results were unsurprising (except for TORA, which seems clearly broken): the on-demand routing protocols (other than TORA) did substantially better, and DSR seemed to trade increased byte overhead for decreased packet overhead.

It's good to see a careful simulation of these various routing protocols under enough circumstances that it should be difficult to claim that an advantageous situation has been missed. Of course, the downside of a big sensitivity analysis like this is that reading the paper, I can't really tell how the choice of protocol would matter for a realistic application of an ad-hoc network. Except for the obviously bad cases (e.g. most of TORA, DSDV with very high mobility), would these differences (in overheads, delivery rates) even have a substantial effect on application performance?

A High-Throughput Path Metric for Multi-Hop Wireless Routing

This paper describes and evaluates the routing metric that was ultimately used in RoofNet, based on a deployment in an office environment. The metric assumes hop-by-hop acknowledgments and retransmissions for unicast traffic. For each hop, the expected number of transmissions required to successfully send a packet across that hop is computed from the success rate of broadcast probes (which are not retransmitted) in both directions. The authors believe that this metric will tend to find higher-bandwidth paths better than merely counting hops and will lower the overall load on the network. The authors recognize that their scheme assumes variable transmission power would not be very advantageous (which seems strange given that their radios can definitely vary their power). Their scheme also seems specialized for a half-duplex environment (common in wireless networks of this sort) where a forwarded transmission interferes with the original sender, so 'pipelining' is not possible.
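Concretely, the per-link formula is the paper's ETX = 1/(df * dr), where df and dr are the measured forward and reverse delivery ratios of broadcast probes, and a route's metric is the sum of its links' ETX values; the bookkeeping around it below is my own sketch:

    # Sketch of the ETX computation. The per-link formula is the paper's:
    # ETX = 1 / (d_f * d_r), where d_f and d_r are the delivery ratios of broadcast
    # probes measured in the forward and reverse directions (the data packet and
    # its link-layer ACK must both get through). The surrounding bookkeeping and
    # names are mine.

    def delivery_ratio(probes_received, probes_expected):
        return probes_received / probes_expected if probes_expected else 0.0

    def link_etx(d_forward, d_reverse):
        if d_forward == 0.0 or d_reverse == 0.0:
            return float("inf")                      # link currently unusable
        return 1.0 / (d_forward * d_reverse)

    def path_etx(links):
        """A route's metric is the sum of its links' expected transmission counts."""
        return sum(link_etx(df, dr) for df, dr in links)

    # e.g. two hops with delivery ratios (0.9, 0.8) and (0.7, 0.95):
    # path_etx([(0.9, 0.8), (0.7, 0.95)]) ~= 1.39 + 1.50 ~= 2.89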

The authors evaluated the metric on an 802.11b network deployed in offices in their lab. They measured hop-by-hop loss rates and computed a baseline 'best' set of route throughputs (assuming that their model of interference along a multi-hop route, based on loss rates, was accurate). They compared these throughputs to the actual throughputs obtained using DSDV and DSR (modified to avoid instabilities), with both the standard hop-count metric and the transmission-count metric.

Not surprisingly, the authors observed improved performance with their metric. They identified some non-obvious sources of the improvement that could be applied independently in DSDV and DSR: avoiding asymmetric links (which can be done by 'handshaking' before considering a link usable) and usually avoiding routes that experience transmission errors (which DSR does when given link-layer feedback).

The authors also evaluated the accuracy of their link quality measurements. They found that significant bias was introduced by the packet sizes of their probes — larger packets were received significantly less often than smaller ones, but their metric assumed all packets were equal.

Monday, October 5, 2009

Architecture and Evaluation of an Unplanned 802.11b Mesh Network

This paper evaluates the performance of a mesh network deployed in Cambridge, Massachusetts. The network's access points were placed wherever was convenient for the 'community' participants, and routing between them used a link-state protocol. Using broadcast probes, the loss rate of every link was measured at each of the 802.11b-supported bit rates (also taking into account the loss rate of ACKs on the reverse path). From this information, routes (including the 802.11b bit rate used at each hop) are chosen to maximize end-to-end throughput across the network. Experimental evidence showed that the bandwidth predictions from the probes slightly overestimated multi-hop routes' throughput due to interference.
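Here is a sketch of that route-selection model, under the simplifying assumption (roughly the paper's) that every hop of a route interferes with every other hop, so end-to-end throughput is the inverse of the sum of per-hop transmission times; treating a hop's usable throughput as bitrate times the delivery probabilities of data and ACK is my own shorthand:

    # Sketch of the route-selection model described above. It assumes (roughly as
    # the paper does) that every hop of a multi-hop route interferes with every
    # other hop, so the route's end-to-end throughput is the inverse of the sum of
    # the hops' per-bit transmission times. Modeling a hop's usable throughput as
    # bitrate * P(data delivered) * P(ACK delivered) is my own shorthand for the
    # probe-derived loss rates.

    def hop_throughput(bitrate_mbps, p_data, p_ack):
        return bitrate_mbps * p_data * p_ack

    def route_throughput(hops):
        """hops: list of (bitrate_mbps, p_data, p_ack), one tuple per hop, with the
        802.11b bitrate already chosen per hop from the probe measurements."""
        per_hop = [hop_throughput(*h) for h in hops]
        if any(t == 0.0 for t in per_hop):
            return 0.0
        return 1.0 / sum(1.0 / t for t in per_hop)

    # Routes (including the per-hop bitrate) are then chosen to maximize this value.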

The authors showed that this network did, in fact, benefit in both connectivity and bandwidth from making extensive use of multi-hop paths, because using several shorter links provided much better throughput (for TCP connections) than the (typically longer) paths with a lower hop count. The vast majority of nodes in the network had more than two neighbors in the routed network. Though link quality (in terms of achievable throughput) was correlated with physical distance, there were plenty of relatively short links with low quality and some highly beneficial long links with high quality.

The authors also notably evaluated 802.11b RTS/CTS in their environment. Even though they had observed self-interference when a flow spanned multiple hops, enabling RTS/CTS did not improve throughput.

As a case for creating wireless mesh networks without planning, this paper's results are disappointing. In the multi-hop TCP test, 10% of node pairs could not communicate (despite the nodes of the network being physically stationary), multi-hop routes were much slower than single-hop routes (even considering the penalty of switching off), and adding nodes at random appeared (from simulations) to have seriously diminishing returns.

Modeling Wireless Links for Transport Protocols

This paper characterizes several types of wireless links regarding how they may effect wireless links. The paper seems to be primarily motivated by poor models used for wireless links in prior work. Some areas for serious modelling errors:
  • losses — most link-layer protocols use forward error correction and retransmission with a limit on retries, so loss rates due to errors aren't very high (as often assumed) except in very poor radio conditions. More common, the paper claims, are bursty losses from handovers.
  • delay variation — contrary to the default of many simulators, delay jitter is frequently high due to error recovery and handovers, and link scheduling (e.g. in cellular links) can cause delay spikes
  • duplexing — many wireless links are not full duplex
  • traffic models — evaluations should consider multiple flows, and flows that traverse more than just the wireless link
The paper gives suggestions on how to model these and some other effects (e.g. queue management, resource allocation, bandwidth variation, asymmetry) and provides some examples of how varying these model parameters can drastically affect the performance of transport protocols. The paper's value does not seem to lie so much in its evaluation of model differences (though it shows some non-obvious results: e.g., buffer size can have a substantial effect on transport protocol behavior) as in providing a checklist of mostly simple mistakes to avoid.
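As an illustration of the kind of link model the paper argues for (bursty, handover-correlated losses and occasional delay spikes rather than independent per-packet loss), here is a toy per-packet model; every constant in it is invented for illustration:

    import random

    # Toy per-packet link model of the kind the paper argues for: losses are rare
    # and bursty (e.g. around handovers) rather than independent per packet, and
    # delay occasionally spikes because of link-layer recovery or scheduling.
    # Every constant below is invented for illustration, not taken from the paper.

    def link_events(n_packets, base_delay_ms=50.0):
        """Yield (lost, delay_ms) for each packet offered to the link."""
        in_handover = False
        for _ in range(n_packets):
            if not in_handover and random.random() < 0.001:
                in_handover = True                     # rare handover begins...
            elif in_handover and random.random() < 0.2:
                in_handover = False                    # ...and persists for a burst
            lost = in_handover                         # bursty loss, not a coin flip
            delay_ms = base_delay_ms
            if random.random() < 0.01:
                delay_ms += random.uniform(200, 1000)  # occasional delay spike
            yield lost, delay_ms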