Research Topics:
- Distributed multiuser MIMO in enterprise WiFi networks
- Mobility-assisted communication: Efficient routing in intermittently connected networks, e.g., mobile ad hoc, delay-tolerant and disruption-tolerant networks
Delay-tolerant and disruption-tolerant networks (sometimes also referred to as intermittently connected mobile networks) are networks in which, most of the time, no complete end-to-end path exists from the source to the destination. Even if such a path exists, it may be highly unstable because of topology changes due to mobility, and may change or break soon after it has been discovered. This situation arises when the network is quite sparse, in which case it can be viewed as a set of disconnected, time-varying clusters of nodes. Examples of such networks include vehicular ad hoc networks, sensor networks for wildlife tracking and habitat monitoring, military networks, deep-space interplanetary networks, nomadic community networks, networks of mobile robots, and underwater networks.
[Figure: Mobility-assisted, or store-carry-and-forward, model of routing.]
Traditional mobile ad hoc routing protocols fail in these networks because they require a complete end-to-end path in order to deliver any data. To overcome this issue, mobility-assisted routing schemes have been proposed in which a mobile node stores and carries a message until an appropriate communication opportunity arises. We have studied a range of representative mobility-assisted routing strategies, including randomized strategies, utility-based strategies, schemes using multiple message copies, and hybrid approaches. We have proposed an analytical framework that allows us to study and compare all these schemes in a realistic network. Key features of our framework include the use of realistic mobility models, which allow nodes to behave differently from one another and to preferentially move within some local areas, and a complete analysis of the network contention caused by limited bandwidth and interference.
Our studies show that flooding-based schemes result in severe contention, whereas using a single copy per message yields large delays. In contrast, we find that distributing a small, fixed number of copies, and then routing each copy independently, yields surprisingly good performance under a wide range of scenarios. Clearly, to achieve the desired performance, one needs to intelligently decide how many copies to distribute, how to choose good relay nodes for the copies, and how to efficiently route each copy independently using the information locally available to each node.
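To make the fixed-copy idea concrete, here is a minimal sketch of a binary "spray" phase: the source starts with L copy tokens and hands half of its tokens to each token-less node it meets, and any token holder delivers the message when it meets the destination. The function name and the contact-trace representation are our own illustrative choices, not the published schemes or framework.

```python
def binary_spray_and_wait(meetings, src, dst, L):
    """Sketch of fixed-copy routing: the source starts with L copy tokens;
    whenever a node holding more than one token meets a node with none, it
    hands over half of its tokens; a node holding any token delivers the
    message when it meets the destination.

    meetings: chronological list of (node_a, node_b) contact events
    Returns the index of the meeting at which delivery occurs, or None."""
    tokens = {src: L}
    for t, (a, b) in enumerate(meetings):
        # delivery: a copy holder meets the destination
        if dst in (a, b) and (tokens.get(a, 0) or tokens.get(b, 0)):
            return t
        # spray: split tokens with a node that holds none
        for giver, taker in ((a, b), (b, a)):
            if tokens.get(giver, 0) > 1 and taker not in tokens:
                half = tokens[giver] // 2
                tokens[giver] -= half
                tokens[taker] = half
    return None
```

Nodes left with a single token simply wait for the destination, which is what bounds contention relative to flooding while keeping delay far below single-copy routing.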
Related Publications
Recent funding sources:
National Science Foundation - NeTS
METRANS Transportation Center
Zumberge Foundation
- Scheduling, rate and congestion control for multi-hop wireless networks, e.g., mesh and sensor networks
[Figure: Congestion is a neighborhood, not a single link affair.]
We are interested in the fundamental performance limits of static multi-hop wireless (mesh or sensor) networks, an emerging networking architecture that provides community networking, distributed sensing, support for medical applications, and Internet access in locations like airports, convention centers, education facilities, and hospitals. Mesh networks pose a challenge that does not exist in traditional wireline networks like the Internet: neighboring links interfere with each other in complex ways that have detrimental effects on performance. One way to address this issue is to design medium access schemes that carefully schedule the various competing links. But it is well known that such schedulers require a lot of computation and a centralized implementation. As a result, random access schedulers have become the de facto standard used in all deployments.
Achieving efficiency over random access schedulers requires the transport protocol to be aware of the existing complex interference. With this in mind, we have argued for the need to design clean-slate mechanisms for transporting data that treat congestion and rate allocation as a neighborhood, not a single link, affair. Specifically, we have designed, analyzed, and implemented a practical, fully distributed congestion and rate control scheme that assigns fair and efficient rates to flows traversing multi-hop networks with tree-like topologies. Further, we have worked on the problem of neighborhood-centric congestion and rate control for arbitrary mesh topologies and related the new designs to fundamental transport approaches and principles. In particular, we have designed, analyzed, and implemented both AIMD-based schemes and explicit rate notification schemes, which are fair, efficient, and amenable to implementation.
Practically speaking, it is unlikely that de facto standard protocols like 802.11 and TCP will be replaced or modified. With this in mind, we have also focused on how to achieve the benefits of neighborhood-centric congestion and rate control without modifying any part of today's network protocol stack. Our general approach is, as before, to find the resource-sharing relationships of the wireless channel among wireless links, and then to explicitly allocate the channel resource over those links. However, the additional constraint of requiring no changes to existing technologies and protocols leads us to explore coarse-grained airtime allocation rather than fine-grained allocation, which would require modifications to several existing components. Specifically, for a given link, our technique first finds its wireless neighborhood, the set of links whose transmissions would interfere with the given link. Then, it proactively and cooperatively allocates the total airtime available within the wireless neighborhood. Each link is assigned an airtime limit proportional to the number of TCP flows traversing it, and performs coarse-grained scheduling. Our preliminary evaluation results show that contention-aware explicit airtime-limiting can significantly improve TCP throughput and fairness in multi-hop wireless mesh networks. Our design can be implemented as a retrofit: as already mentioned, it requires no modification to either TCP or the MAC layer.
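The proportional airtime assignment can be sketched as follows. This is an illustrative reconstruction, not the actual implementation: the neighborhood structure is assumed to be given, and the function and variable names are our own.

```python
def airtime_limits(neighborhoods, tcp_flows):
    """For each interference neighborhood (a set of links that share the
    channel), split the neighborhood's airtime among its links in
    proportion to the number of TCP flows each link carries.

    neighborhoods: dict mapping a neighborhood id to a list of link ids
    tcp_flows:     dict mapping a link id to its TCP flow count
    Returns a dict mapping each link id to its airtime share in [0, 1]."""
    limits = {}
    for hood, links in neighborhoods.items():
        total_flows = sum(tcp_flows[link] for link in links)
        for link in links:
            # each link's limit is proportional to its flow count
            limits[link] = tcp_flows[link] / total_flows if total_flows else 0.0
    return limits
```

A link that belongs to several overlapping neighborhoods would have to respect the most restrictive share it is assigned; we omit that bookkeeping here for brevity.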
[Figure: TCP vs. our rate controllers, WCP and WCPCap, over 802.11.]
Even with the best rate allocation scheme, one cannot exceed the performance limits imposed by interference and scheduling inefficiencies. A central question in the study of multi-hop wireless networks is therefore the following: given an arbitrary multi-hop topology and a collection of source-destination pairs, what is its achievable rate region? Prior work has considered this problem assuming TDMA scheduling. However, the MAC protocol used in all the deployments is IEEE 802.11, the de facto random access standard. We have introduced an analytical methodology that gives a formal characterization of the achievable rate region of 802.11. Interestingly, under commonly used topologies, the performance loss of 802.11 relative to perfect (collision-free) schedulers is surprisingly low.
Related Publications
Recent funding sources:
National Science Foundation - NeTS
Cisco Systems
QualNet Network Simulator
- Using lower layer techniques, e.g., directional antennas, multiple packet reception, and a low-SNR channel for control plane information, to improve the performance of multi-hop wireless networks
- Network downscaling: Enabling scalable network performance prediction and analysis
Monitoring the performance of a large system like a popular web server farm or the Internet, and predicting its behavior under novel algorithms, protocols, architectures and load conditions are important research problems. Despite their importance, there are currently no good solutions to these problems. This is not because researchers and industry have ignored them. On the contrary, there has been a lot of related research and industrial products. The problems of measurement and analysis of systems like the Internet are hard because of the systems' overwhelming size, complexity, heterogeneity, and speed of operation. To sidestep these issues, we design scalable, data-driven methods for performance prediction of large systems. First, the overall complexity of the system is reduced by using a sample of the actual traffic to drive a small-scale replica of the original system. Then, the methods attempt to exploit scaling laws to extrapolate from the performance of the scaled system to that of the original. Simulations and theory are used to establish their usefulness.
[Figure: Physical downscaling and performance extrapolation.]
We have successfully scaled down the speed of an IP network which is shared by realistic, TCP-like traffic. In particular, we propose methods to sample arriving traffic and slow down the network in a manner that leaves performance measures such as end-to-end flow delays, queueing delays and drop probabilities virtually unchanged. Further, we have successfully downscaled the topology of Internet-like networks as follows: We first create a suitably scaled replica of the original network topology consisting of congested links only. Then, we feed this smaller network with a sample of the original traffic, introduce a number of fixed delays to each packet to take into account the uncongested links, observe the behavior of the network, and extrapolate from its performance to that of the original system. We show that simulating the scaled replica can be up to two orders of magnitude faster than simulating the original network, while the accuracy of performance prediction remains quite high.
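The core sampling step can be illustrated with a minimal sketch: keep each flow independently with probability p and scale every link capacity by the same factor, so that the load per unit of capacity on each link stays roughly the same. The function name and the flow/capacity representation below are our own illustrative choices, a cartoon of the method rather than the method itself.

```python
import random

def downscale(flows, capacities, p, seed=0):
    """Sketch of traffic and network downscaling: sample a fraction p of
    the flows and scale every link capacity by p, preserving the ratio of
    offered load to capacity on each link (and hence, approximately, the
    queueing behavior).

    flows:      list of flow identifiers (or records)
    capacities: dict mapping a link id to its capacity
    Returns (sampled_flows, scaled_capacities)."""
    rng = random.Random(seed)  # seeded for reproducibility
    sampled = [f for f in flows if rng.random() < p]
    scaled = {link: cap * p for link, cap in capacities.items()}
    return sampled, scaled
```

In the actual scheme, fixed per-packet delays are then added to account for the uncongested links that were removed from the replica.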
We are currently working on extending this idea to wireless networks. Further, we are working on establishing the validity of the downscaling approach via formal proofs that make realistic assumptions about the network traffic and the topology structure. A key component of our analysis lies in studying the behavior of queues under a large number of TCP-flows, and determining when such queues impose insignificant queueing delays to packets, and hence could be ignored.
Related Publications
- Modeling the performance and incentive schemes for peer-to-peer networks
[Figure: Performance improvement from the incentive scheme. The horizontal blue line shows the download rate and the horizontal red line shows the download delay of the system without the incentive scheme.]
Peer-to-peer networks have drawn a lot of attention from both academia and industry. This is no surprise given that they contribute more than half of the Internet's traffic. The performance of peer-to-peer systems depends on the level of cooperation of the system's participants. While most existing peer-to-peer architectures have assumed that users are generally cooperative, there is strong evidence from widely deployed systems suggesting the opposite. To date, many schemes have been proposed to alleviate this problem. However, the majority of these schemes are either too complex to use in practice or do not provide strong enough incentives for cooperation. What is more, these schemes have mostly been validated via simulations, since large-scale experimentation is expensive and mathematical analysis is quite hard. In this research effort we propose a theoretical framework to study the performance of peer-to-peer networks under a family of incentive schemes. We propose efficient practical schemes that reduce query response times and file download delays by an order of magnitude, and double the system's throughput. Using our framework, we derive practical formulas that practitioners can use to fine-tune their peer-to-peer system and achieve a desired performance.
Related Publications
- Network flow and congestion control for wired, Internet-like networks
Despite the increasing speeds at the core of the Internet, congestion is still a troubling issue for end-users. Yet these increasing speeds mean that any congestion control scheme has to be very simple to be usable in practice, since the operations that routers can perform on each packet are quite limited. Randomized schemes, if properly designed, can achieve good performance at low implementation cost. We present here two such ideas. First, we have proposed an approximately fair, stateless active queue management scheme for providing a fair bandwidth allocation to flows that share the outgoing link of a congested router. In particular, we have devised a simple packet dropping scheme, called CHOKe, that discriminates against the flows that submit more packets per second than their fair share allows. Since it is stateless and easy to implement, CHOKe controls unresponsive or misbehaving flows with minimal overhead. Second, we have introduced a low-complexity scheduler that identifies long and short flows, and uses a low- and a high-priority queue to provide differentiated service between these two groups of flows. Due to the heavy-tailed nature of Internet flow sizes, such a scheme leads to tremendous improvements in the average flow delay, without noticeably hurting the very large flows. (These research efforts are the result of collaboration with researchers from the Electrical Engineering department at Stanford University.)
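The arrival-time test at the heart of CHOKe can be sketched in a few lines: compare the arriving packet with one drawn at random from the queue, and drop both if they belong to the same flow, so heavy flows are penalized in proportion to their queue occupancy. This is a simplified sketch; in the full scheme the comparison is only performed when a RED-style average queue length exceeds a threshold, which we omit here.

```python
from collections import namedtuple

Packet = namedtuple("Packet", ["flow", "payload"])

def choke_enqueue(queue, pkt, capacity, rng=None):
    """Simplified CHOKe arrival test on a FIFO queue (a plain list).

    Returns True if the arriving packet was enqueued, False if dropped."""
    import random
    rng = rng or random
    if len(queue) >= capacity:
        return False                     # buffer full: drop the arrival
    if queue:
        victim = rng.randrange(len(queue))
        if queue[victim].flow == pkt.flow:
            del queue[victim]            # drop the matched queued packet...
            return False                 # ...and the arriving one
    queue.append(pkt)
    return True
```

Because a flow occupying a large share of the buffer is more likely to match the random draw, the double drop needs no per-flow state to throttle unresponsive flows.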
Related Publications
- Modeling spatially-correlated sensor network data
The physical phenomena monitored by sensor networks usually yield sensed data that are strongly correlated in space. With this in mind, researchers have designed a large number of sensor network protocols and algorithms that attempt to exploit such correlations. To carefully study the performance of these algorithms and get guidelines for designing more efficient algorithms, we derive a simple and accurate model of spatially correlated sensor network data. The model can capture correlation in data irrespective of the node density, the number of source nodes or the topology. We describe a mathematical procedure to extract the model parameters from real traces and generate synthetic traces using these parameters. Then, we validate our model by statistically comparing synthetic data and experimental data, as well as by comparing the performance of various algorithms whose performance depends on the degree of spatial correlation. Finally, we create a tool that can be easily used by researchers to synthetically generate traces of any size and degree of correlation.
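To illustrate what spatially correlated synthetic data looks like, here is a minimal one-dimensional sketch in which the correlation between two readings decays exponentially with the distance between the sensors. This is a simple Gauss-Markov construction of our own for illustration; it is not the model or tool described above.

```python
import math
import random

def correlated_readings(positions, sigma, corr_len, seed=0):
    """Generate readings at sorted 1-D sensor positions such that the
    correlation between neighbors is exp(-distance / corr_len).

    Returns a list of (position, value) pairs."""
    rng = random.Random(seed)  # seeded for reproducibility
    readings = []
    prev_pos, prev_val = None, 0.0
    for x in sorted(positions):
        if prev_pos is None:
            val = rng.gauss(0.0, sigma)
        else:
            rho = math.exp(-(x - prev_pos) / corr_len)
            # AR(1)-style update keeps the marginal variance at sigma**2
            val = rho * prev_val + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, sigma)
        readings.append((x, val))
        prev_pos, prev_val = x, val
    return readings
```

With a large correlation length nearby sensors report nearly identical values, while a small one makes them nearly independent, which is the knob such a trace generator exposes.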
Related Publications
Download Tools
- Web caching, web performance and information management on the web
The enormous popularity of the World Wide Web in recent years has caused a tremendous increase in network traffic due to HTTP requests. We explore ways to improve the web's performance via efficient web caching, accurate prediction of web request streams, and techniques that reduce the amount of data that needs to be communicated from web servers to web clients.
Along these lines, we propose randomized algorithms for approximating complex web-cache replacement schemes, thereby avoiding the need for complex data structures. This way we benefit from sophisticated replacement policies without maintaining expensive data structures. Further, we obtain a simple and accurate model for sequences of web requests. The model we propose precisely captures the degree to which time correlations and document popularity influence web trace requests. An important by-product of our model is the insight it provides for designing efficient replacement and pre-fetching algorithms. Finally, we propose the use of delta-encoding, a mechanism that identifies the differences between files, to compress the differences between consecutive versions of dynamic web pages. The idea is to exploit the temporal correlation among different snapshots of a dynamic document, and to send from the web server to the web client only the differences between the most recent version and the version the client has downloaded in the past.
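The delta-encoding idea can be sketched with the standard library's diff machinery: the server computes a delta of "copy these line ranges from the old version, insert these new lines", and the client replays it against its cached copy. This is an illustrative sketch of the general technique, not the encoding format used in our work.

```python
import difflib

def delta_encode(old, new):
    """Compute a line-level delta from old to new: a list of
    ('copy', start, end) and ('insert', lines) operations."""
    ops = []
    sm = difflib.SequenceMatcher(a=old, b=new)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))     # reuse lines the client has
        elif tag != "delete":                # replace/insert: ship new lines
            ops.append(("insert", new[j1:j2]))
    return ops

def delta_decode(old, ops):
    """Reconstruct the new version from the cached old version and a delta."""
    out = []
    for op in ops:
        if op[0] == "copy":
            out.extend(old[op[1]:op[2]])
        else:
            out.extend(op[1])
    return out
```

When consecutive snapshots of a dynamic page share most of their content, the delta is far smaller than the page itself, which is exactly the temporal correlation the scheme exploits.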
Related Publications