MIT researchers develop new network-management system to cut down on data traffic jams


MIT researchers have created a new network-management system, called Fastpass, that they say cuts down on the long wait times that occur during periods of heavy network congestion. The team will present its findings at the ACM Special Interest Group on Data Communication (SIGCOMM) conference in mid-August.

In a data center, every user request generates packets of data that must travel through routers from one end of the network to the other. When many people make requests at once, a router can't forward packets as fast as they arrive, so it sets aside the overflow in a queue, where packets sit and wait their turn.
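To make the pileup concrete, here is a toy queueing model (an illustration of the general problem, not anything from the MIT paper): when packets arrive faster than a router's outgoing link can drain them, the excess accumulates in the queue on every tick.

```python
# Toy queueing model (illustrative only, not from the MIT paper):
# when packets arrive faster than a router's outgoing link can drain
# them, the excess accumulates in the router's queue every tick.

ARRIVALS_PER_TICK = 12  # hypothetical offered load, packets per tick
DRAIN_PER_TICK = 10     # hypothetical link capacity, packets per tick

queue_len = 0
for tick in range(5):
    queue_len += ARRIVALS_PER_TICK               # new packets arrive
    queue_len -= min(queue_len, DRAIN_PER_TICK)  # link sends what it can
    print(f"tick {tick}: {queue_len} packets waiting")
# Prints a queue that grows by 2 packets every tick -- the pileup
# Fastpass is designed to prevent.
```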

Diagram showing reduced latency with Fastpass

At the heart of the appropriately named Fastpass system is a centralized server called an arbiter. According to the MIT researchers, each time an endpoint in the network wants to send data, it first passes a request to the arbiter, which acts as a sort of overseer of all network nodes and requests. Drawing on its global view of the network, plus timeslot-allocation and path-assignment algorithms, the arbiter determines the best route and moment to send the data, heading off a packet pileup before it can form.
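As a rough sketch of that request-and-allocation flow (hypothetical code; the real arbiter uses far more sophisticated timeslot-allocation and path-assignment algorithms than this greedy search), the arbiter can grant each sender the earliest timeslot in which the source, the destination, and a path through the network are all free:

```python
# Minimal sketch of an arbiter granting timeslots and paths (a hypothetical
# simplification of Fastpass, not the paper's actual algorithms).

class Arbiter:
    def __init__(self, paths):
        self.paths = paths  # e.g. ["path-A", "path-B"]
        self.slots = []     # per-timeslot record of srcs, dsts, paths in use

    def allocate(self, src, dst):
        """Grant the earliest timeslot where src, dst, and a path are all free."""
        for t, slot in enumerate(self.slots):
            if src not in slot["srcs"] and dst not in slot["dsts"]:
                for path in self.paths:
                    if path not in slot["paths"]:
                        slot["srcs"].add(src)
                        slot["dsts"].add(dst)
                        slot["paths"].add(path)
                        return t, path
        # No existing timeslot fits; open a new one.
        self.slots.append({"srcs": {src}, "dsts": {dst}, "paths": {self.paths[0]}})
        return len(self.slots) - 1, self.paths[0]

arbiter = Arbiter(paths=["path-A", "path-B"])
# Three endpoints all want to send to the same destination; the arbiter
# spreads them across timeslots so no queue forms on host9's link.
for src in ["host1", "host2", "host3"]:
    slot, path = arbiter.allocate(src, "host9")
    print(f"{src} -> host9: send at timeslot {slot} on {path}")
```

Because the arbiter serializes transfers headed for the same destination into different timeslots, packets never contend for the same link at the same instant, which is why router queues stay nearly empty.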

An excerpt from the MIT research paper describes the technical aspects of the Fastpass system:

Endpoints communicate with the arbiter using the Fastpass Control Protocol (FCP). FCP is a reliable protocol that conveys the demands of a sending endpoint to the arbiter and the allocated timeslot and paths back to the sender. FCP must balance conflicting requirements: it must consume only a small fraction of network bandwidth, achieve low latency, and handle packet drops and arbiter failure without interrupting endpoint communication. FCP provides reliability using timeouts and ACKs of aggregate demands and allocations. Endpoints aggregate allocation demands over a few microseconds into each request packet sent to the arbiter. This aggregation reduces the overhead of requests, and limits queuing at the arbiter.
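The demand aggregation described in that excerpt might look roughly like the following sketch (the names and structure here are illustrative; the paper defines FCP's actual format): rather than sending one request per packet, an endpoint batches the per-destination demands that accumulate over a short window into a single request to the arbiter.

```python
import time

# Hypothetical sketch of FCP-style demand aggregation (names and wire
# format are illustrative; the paper defines the real protocol).
AGGREGATION_WINDOW_US = 10  # "a few microseconds", per the excerpt

def build_request(pending_demands):
    """Batch the per-destination demands that accumulate during one
    aggregation window into a single request packet for the arbiter."""
    deadline = time.monotonic() + AGGREGATION_WINDOW_US / 1e6
    batch = {}
    while pending_demands and time.monotonic() < deadline:
        dst, nbytes = pending_demands.pop(0)
        batch[dst] = batch.get(dst, 0) + nbytes  # sum demand per destination
    return batch  # one request instead of one message per packet

demands = [("host9", 1500), ("host9", 1500), ("host4", 9000)]
print(build_request(demands))  # e.g. {'host9': 3000, 'host4': 9000}
```

Aggregating this way keeps the arbiter from being flooded with tiny control messages, which is how FCP consumes only a small fraction of network bandwidth.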

The MIT team tested Fastpass in a Facebook data center and found that it cut the routers' average queue length by 99.6 percent. Even during periods of heavy network traffic, the average time for a request to be sent and retrieved dropped from 3.56 microseconds to 0.23 microseconds.
