Thursday, December 17, 2009

Internet Congestion Control

The field of Internet congestion control took shape in 1986-1987, when the then ARPANET suffered ‘congestion collapse’. Congestion collapse had been predicted by Nagle in 1984. It occurs when mounting levels of traffic result in high packet loss inside the network, such that few or no packets are actually delivered to their destination, yet each link remains highly loaded.
The initial response to the ARPANET’s congestion collapse problem was to increase the capacity of the network. This helped temporarily, but the ARPANET continued to suffer congestion collapses until a strategy for controlling the load of packets entering the network was developed. In 1988 Van Jacobson enhanced the famous Transmission Control Protocol (TCP) so that its transmission rate became responsive to the level of network congestion: TCP was made to reduce a host’s sending rate when it sensed the network load nearing congestion collapse. Since the introduction of this enhanced TCP, congestion collapse has not recurred.
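
Jacobson’s mechanism can be caricatured in a few lines. The sketch below is a toy model of the additive-increase/multiplicative-decrease (AIMD) rule, not real TCP code; the capacity figure and the starting window are made-up values for illustration.

# Toy AIMD model: grow the congestion window by one segment per round
# trip, halve it when loss signals that the network is overloaded.
# 'capacity' and the starting window are illustrative assumptions.
def aimd(rounds=40, capacity=32):
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:            # loss detected: back off sharply
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase: gently probe
    return history

print(aimd())   # the familiar sawtooth: 1, 2, ..., 33, 16, 17, ...

Halving on loss is what keeps the aggregate load from ratcheting into collapse, while the additive probe reclaims capacity when it becomes free.
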
This history of the Internet reflects the two fundamental approaches to controlling congestion in networks: 1) capacity provisioning and 2) load control. Congestion collapse occurs when the load of packets placed onto the network exceeds the network’s capacity to carry them, so capacity provisioning aims to ensure that there is enough capacity to meet the load, while load control aims to ensure that the load placed onto the network stays within the network’s capacity. Capacity provisioning is achieved either by accurate performance analysis and traffic modeling, or by the brute-force approach of over-provisioning. Load control strategies range from connection admission control schemes through to best-effort flow control as used on the Internet.
1. Congestion control principles.
1.1 What is congestion?
1.2 Congestion collapse.
1.3 Controlling congestion: design considerations.
1.4 Implicit feedback.
1.5 Source behaviour with binary feedback.
1.6 Stability.
1.7 Rate-based versus window-based control.
1.8 RTT estimation.
1.9 Traffic phase effects.
1.10 Queue management.
1.11 Scalability.
1.12 Explicit feedback.
1.13 Special environments.
1.14 Congestion control and OSI layers.
1.15 Multicast congestion control.
1.16 Incentive issues.
1.17 Fairness.
1.18 Conclusion.
2. Present technology.
2.1 Introducing TCP.
2.2 TCP window management.
2.3 TCP RTO calculation.
2.4 TCP congestion control and reliability.
2.5 Concluding remarks about TCP.
2.6 The Stream Control Transmission Protocol (SCTP).
2.7 Random Early Detection (RED).
2.8 The ATM ‘Available Bit Rate’ service.

Internet Congestion

The Internet is a network of networks, and network congestion is the situation in which more data is transmitted than the network devices (routers and switches) can accommodate. This results in a reduction in throughput.

Throughput is the amount of data that passes through the network per unit of time, such as the number of packets per second.

Packets are the fundamental unit of data transmission on the Internet and all other TCP/IP networks, including most LANs.

To absorb bursts of traffic, devices rely on buffers. A buffer is a portion of a device’s memory that is set aside as a temporary holding place for data that is being sent to or received from another device. When a buffer fills up, packets are delayed or lost, causing applications to retransmit the data, thereby adding more traffic and further increasing the congestion.
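
A minimal drop-tail buffer sketch, assuming a fixed queue depth (the size and the burst length below are arbitrary illustrations):

from collections import deque

BUFFER_SIZE = 4                    # assumed buffer depth, in packets
buffer, dropped = deque(), 0

for packet_id in range(10):        # a burst of 10 packets arrives at once
    if len(buffer) < BUFFER_SIZE:
        buffer.append(packet_id)   # held until the outgoing link is free
    else:
        dropped += 1               # buffer full: the packet is lost

print(f"buffered={list(buffer)}, dropped={dropped}")
# buffered=[0, 1, 2, 3], dropped=6 -- those 6 will be retransmitted,
# adding yet more traffic to an already congested link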

Congestion collapse is the situation in which congestion becomes so great that throughput drops to a low level and little useful communication occurs. Collapse can be a stable state at an intrinsic load level that would not, by itself, produce congestion. This is because it is sustained by the aggressive retransmission used by various network protocols to compensate for the packet loss that congestion causes; the retransmission continues even after the load has fallen to a level that would not have induced congestion on its own.
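
The feedback loop is easy to reproduce in a toy model. In the sketch below, every lost packet is retransmitted aggressively (assumed here as two copies, standing in for premature timeouts and duplicates); the capacity and demand figures are invented. Note how goodput keeps falling even after demand drops back below capacity:

CAPACITY = 100.0                 # packets/s the link can deliver (assumed)

def simulate(demand):
    backlog = 0.0                # packets waiting to be retransmitted
    for new in demand:
        offered = new + backlog
        delivered = min(offered, CAPACITY)
        lost = offered - delivered
        backlog = lost * 2.0     # aggressive retransmission: each lost
                                 # packet comes back as two copies
        # goodput: the delivered packets that are fresh, not duplicates
        yield round(delivered * new / offered, 1)

demand = [80.0] * 3 + [150.0] * 3 + [60.0] * 6   # spike, then back off
print(list(simulate(demand)))
# [80.0, 80.0, 80.0, 100.0, 60.0, 33.3, 7.9, 4.3, 2.3, 1.2, 0.6, 0.3]
# goodput collapses toward zero: the retransmission backlog keeps the
# link saturated with copies although 60 packets/s alone would never
# congest it
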

Congestion Control

Congestion control is the process used to reduce congestion in a network. It involves decisions such as when to accept new traffic, when to drop packets, and when to adjust the routing policies used in a network.

Network congestion is somewhat analogous to road congestion. One technique that has been used with some success against road congestion is metering, in which the rate of vehicles entering a road or area is restricted by signals.
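
The network-side analogue of those entry signals is metering traffic at the edge, for instance with a token bucket. The sketch below is one illustrative way to do it; the rate and burst numbers are arbitrary assumptions:

import time

class TokenBucket:
    """Admit packets no faster than tokens are refilled (traffic metering)."""

    def __init__(self, rate, burst):
        self.rate = rate                  # tokens (packets) added per second
        self.burst = burst                # maximum bucket depth
        self.tokens = burst               # bucket starts full
        self.last = time.monotonic()

    def admit(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1              # green light: the packet enters
            return True
        return False                      # red light: the packet must wait

bucket = TokenBucket(rate=2, burst=5)     # 2 packets/s sustained, bursts of 5
print([bucket.admit() for _ in range(8)]) # first 5 pass, the rest are held
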

Various techniques have likewise been developed in an attempt to minimize congestion collapse in communications networks.

1) Load control mechanisms

When the available capacity is less than the demand for it, load control is the critical element that determines how many packets are allowed onto each link of the network, who gets to send them, and when.

At one end of the spectrum are connection admission control (CAC) schemes, such as the Resource Reservation Protocol (RSVP) [4]. Such schemes require the network to maintain information about each connection and to arbitrate whether connections are admitted or rejected, so that admitted connections can be absolutely guaranteed their required bandwidth for the duration of the connection. When the load of requested connections grows beyond the capacity of the network, new users are rejected in order to maintain the bandwidth guarantees made to already admitted users. CAC is good for honoring bandwidth supply contracts that specify minimum rates.
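
The admission decision itself reduces to simple bookkeeping. The sketch below is a hedged illustration of the CAC idea only, not of RSVP’s actual signalling; the link capacity and requests are invented:

LINK_CAPACITY = 2.0                       # Mb/s, assumed for illustration

def admission_control(requests):
    reserved = 0.0
    for user, rate in requests:
        if reserved + rate <= LINK_CAPACITY:
            reserved += rate              # this bandwidth is now guaranteed
            yield user, "admitted"
        else:
            yield user, "rejected"        # protects guarantees already made

print(list(admission_control([("A", 1.0), ("B", 1.0), ("C", 1.0)])))
# [('A', 'admitted'), ('B', 'admitted'), ('C', 'rejected')]

This rigidity is exactly what the best-effort view in the next section relaxes.
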

2) Rethinking Best-Effort Networks

To introduce Kelly’s framework for describing best-effort networks, consider how bandwidth allocation in a best-effort network compares with bandwidth allocation in a CAC network. Suppose three users each request a 1 Mb/s connection across the same 2 Mb/s link. In a CAC network, one user has to miss out. The best-effort network makes this situation less rigid by treating the users’ demand for bandwidth as elastic: when users do not need strict guarantees of minimum bandwidth, blocking one user is not necessarily the best possible solution. Assume that each user can quantify, by a single number, the perceived quality of service (QoS) value of sending at a certain rate. Say transmitting at 1 Mb/s gives the maximum possible QoS value, while transmitting at less than 1 Mb/s still gives some, but less, QoS value. Then one can conceive of a solution where, by making all three users transmit at 2/3 Mb/s each, the sum of the perceived QoS values of the three users is greater than the sum obtained when only two users transmit at the maximum 1 Mb/s and one user is blocked. In such a system, where user demand has some flexibility, a compromise division of the available capacity can be better for the QoS of the whole community of users, despite giving less capacity to some of them. This is exactly the solution that the best-effort network achieves.
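
To make the arithmetic concrete, here is a small sketch using one assumed concave utility curve (a square root capped at 1 Mb/s; Kelly’s framework permits any increasing concave utility, so this particular curve is just an illustration):

import math

def utility(rate):
    # assumed QoS curve: full value at 1 Mb/s, diminishing value below it
    return math.sqrt(min(rate, 1.0))

cac_allocation = [1.0, 1.0, 0.0]          # two admitted, one blocked
best_effort    = [2/3, 2/3, 2/3]          # the 2 Mb/s link shared elastically

print(sum(utility(r) for r in cac_allocation))   # 2.0
print(sum(utility(r) for r in best_effort))      # about 2.449

Any strictly concave utility gives the same verdict, since U(2/3) > (2/3)·U(1) by concavity; this is the formal sense in which elastic sharing serves the whole community better than blocking one user.
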

3) Supply Demand Pricing

Fix the price as pay-per-bit: users are charged in proportion to the number of bits they send.

4) Differentiated Bandwidth

Bandwidth is differentiated according to a demand-and-price rule: the higher the demand, the higher the price; the lower the demand, the lower the price.
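
Read together, the two pricing rules might be sketched as follows; the flat tariff and the linear mark-up below are purely illustrative assumptions, not anything prescribed above:

BASE_PRICE_PER_BIT = 1e-8                 # assumed flat pay-per-bit tariff

def flat_cost(bits):
    # supply-demand pricing (3): cost simply scales with volume sent
    return bits * BASE_PRICE_PER_BIT

def differentiated_price(utilisation):
    # differentiated bandwidth (4): the busier the link, the dearer the bit
    return BASE_PRICE_PER_BIT * (1 + 4 * utilisation)

print(flat_cost(1_000_000))                        # same price at any load
print(differentiated_price(0.1), differentiated_price(0.9))
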
