This part explores some fundamental issues concerning the future of networks.
The main topics covered are:
- Virtual networks
- Audio/video traffic over IP networks
- Quality of service
- Characteristics of router architectures
- Memory management and queues / buffers in network equipment
Third-generation routers, like the previous generations, are built from line cards, but the cards are interconnected by a switching fabric capable of handling multiple transfers at once. The problems of this architecture are:
- Memory access: this operation is expensive in time, because packets arrive at a rate higher than the rate at which memory can store them. The only way to avoid losing packets is to use memories (SRAM) with very high parallelism, so that the bits of the same packet can be stored in parallel.
- Processing: it is difficult to handle exceptions or add features; the architecture is not very flexible.
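The memory-parallelism requirement above can be made concrete with a short sketch. The line rate and access time below are illustrative figures of my own, not taken from the text:

```python
import math

# If packets arrive at line rate R (bits/s) but the memory can only be
# accessed once every T seconds, each access must store R * T bits in
# parallel, or packets are lost. This computes that minimum width.

def required_width_bits(line_rate_bps: float, access_time_s: float) -> int:
    """Minimum bits that must be written per memory access to keep up."""
    return math.ceil(line_rate_bps * access_time_s)

# Example: a 10 Gb/s line with an 8 ns SRAM access time
print(required_width_bits(10e9, 8e-9))  # 80 bits stored in parallel
```

Doubling the line rate (or the access time) doubles the required width, which is why very wide, highly parallel SRAM organizations are needed.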
The structure of a switching fabric is a matrix. At each intersection point there is a switch that connects a horizontal wire to a vertical one. In this way N line cards can be joined, as long as they do not try to talk to the same line card at the same time. This matrix improves performance. To manage collisions an arbiter is needed, which drives the switching points of the switching fabric.
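The arbiter's role can be sketched as follows. The fixed-priority policy and the function names are my own illustration, assuming each input requests exactly one output per time slot:

```python
# Minimal sketch of a crossbar arbiter: each input requests one output;
# the arbiter grants at most one input per output per time slot, so
# conflicting requests are serialized across slots.

def arbitrate(requests):
    """requests: dict mapping input -> requested output.
    Returns the input -> output grants for this slot."""
    grants = {}
    taken = set()
    for inp in sorted(requests):   # fixed priority, lowest input first
        out = requests[inp]
        if out not in taken:       # output still free in this slot
            grants[inp] = out
            taken.add(out)
    return grants

# Inputs 0 and 2 both want output 1: only one is granted this slot.
print(arbitrate({0: 1, 1: 3, 2: 1}))  # {0: 1, 1: 3}
```

A real arbiter would use a fairer policy (e.g. rotating priority) so that no input is starved, but the conflict-resolution idea is the same.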
Queuing in the switching fabric
There are several solutions for managing queues. The first, chronologically, was "output queuing", in which packets are queued as they exit toward the card. This implies that the switching fabric must be much faster than the input interfaces: with N inputs, the fabric must run N times faster than the reception speed, and the output buffers must sustain this speed as well.
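The N-times speedup requirement can be computed directly. The port count and line rate below are illustrative:

```python
# With output queuing, all N inputs may target the same output in one
# slot, so the fabric (and the output buffers) must absorb N packets
# in the time one arrives: an N-fold speedup over the line rate.

def fabric_speed_bps(n_inputs: int, line_rate_bps: float) -> float:
    return n_inputs * line_rate_bps

# 16 ports at 10 Gb/s each: the fabric must run at 160 Gb/s.
print(fabric_speed_bps(16, 10e9) / 1e9, "Gb/s")
```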
A second solution is "input queuing", which places a buffer at each input and an arbiter that "holds back" incoming data before the switching fabric. Its drawback is head-of-line blocking: a packet stuck at the head of an input queue blocks the packets behind it, even when their destination outputs are free.
A third system, the best one, is "virtual output queuing" (VOQ), which solves the head-of-line blocking problem: at every input, separate queues are kept per destination output. This considerably increases the hardware size and complicates the arbiter's control logic.
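A compact sketch of the per-output queues at one input follows. The class and method names are hypothetical, chosen only to illustrate why a blocked destination no longer holds back other traffic:

```python
from collections import deque

# Virtual output queuing (VOQ): each input keeps one queue per output,
# so a packet waiting for a busy output cannot block packets destined
# to other, free outputs.

class VOQInput:
    def __init__(self, n_outputs: int):
        self.queues = [deque() for _ in range(n_outputs)]

    def enqueue(self, pkt, output: int):
        self.queues[output].append(pkt)

    def dequeue_for(self, output: int):
        """Called when the arbiter grants this input access to `output`."""
        return self.queues[output].popleft() if self.queues[output] else None

inp = VOQInput(n_outputs=4)
inp.enqueue("A->out0", 0)
inp.enqueue("B->out2", 2)
# Even while output 0 is busy, a grant for output 2 serves B at once:
print(inp.dequeue_for(2))  # B->out2
```

The cost is visible in the structure itself: N inputs times N outputs means N² queues, and the arbiter must now match inputs to outputs across all of them.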
A final system places the buffers inside the switching fabric itself, one at each node. This system has a high cost, since it greatly complicates QoS management.
Evolution of Traffic
Traffic can be observed to double every year. According to Moore's law, processing speed doubles every 18 months. Memory access speed is the most critical factor: it grows by only about 10% every 18 months. These figures cannot be taken as certain, since they rely on forecasts, but they do show that the improvement of memory is far slower than the growth in traffic. This will lead to a memory problem in the future.
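Compounding these rates shows how quickly the gap widens. The 6-year horizon is my own illustrative choice:

```python
# Compound growth over 72 months for the three rates mentioned above:
# traffic doubles every 12 months, processing every 18 months, and
# memory access speed improves only ~10% every 18 months.

def growth(factor_per_period: float, period_months: int,
           horizon_months: int) -> float:
    return factor_per_period ** (horizon_months / period_months)

months = 72
traffic = growth(2.0, 12, months)     # 2^6  = 64x
processing = growth(2.0, 18, months)  # 2^4  = 16x
memory = growth(1.1, 18, months)      # 1.1^4 ≈ 1.46x
print(round(traffic), round(processing), round(memory, 2))
```

After six years, traffic has grown roughly 64-fold while memory access speed has not even doubled, which is the memory problem the text predicts.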
The main problems are:
- Processors: router CPUs must be improved, to allow greater implementation possibilities.
- Memory, due to high access times. Caches are not usable, because cache misses are not tolerable. Recently there have been attempts to introduce non-deterministic caches, which guarantee short times for most packets and longer times for a small fraction of them. A further problem is the rise of line speeds, which causes an increase in the required buffering capacity.
- New distributed architectures: the possibility of splitting processing into more elementary, distributed blocks.
- Power consumption and heat dissipation: since the heat can no longer be dissipated, devices must be made "greener".
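On the buffering point above: a common rule of thumb (not stated in the text) sizes a router buffer as line rate times round-trip time, which shows directly how rising line speeds inflate buffer requirements. The rates and RTT below are illustrative:

```python
# Buffer sizing rule of thumb: B = line_rate * RTT (in bits).
# Doubling the line rate at the same RTT doubles the buffer needed.

def buffer_bits(line_rate_bps: float, rtt_s: float) -> float:
    return line_rate_bps * rtt_s

rtt = 0.1  # 100 ms round-trip time
print(round(buffer_bits(10e9, rtt) / 8e6, 1), "MB at 10 Gb/s")
print(round(buffer_bits(20e9, rtt) / 8e6, 1), "MB at 20 Gb/s")
```

Buffers of this size at multi-gigabit rates are exactly where the slow improvement of memory becomes the bottleneck described earlier.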