Types of Switching
Switching can occur at four levels, which are listed here in order of increasing performance:
• Process switching. With this type of switching, an incoming packet is associated with a
destination network or subnet entry in the routing table located in main memory. Process
switching is a scheduled process that is performed by the system processor.
• Fast switching. With this type of switching, an incoming packet matches an entry in the
fast-switching cache located in main memory. Fast switching is done via asynchronous interrupts,
which are handled in real time. Fast switching allows higher throughput by switching a packet
using a cache created by previous packets.
• Autonomous switching. With this type of switching, an incoming packet matches an entry in the
autonomous-switching cache located on the interface processor. Autonomous switching provides
faster packet switching by allowing the ciscoBus controller to switch packets independently
without having to interrupt the system processor. It is available only on Cisco 7000 series routers
and in AGS+ systems with high-speed network controller cards.
• SSE switching. With this type of switching, an incoming packet matches an entry in the
silicon-switching cache located in the silicon switching engine (SSE) of the Silicon Switch
Processor (SSP) module. This module is available only on Cisco 7000 series routers. Silicon
switching provides very fast, dedicated packet switching by allowing the SSE to switch packets
independently without having to interrupt the system processor.
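All of these switching paths are configured and verified per interface from the IOS command line. The short sketch below only shows how to check which paths an interface is actually using; the interface name Ethernet0 is an arbitrary example, and the availability and output of these commands vary by platform and IOS release.

   Router# show ip interface Ethernet0
   Router# show interfaces stats

The first command reports which IP switching methods are enabled on the interface; the second (where supported) summarizes how many packets each interface has switched via the processor, the route cache, and the distributed path.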
Process Switching
Process switching is the slowest and most processor-intensive of the switching methods. When a packet arrives on an interface to be forwarded, it is copied to the router's process buffer, and the router performs a lookup on the Layer 3 address. Using the route table, an exit interface is associated with the destination address. The processor encapsulates the packet with the new header information and forwards it out the exit interface. Subsequent packets bound for the same destination address are handled in exactly the same way, each requiring its own lookup by the processor.
The repeated lookups performed by the router's processor and the processor's relatively slow performance eventually create a bottleneck and greatly reduce the capacity of the router. This becomes even more significant as the bandwidth and number of interfaces increase and as the routing protocols demand more processor resources.
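In classic IOS, process switching is what remains when the faster paths are turned off, so forcing it on an interface is simply a matter of disabling the route cache there. A minimal sketch, assuming an arbitrary interface name and a classic IOS image:

   Router(config)# interface Serial0
   Router(config-if)# ! disable fast switching so every packet is process switched
   Router(config-if)# no ip route-cache
   Router(config-if)# end

This is normally done only temporarily, for example while running debug ip packet, and reversed as soon as the diagnostic work is finished.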
Fast Switching
Fast switching is an improvement over process switching. The first packet of a new session is copied to the interface processor buffer. The packet is then copied to the CxBus (or other backplane technology as appropriate to the platform) and sent to the switch processor. A check is made against other switching caches (for example, silicon or autonomous) for an existing entry.
Fast switching is then used because no entries exist within the more efficient caches. The packet header is copied and sent to the route processor, where the fast-switching cache resides. Assuming that an entry exists in the cache, the packet is encapsulated for fast switching and sent back to the switch processor. Then the packet is copied to the buffer on the outgoing interface processor, and ultimately it is sent out the destination interface.
Fast switching is on by default for lower-end routers such as the 2500 and 4000 series and may be used on higher-end routers as well. It is important to note that diagnostic work sometimes requires reverting to process switching: fast-switched packets do not traverse the route processor, which is where packets are captured for display during debugging. Fast switching may also be inappropriate when bringing traffic from high-speed interfaces to slower ones; this is one area where designers must understand not only the bandwidth potential of their links, but also the actual flow of traffic.
Fast switching guarantees that packets will be processed within 16 processor cycles. Unlike process switching, fast switching does not require the system processor to schedule a process in order to forward a packet.
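Fast switching is controlled with the ip route-cache interface command, and the cache it builds can be inspected directly. A brief sketch, again with an arbitrary interface name:

   Router(config)# interface Ethernet0
   Router(config-if)# ! re-enable fast switching (the default on most interfaces)
   Router(config-if)# ip route-cache
   Router(config-if)# end
   Router# show ip cache

The show ip cache command lists the destination prefixes for which a fast-switching entry has already been built by previous packets.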
Autonomous Switching
Autonomous switching is comparable to fast switching. When a packet arrives on the interface processor, it is checked against the switching caches closest to it, that is, the caches that reside on the processor boards other than the route processor. The packet is encapsulated for autonomous switching and sent back to the interface processor; the packet header is never sent to the route processor. Autonomous switching is available only on AGS+ and Cisco 7000 series routers that have high-speed controller interface cards.
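Where the hardware supports it, autonomous switching has classically been enabled per interface with the cbus keyword. Treat the following as an illustrative sketch for ciscoBus-based platforms only:

   Router(config)# interface Ethernet0
   Router(config-if)# ! build entries in the autonomous-switching cache on the ciscoBus controller
   Router(config-if)# ip route-cache cbus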
Silicon Switching
Silicon switching is available only on the Cisco 7000 with an SSP (Silicon Switch Processor). Silicon-switched packets are compared to the silicon-switching cache on the SSE (Silicon Switching Engine). The SSP is a dedicated switch processor that offloads the switching process from the route processor, providing a very fast switching path. Designers should note that packets must still traverse the backplane of the router to get to the SSP, and then return to the exit interface. NetFlow switching (defined below) and multilayer switching are more efficient than silicon switching.
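On an SSP-equipped Cisco 7000, silicon switching has classically been enabled per interface with the sse keyword. As before, this is only a sketch, and the slot/port interface name is an arbitrary example:

   Router(config)# interface Ethernet1/0
   Router(config-if)# ! populate the silicon-switching cache on the SSE
   Router(config-if)# ip route-cache sse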
Optimum Switching
Optimum switching follows the same procedure as the other switching algorithms. When a new packet enters the interface, it is compared to the optimum-switching cache, rewritten, and sent to the chosen exit interface. Other packets associated with the same session then follow the same path. All processing is carried out on the interface processor, including the CRC (cyclical redundancy check). Optimum switching is faster than both fast switching and NetFlow switching, unless you have implemented several access lists.
Optimum switching replaces fast switching on high-end routers. As with fast switching, optimum switching must be turned off in order to view packets while troubleshooting a network problem. Optimum switching is the default on 7200 and 7500 routers.
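Because optimum switching is already the default on the 7200 and 7500, the commands most often needed in practice are the ones that turn it off for troubleshooting and restore it afterward. A hedged sketch, with an arbitrary interface name:

   Router(config)# interface FastEthernet0/0
   Router(config-if)# ! fall back so that debug output can see the packets
   Router(config-if)# no ip route-cache optimum
   Router(config-if)# ! restore the default once troubleshooting is complete
   Router(config-if)# ip route-cache optimum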
Distributed Switching
Distributed switching occurs on the VIP (Versatile Interface Processor) cards, which have a switching processor onboard, so it's very efficient. All required processing is done right on the VIP processor, which maintains a copy of the router's routing cache. With this arrangement, even the first packet needn't be sent to the route processor to initialize the switching path, as it must with the other switching algorithms. Router efficiency increases as more VIP cards are added.
It is important to note that access lists cannot be accommodated with distributed switching.
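On a VIP-equipped router, distributed switching is classically enabled per interface with the distributed keyword. The interface name below follows the slot/port-adapter/port convention of VIP cards and is only an example:

   Router(config)# interface FastEthernet1/0/0
   Router(config-if)# ! let the VIP's onboard processor switch packets using its copy of the cache
   Router(config-if)# ip route-cache distributed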
NetFlow Switching
NetFlow switching is both an administrative tool and a performance-enhancement tool that provides support for access lists while increasing the volume of packets that can be forwarded per second. It collects detailed data for use with circuit accounting and application-utilization information. Because of all the additional data that NetFlow collects (and may export), expect an increase in router overhead—possibly as much as a five-percent increase in CPU utilization.
NetFlow switching can be configured on most interface types and can be used in a switched environment. ATM, LAN, and VLAN (virtual LAN) technologies all support NetFlow switching.
NetFlow switching does much more than just switching—it also gathers statistical data, including protocol, port, and user information. All of this is stored in the NetFlow switching cache, according to the individual flow that's defined by the packet information (destination address, source address, protocol, source and destination port, and incoming interface).
The data can be sent to a network management station to be stored and processed. The NetFlow switching process is very efficient: An incoming packet is processed by the fast- or optimum-switching process, and then all path and packet information is copied to the NetFlow cache. The remaining packets that belong to the flow are compared to the NetFlow cache and forwarded accordingly.
The first packet that's copied to the NetFlow cache contains all security and routing information, and if an access list is applied to an interface, the first packet is matched against it. If it matches the access-list criteria, the cache is flagged so that the remaining packets in the flow can be switched without being compared to the list. (This is very effective when a large amount of access-list processing is required.)
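NetFlow switching is classically enabled per interface, and the collected flow records can optionally be exported to a management station. In the sketch below, the interface name, collector address, and UDP port are arbitrary examples; verify the exact export syntax against the IOS release in use.

   Router(config)# interface Serial0
   Router(config-if)# ! enable NetFlow switching and flow accounting on this interface
   Router(config-if)# ip route-cache flow
   Router(config-if)# exit
   Router(config)# ! export flow records to an example collector at 192.0.2.10, UDP port 9996
   Router(config)# ip flow-export destination 192.0.2.10 9996
   Router(config)# end
   Router# show ip cache flow

The show ip cache flow command displays the active flows along with the per-protocol and per-port statistics described above.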