Tuesday, October 13, 2009

STP

5.0.1 Introduction

It is clear that computer networks are critical components of most small- and medium-sized businesses. Consequently, IT administrators have to implement redundancy in their hierarchical networks. However, adding extra links to switches and routers in the network introduces potential traffic loops that need to be managed dynamically; when a switch connection is lost, another link needs to quickly take its place without introducing new traffic loops. In this chapter you will learn how the Spanning Tree Protocol (STP) prevents loop issues in the network and how STP has evolved into a protocol that rapidly calculates which ports should be blocked so that a VLAN-based network is kept free of traffic loops.


5.1 Redundant Layer 2 Topologies

5.1.1 Redundancy
Redundancy in a hierarchical network


STP is enabled on all switches. STP is the topic of this chapter and will be explained at length. For now, notice that STP has placed some switch ports in forwarding state and other switch ports in blocking state. This is to prevent loops in the Layer 2 network. STP will only use a redundant link if there is a failure on the primary link.

Redundancy provides a lot of flexibility in path choices on a network, allowing data to be transmitted even if a single path or device fails in the distribution or core layers. However, redundancy introduces some complications that need to be addressed before it can be safely deployed on a hierarchical network.


5.1.1 - Redundancy
5.1.2 Issues with Redundancy

Layer 2 Loops

Redundancy is an important part of the hierarchical design. Although it is important for availability, there are some considerations that need to be addressed before redundancy is even possible on a network.

When multiple paths exist between two devices on the network and STP has been disabled on those switches, a Layer 2 loop can occur. If STP is enabled on these switches, which is the default, a Layer 2 loop would not occur.

Ethernet frames do not have a time to live (TTL) like IP packets traversing routers. As a result, if they are not terminated properly on a switched network, they continue to bounce from switch to switch endlessly or until a link is disrupted and breaks the loop.

Broadcast frames are forwarded out all switch ports, except the originating port. This ensures that all devices in the broadcast domain are able to receive the frame. If there is more than one path for the frame to be forwarded out, it can result in an endless loop.


Broadcast Storms

A broadcast storm occurs when there are so many broadcast frames caught in a Layer 2 loop that all available bandwidth is consumed. Consequently, no bandwidth is available for legitimate traffic, and the network becomes unavailable for data communication.

A broadcast storm is inevitable on a looped network. As more devices send broadcasts out on the network, more and more traffic gets caught in the loop, eventually creating a broadcast storm that causes the network to fail.

There are other consequences for broadcast storms. Because broadcast traffic is forwarded out every port on a switch, all connected devices have to process all broadcast traffic that is being flooded endlessly around the looped network. This can cause the end device to malfunction because of the high processing requirements for sustaining such a high traffic load on the network interface card.

Because devices connected to a network are constantly sending out broadcast frames, such as ARP requests, a broadcast storm can develop in seconds. As a result, when a loop is created, the network quickly becomes disabled.

Duplicate Unicast Frames

Broadcast frames are not the only type of frames that are affected by loops. Unicast frames sent onto a looped network can result in duplicate frames arriving at the destination device.

Most upper layer protocols are not designed to recognize or cope with duplicate transmissions. In general, protocols that make use of a sequence-numbering mechanism assume that the transmission has failed and that the sequence number has recycled for another communication session. Other protocols attempt to hand the duplicate transmission to the appropriate upper layer protocol to be processed and possibly discarded.

Fortunately, switches are capable of detecting loops on a network. The Spanning Tree Protocol (STP) eliminates these loop issues. You will learn about STP in the next section.


5.1.3 Real-world Redundancy Issues

Loops in the Wiring Closet

Redundancy is an important component of a highly available hierarchical network topology, but loops can arise as a result of the multiple paths configured on the network. You can prevent loops using the Spanning Tree Protocol (STP). However, if STP has not been implemented in preparation for a redundant topology, loops can occur unexpectedly.

Network wiring for small to medium-sized businesses can get very confusing. Network cables between access layer switches, located in the wiring closets, disappear into the walls, floors, and ceilings where they are run back to the distribution layer switches on the network. If the network cables are not properly labeled when they are terminated in the patch panel in the wiring closet, it is difficult to determine where the destination is for the patch panel port on the network. Network loops that are a result of accidental duplicate connections in the wiring closets are a common occurrence.

Loops in the Cubicles

Because of insufficient network data connections, some end users have a personal hub or switch located in their working environment. Rather than incur the costs of running additional network data connections to the workspace, a simple hub or switch is connected to an existing network data connection allowing all devices connected to the personal hub or switch to gain access to the network.

Wiring closets are typically secured to prevent unauthorized access, so often the network administrator is the only one who has full control over how and what devices are connected to the network. Unlike the wiring closet, the administrator is not in control of how personal hubs and switches are being used or connected, so the end user can accidentally interconnect the switches or hubs.

STP Topology

Redundancy increases the availability of the network topology by protecting the network from a single point of failure, such as a failed network cable or switch. When redundancy is introduced into a Layer 2 design, loops and duplicate frames can occur. Loops and duplicate frames can have severe consequences on a network. The Spanning Tree Protocol (STP) was developed to address these issues.

STP ensures that there is only one logical path between all destinations on the network by intentionally blocking redundant paths that could cause a loop. A port is considered blocked when network traffic is prevented from entering or leaving that port. This does not include bridge protocol data unit (BPDU) frames that are used by STP to prevent loops. You will learn more about STP BPDU frames later in the chapter. Blocking the redundant paths is critical to preventing loops on the network. The physical paths still exist to provide redundancy, but these paths are disabled to prevent the loops from occurring. If the path is ever needed to compensate for a network cable or switch failure, STP recalculates the paths and unblocks the necessary ports to allow the redundant path to become active.

STP prevents loops from occurring by configuring a loop-free path through the network using strategically placed blocking state ports. The switches running STP are able to compensate for failures by dynamically unblocking the previously blocked ports and permitting traffic to traverse the alternate paths. The next topic describes how STP accomplishes this process automatically.


STP Algorithm

STP uses the Spanning Tree Algorithm (STA) to determine which switch ports on a network need to be configured for blocking to prevent loops from occurring. The STA designates a single switch as the root bridge and uses it as the reference point for all path calculations. In the figure, the root bridge, switch S1, is chosen through an election process. All switches participating in STP exchange BPDU frames to determine which switch has the lowest bridge ID (BID) on the network. The switch with the lowest BID automatically becomes the root bridge for the STA calculations. The root bridge election process will be discussed in detail later in this chapter.

The BPDU is the message frame exchanged by switches for STP. Each BPDU contains a BID that identifies the switch that sent the BPDU. The BID contains a priority value, the MAC address of the sending switch, and an optional extended system ID. The lowest BID value is determined by the combination of these three fields. You will learn more about the root bridge, BPDU, and BID in later topics.

After the root bridge has been determined, the STA calculates the shortest path to the root bridge. Each switch uses the STA to determine which ports to block. While the STA determines the best paths to the root bridge for all destinations in the broadcast domain, all traffic is prevented from forwarding through the network. The STA considers both path and port costs when determining which path to leave unblocked. The path costs are calculated using port cost values associated with port speeds for each switch port along a given path. The sum of the port cost values determines the overall path cost to the root bridge. If there is more than one path to choose from, STA chooses the path with the lowest path cost. You will learn more about path and port costs in later topics.

When the STA has determined which paths are to be left available, it configures the switch ports into distinct port roles. The port roles describe their relation in the network to the root bridge and whether they are allowed to forward traffic.

Root ports - Switch ports closest to the root bridge. In the example, the root port on switch S2 is F0/1 configured for the trunk link between switch S2 and switch S1. The root port on switch S3 is F0/1, configured for the trunk link between switch S3 and switch S1.

Designated ports - All non-root ports that are still permitted to forward traffic on the network. In the example, switch ports F0/1 and F0/2 on switch S1 are designated ports. Switch S2 also has its port F0/2 configured as a designated port.

Non-designated ports - All ports configured to be in a blocking state to prevent loops. In the example, the STA configured port F0/2 on switch S3 in the non-designated role. Port F0/2 on switch S3 is in the blocking state.

The Root Bridge

Every spanning-tree instance (switched LAN or broadcast domain) has a switch designated as the root bridge. The root bridge serves as a reference point for all spanning-tree calculations to determine which redundant paths to block.

An election process determines which switch becomes the root bridge.

Best Paths to the Root Bridge

When the root bridge has been designated for the spanning-tree instance, the STA starts the process of determining the best paths to the root bridge from all destinations in the broadcast domain. The path information is determined by summing up the individual port costs along the path from the destination to the root bridge.

The default port costs are defined by the speed at which the port operates. In the table, you can see that 10-Gb/s Ethernet ports have a port cost of 2, 1-Gb/s Ethernet ports have a port cost of 4, 100-Mb/s Fast Ethernet ports have a port cost of 19, and 10-Mb/s Ethernet ports have a port cost of 100.

Note: IEEE defines the port cost values used by STP. As newer, faster Ethernet technologies enter the marketplace, the path cost values may change to accommodate the different speeds available. The non-linear numbers accommodate some improvements to the Ethernet standard but be aware that the numbers can be changed by IEEE if needed. In the table, the values have already been changed to accommodate the newer 10-Gb/s Ethernet standard.

Although switch ports have a default port cost associated with them, the port cost is configurable. The ability to configure individual port costs gives the administrator the flexibility to control the spanning-tree paths to the root bridge.
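
Although the defaults are usually left in place, the port cost can be adjusted in interface configuration mode. As a minimal sketch (the switch name, interface, and cost value here are only illustrative), a Cisco Catalyst switch typically accepts either a per-port or a per-VLAN cost:

S2(config)# interface f0/1
S2(config-if)# spanning-tree cost 25
! Applies to all spanning-tree instances on this port
S2(config-if)# spanning-tree vlan 10 cost 25
! Applies only to the instance for VLAN 10
S2(config-if)# end
S2# show spanning-tree
! Verify the cost now advertised for the port

Lowering the cost on a port makes the path through that port more attractive to the STA; raising it makes the path less attractive.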

Path cost is the sum of all the port costs along the path to the root bridge. The paths with the lowest path cost become the preferred path, and all other redundant paths are blocked. In the example, the path cost from switch S2 to the root bridge switch S1, over path 1 is 19 (based on the IEEE-specified individual port cost), while the path cost over path 2 is 38. Because path 1 has a lower overall path cost to the root bridge, it is the preferred path. STP then configures the redundant path to be blocked, preventing a loop from occurring.



The BPDU Fields

In the previous topic, you learned that STP determines a root bridge for the spanning-tree instance by exchanging BPDUs. In this topic, you will learn the details of the BPDU frame and how it facilitates the spanning-tree process.

The BPDU frame contains 12 distinct fields that are used to convey path and priority information that STP uses to determine the root bridge and paths to the root bridge.

The BPDU fields shown in the figure are grouped as follows:

* The first four fields identify the protocol, version, message type, and status flags.
* The next four fields are used to identify the root bridge and the cost of the path to the root bridge.
* The last four fields are all timer fields that determine how frequently BPDU messages are sent, and how long the information received through the BPDU process (next topic) is retained. The role of the timer fields will be covered in more detail later in this course.


The example in the figure was captured using Wireshark. In the example, the BPDU frame contains more fields than previously described. The BPDU message is encapsulated in an Ethernet frame when it is transmitted across the network. The 802.3 header indicates the source and destination addresses of the BPDU frame. This frame has a destination MAC address of 01:80:C2:00:00:00, which is a multicast address for the spanning-tree group. When a frame is addressed with this MAC address, each switch that is configured for spanning tree accepts and reads the information from the frame; all other devices on the network disregard it.

5.2.2 - STP BPDU
The diagram depicts the BPDU fields and an example of a BPDU frame captured by Wireshark.

BPDU fields:
Field numbers: 1 to 4
Protocol ID (2 bytes): Indicates the type of protocol being used. This field contains the value zero.
Version (1 byte): Indicates the version of the protocol. This field contains the value zero.
Message type (1 byte): Indicates the type of message. This field contains the value zero.
Flags (1 byte): Includes one of the following:
- Topology change (TC) bit, which signals a topology change in the event that a path to the root bridge has been disrupted.
- Topology change acknowledgment (TCA) bit, which is set to acknowledge receipt of a configuration message with the TC bit set.

Field numbers: 5 to 8
Root ID (8 bytes): Indicates the root bridge by listing its 2-byte priority followed by its 6-byte MAC address ID. When a switch first boots, the root ID is the same as the bridge ID. However, as the election process occurs, the lowest bridge ID replaces the local root ID to identify the root bridge switch.
Cost of path (4 bytes): Indicates the cost of the path from the bridge sending the configuration message to the root bridge. The path cost field is updated by each switch along the path to the root bridge.
Bridge ID (8 bytes): Indicates the priority and MAC address ID of the bridge sending the message. This label allows the root bridge to identify where the BPDU originated, as well as identify the multiple paths from the switch to the root bridge. When the root bridge receives more than one BPDU from a switch with different path costs, it knows that there are two distinct paths and uses the path with the lower cost.
Port ID (2 bytes): Indicates the port number from which the configuration message was sent. This field allows loops created by multiple attached bridges to be detected and corrected.

Field numbers: 9 to 12
Message age (2 bytes): Indicates the amount of time that has elapsed since the root sent the configuration message on which the current configuration message is based.
Max age (2 bytes): Indicates when the current configuration message should be deleted. When the message age reaches the maximum age, the switch expires the current configuration and initiates a new election to determine a new root bridge because it assumes that it has been disconnected from the root bridge. By default, this is 20 seconds, but it can be set between 6 and 40 seconds.
Hello time (2 bytes): Indicates the time between root bridge configuration messages. The interval defines how long the root bridge waits between sending configuration BPDUs. By default, this is 2 seconds, but it can be set between 1 and 10 seconds.
Forward delay (2 bytes): Indicates the length of time that bridges wait before transitioning to a new state after a topology change. If a bridge transitions too soon, all network links might not be ready to change their state and loops can result. By default, this is 15 seconds for each state, but it can be set between 4 and 30 seconds.

5.2.3 Bridge ID


BID Fields

The bridge ID (BID) is used to determine the root bridge on a network. This topic describes what makes up a BID and how to configure the BID on a switch to influence the election process to ensure that specific switches are assigned the role of root bridge on the network.

The BID field of a BPDU frame contains three separate fields: bridge priority, extended system ID, and MAC address. Each field is used during the root bridge election.

Bridge Priority

The bridge priority is a customizable value that you can use to influence which switch becomes the root bridge. The switch with the lowest priority, which means the lowest BID, becomes the root bridge (the lower the priority value, the higher the priority). For example, to ensure that a specific switch is always the root bridge, you set its priority to a lower value than the rest of the switches on the network. The default priority value for all Cisco switches is 32768. The priority field ranges from 0 to 65535; therefore, 0 is the highest priority.

Extended System ID

As shown in the example, the extended system ID can be omitted in BPDU frames in certain configurations. The early implementation of STP was designed for networks that did not use VLANs. There was a single common spanning tree across all switches. When VLANs started to become common for network infrastructure segmentation, STP was enhanced to include support for VLANs. As a result, the extended system ID field contains the ID of the VLAN with which the BPDU is associated.

When the extended system ID is used, it changes the number of bits available for the bridge priority value, so the increment for the bridge priority value changes from 1 to 4096. Therefore, bridge priority values can only be multiples of 4096.

The extended system ID value is added to the bridge priority value in the BID to identify the priority and VLAN of the BPDU frame.
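
For example, assuming the default bridge priority and a BPDU associated with VLAN 10, the priority portion of the BID would be:

bridge priority 32768 + extended system ID (VLAN 10) = 32778

This is why the show spanning-tree output on a switch left at its defaults typically displays a priority such as 32778 for VLAN 10.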

You will learn about per VLAN spanning tree (PVST) in a later section of this chapter.

MAC Address

When two switches are configured with the same priority and have the same extended system ID, the switch with the MAC address with the lowest hexadecimal value has the lower BID. Initially, all switches are configured with the same default priority value. The MAC address is then the deciding factor on which switch is going to become the root bridge. This results in an unpredictable choice for the root bridge. It is recommended to configure the desired root bridge switch with a lower priority to ensure that it is elected root bridge. This also ensures that the addition of new switches to the network does not trigger a new spanning-tree election, which could disrupt network communication while a new root bridge is being selected.
Configure and Verify the BID

When a specific switch is to become a root bridge, the bridge priority value needs to be adjusted to ensure it is lower than the bridge priority values of all the other switches on the network. There are two different configuration methods that you can use to configure the bridge priority value on a Cisco Catalyst switch.

Method 1 - To ensure that the switch has the lowest bridge priority value, use the spanning-tree vlan vlan-id root primary command in global configuration mode. The priority for the switch is set to the predefined value of 24576 or to the next 4096 decrement value below the lowest bridge priority detected on the network.

If an alternate root bridge is desired, use the spanning-tree vlan vlan-id root secondary global configuration mode command. This command sets the priority for the switch to the predefined value of 28672, which ensures that this switch becomes the root bridge if the primary root bridge fails and a new root bridge election occurs, assuming that the rest of the switches in the network have the default 32768 priority value defined.


Method 2 - Another method for configuring the bridge priority value is using the spanning-tree vlan vlan-id priority value global configuration mode command. This command gives you more granular control over the bridge priority value. The priority value is configured in increments of 4096 between 0 and 61440.

In the example, switch S3 has been assigned a bridge priority value of 24576 using the spanning-tree vlan 1 priority 24576 global configuration mode command.
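
The following sketch shows both methods together with a simple verification; the switch names follow the figures in this chapter, and VLAN 1 is used only as an example:

! Method 1: let the switch compute a sufficiently low priority
S3(config)# spanning-tree vlan 1 root primary

! Method 2: set an explicit priority value (must be a multiple of 4096)
S3(config)# spanning-tree vlan 1 priority 24576

! On the switch chosen as the backup root bridge
S2(config)# spanning-tree vlan 1 root secondary

! Verification
S3# show spanning-tree vlan 1
! If S3 won the election, the local bridge ID matches the root ID in the output

Only one of the two methods is needed on the intended root bridge; they are shown together here for comparison.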

Port Roles

The root bridge is elected for the spanning-tree instance. The location of the root bridge in the network topology determines how port roles are calculated. This topic describes how the switch ports are configured for specific roles to prevent the possibility of loops on the network.

There are four distinct port roles that switch ports are automatically configured for during the spanning-tree process.

Root Port

The root port exists on non-root bridges and is the switch port with the best path to the root bridge. Root ports forward traffic toward the root bridge. The source MAC addresses of frames received on the root port are used to populate the MAC table. Only one root port is allowed per bridge.

In the example, switch S1 is the root bridge and switches S2 and S3 have root ports defined on the trunk links connecting back to S1.

Designated Port

The designated port exists on root and non-root bridges. For root bridges, all switch ports are designated ports. For non-root bridges, a designated port is the switch port that receives and forwards frames toward the root bridge as needed. Only one designated port is allowed per segment. If multiple switches exist on the same segment, an election process determines the designated switch, and the corresponding switch port begins forwarding frames for the segment. Designated ports are capable of populating the MAC table.

In the example, switch S1 has both sets of ports for its two trunk links configured as designated ports. Switch S2 also has a designated port configured on the trunk link going toward switch S3.

Non-designated Port

The non-designated port is a switch port that is blocked, so it is not forwarding data frames and not populating the MAC address table with source addresses. A non-designated port is not a root port or a designated port. For some variants of STP, the non-designated port is called an alternate port.

In the example, switch S3 has the only non-designated ports in the topology. The non-designated ports prevent the loop from occurring.

Disabled Port

The disabled port is a switch port that is administratively shut down. A disabled port does not function in the spanning-tree process. There are no disabled ports in the example.


Port Roles

The STA determines which port role is assigned to each switch port.

When determining the root port on a switch, the switch compares the path costs on all switch ports participating in the spanning tree. The switch port with the lowest overall path cost to the root is automatically assigned the root port role because it is closest to the root bridge. In a network topology, all switches that are using spanning tree, except for the root bridge, have a single root port defined.

When there are two switch ports that have the same path cost to the root bridge and both are the lowest path costs on the switch, the switch needs to determine which switch port is the root port. The switch uses the customizable port priority value, or the lowest port ID if both port priority values are the same.

The port ID is the interface ID of the switch port. For example, the figure shows four switches. Ports F0/1 and F0/2 on switch S2 have the same path cost value back to the root bridge. However, port F0/1 on switch S2 is the preferred port because it has a lower port ID value.

The port ID is appended to the port priority. For example, switch port F0/1 has a default port priority value of 128.1, where 128 is the configurable port priority value, and .1 is the port ID. Switch port F0/2 has a port priority value of 128.2, by default.

Configure Port Priority

You can configure the port priority value using the spanning-tree port-priority value interface configuration mode command. The port priority values range from 0 to 240, in increments of 16. The default port priority value is 128. As with bridge priority, lower port priority values give the port higher priority.

In the example, the port priority for port F0/1 has been set to 112, which is below the default port priority of 128. This ensures that the port is the preferred port when competing with another port for a specific port role.
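
A minimal configuration sketch, assuming the port priority is being set on port F0/1 (the switch name is illustrative):

Switch(config)# interface f0/1
Switch(config-if)# spanning-tree port-priority 112
Switch(config-if)# end
Switch# show spanning-tree interface f0/1
! The priority.port ID value for the interface should now appear as 112.1 instead of 128.1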

When the switch decides to use one port over another for the root port, the other is configured as a non-designated port to prevent a loop from occurring.


Port States

STP determines the logical loop-free path throughout the broadcast domain. The spanning tree is determined through the information learned by the exchange of the BPDU frames between the interconnected switches. To facilitate the learning of the logical spanning tree, each switch port transitions through five possible port states, governed by three BPDU timers.

The spanning tree is determined immediately after a switch is finished booting up. If a switch port were to transition directly from the blocking to the forwarding state, the port could temporarily create a data loop if the switch was not aware of all topology information at the time. For this reason, STP introduces five port states. The table summarizes what each port state does. The following provides some additional information on how the port states ensure that no loops are created during the creation of the logical spanning tree.

* Blocking - The port is a non-designated port and does not participate in frame forwarding. The port receives BPDU frames to determine the location and root ID of the root bridge switch and what port roles each switch port should assume in the final active STP topology.
* Listening - STP has determined that the port can participate in frame forwarding according to the BPDU frames that the switch has received thus far. At this point, the switch port is not only receiving BPDU frames, it is also transmitting its own BPDU frames and informing adjacent switches that the switch port is preparing to participate in the active topology.
* Learning - The port prepares to participate in frame forwarding and begins to populate the MAC address table.
* Forwarding - The port is considered part of the active topology and forwards frames and also sends and receives BPDU frames.
* Disabled - The Layer 2 port does not participate in spanning tree and does not forward frames. The disabled state is set when the switch port is administratively disabled.



BPDU Timers

The amount of time that a port stays in the various port states depends on the BPDU timers. Only the switch in the role of root bridge may send information through the tree to adjust the timers. The following timers determine STP performance and state changes:

* Hello time
* Forward delay
* Maximum age


Hello time - The time between each BPDU frame that is sent on a port. By default, this is 2 seconds, but it can be set between 1 and 10 seconds.

Forward delay - The time spent in the listening and learning state. By default, this is 15 seconds for each state, but it can be set between 4 and 30 seconds.

Maximum age - The max age timer controls the maximum length of time a switch port saves configuration BPDU information. By default, this is 20 seconds, but it can be set between 6 and 40 seconds.
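
On Cisco Catalyst switches, these timers can be tuned per spanning-tree instance in global configuration mode. The following sketch simply restates the default values; as noted above, timers should normally be adjusted only on the root bridge, and changing them is rarely necessary:

Switch(config)# spanning-tree vlan 1 hello-time 2
Switch(config)# spanning-tree vlan 1 forward-time 15
Switch(config)# spanning-tree vlan 1 max-age 20
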
Cisco PortFast Technology

PortFast is a Cisco technology. When a port that is configured as an access port has PortFast enabled, it transitions from blocking to forwarding state immediately, bypassing the typical STP listening and learning states. You can use PortFast on access ports, which are connected to a single workstation or to a server, to allow those devices to connect to the network immediately rather than waiting for spanning tree to converge. If an interface configured with PortFast receives a BPDU frame, spanning tree can place the port in an error-disabled (shut down) state using a feature called BPDU guard. Configuring BPDU guard is beyond the scope of this course.

Note: Cisco PortFast technology can be used to support DHCP. Without PortFast, a PC can send a DHCP request before the port is in the forwarding state, preventing the host from getting a usable IP address and other information. Because PortFast immediately changes the state to forwarding, the DHCP request is not delayed by spanning-tree convergence and the PC can obtain a usable IP address.

For more information on configuring BPDU guard, see:

http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a008009482f.shtml.

Note: Because the purpose of PortFast is to minimize the time that access ports must wait for spanning tree to converge, it should be used only on access ports. If you enable PortFast on a port connecting to another switch, you risk creating a spanning-tree loop.

To configure PortFast on a switch port, enter the spanning-tree portfast interface configuration mode command on each interface that PortFast is to be enabled.

To disable PortFast, enter the no spanning-tree portfast interface configuration mode command on each interface that PortFast is to be disabled.

To verify that PortFast has been enabled for a switch port, use the show running-config privileged EXEC mode command. The absence of the spanning-tree portfast command in the running configuration for an interface indicates that PortFast has not been enabled for that interface. PortFast is disabled on all interfaces by default.
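
A minimal sketch for an access port connected to a single workstation (the interface number is illustrative):

Switch(config)# interface f0/11
Switch(config-if)# switchport mode access
Switch(config-if)# spanning-tree portfast
! IOS typically prints a warning that PortFast should only be enabled on ports
! connected to a single host, never to hubs, switches, or bridges
Switch(config-if)# end
Switch# show running-config interface f0/11
! The spanning-tree portfast line in the output confirms PortFast is enabled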


STP Convergence

Convergence is an important aspect of the spanning-tree process. Convergence is the time it takes for the network to determine which switch is going to assume the role of the root bridge, go through all the different port states, and set all switch ports to their final spanning-tree port roles where all potential loops are eliminated. The convergence process takes time to complete because of the different timers used to coordinate the process.

To understand the convergence process more thoroughly, it has been broken down into three distinct steps:

Step 1. Elect a root bridge

Step 2. Elect root ports

Step 3. Elect designated and non-designated ports

The remainder of this section explores each step in the convergence process.


5.3.1 - STP Convergence
The diagram depicts the three STP convergence steps.

Step 1: Elect a root bridge.
Step 2: Elect the root ports.
Step 3: Elect the designated and non-designated ports.


5.3.2 Step 1. Electing a Root Bridge

The first step of the spanning-tree convergence process is to elect a root bridge. The root bridge is the basis for all spanning-tree path cost calculations and ultimately leads to the assignment of the different port roles used to prevent loops from occurring.

A root bridge election is triggered after a switch has finished booting up, or when a path failure has been detected on a network. Initially, all switch ports are configured for the blocking state, which by default lasts 20 seconds. This is done to prevent a loop from occurring before STP has had time to calculate the best root paths and configure all switch ports to their specific roles. While the switch ports are in a blocking state, they are still able to send and receive BPDU frames so that the spanning-tree root election can proceed. Spanning tree supports a maximum network diameter of seven switch hops from end to end. This allows the entire root bridge election process to occur within 14 seconds, which is less than the time the switch ports spend in the blocking state.

Immediately after the switches have finished booting up, they start sending BPDU frames advertising their BID in an attempt to become the root bridge. Initially, all switches in the network assume that they are the root bridge for the broadcast domain. The flood of BPDU frames on the network have the root ID field matching the BID field, indicating that each switch considers itself the root bridge. These BPDU frames are sent every 2 seconds based on the default hello timer value.

As each switch receives BPDU frames from its neighboring switches, it compares the root ID from the received BPDU frame with the root ID configured locally. If the root ID from the received BPDU frame is lower than the root ID the switch currently has, the root ID field is updated, indicating the new best candidate for the root bridge role.

After the root ID field is updated on a switch, the switch then incorporates the new root ID in all future BPDU frame transmissions. This ensures that the lowest root ID is always conveyed to all other adjacent switches in the network. The root bridge election ends once the lowest bridge ID populates the root ID field of all switches in the broadcast domain.

Even though the root bridge election process has completed, the switches continue to forward their BPDU frames advertising the root ID of the root bridge every 2 seconds. Each switch is configured with a max age timer that determines how long a switch retains the current BPDU configuration in the event it stops receiving updates from its neighboring switches. By default, the max age timer is set to 20 seconds. Therefore, if a switch fails to receive 10 consecutive BPDU frames from one of its neighbors, the switch assumes that a logical path in the spanning tree has failed and that the BPDU information is no longer valid. This triggers another spanning-tree root bridge election.

As you review how STP elects a root bridge, recall that the root bridge election process occurs with all switches sending and receiving BPDU frames simultaneously. Performing the election process simultaneously allows the switches to determine which switch is going to become the root bridge much faster.


Step 2. Electing Root Ports

Every switch in a spanning-tree topology, except for the root bridge, has a single root port defined. The root port is the switch port with the lowest path cost to the root bridge. Normally, path cost alone determines which switch port becomes the root port. However, additional port characteristics determine the root port when two or more ports on the same switch have the same path cost to the root. This can happen when redundant links are used to uplink one switch to another switch and an EtherChannel configuration is not used. Recall that Cisco EtherChannel technology allows you to configure multiple physical Ethernet links as one logical link.

Switch ports with equivalent path costs to the root use the configurable port priority value first, and then the port ID to break a tie. When a switch chooses one equal path cost port as the root port over another, the losing port is configured as a non-designated port to avoid a loop.

The process of determining which port becomes a root port happens during the root bridge election BPDU exchange. Path costs are updated immediately when BPDU frames arrive indicating a new root ID or redundant path. At the time the path cost is updated, the switch enters decision mode to determine if port configurations need to be updated. The port role decisions do not wait until all switches settle on which switch is going to be the final root bridge. As a result, the port role for a given switch port may change multiple times during convergence, until it finally settles on its final port role after the root ID changes for the last time.
Step 3. Electing Designated Ports and Non-Designated Ports

After a switch determines which of its ports is the root port, the remaining ports must be configured as either a designated port (DP) or a non-designated port (non-DP) to finish creating the logical loop-free spanning tree.

Each segment in a switched network can have only one designated port. When two non-root port switch ports are connected on the same LAN segment, a competition for port roles occurs. The two switches exchange BPDU frames to sort out which switch port is designated and which one is non-designated.

Generally, when a switch port is configured as a designated port, the decision is based on the BID. However, keep in mind that the first priority is the lowest path cost to the root bridge; the BID of the sender is used only when the path costs are equal.

When two switches exchange their BPDU frames, they examine the sending BID of the received BPDU frame to see if it is lower than its own. The switch with the lower BID wins the competition and its port is configured in the designated role. The losing switch configures its switch port to be non-designated and, therefore, in the blocking state to prevent the loop from occurring.

The process of determining the port roles happens concurrently with the root bridge election and root port designation. As a result, the designated and non-designated roles may change multiple times during the convergence process until the final root bridge has been determined. The entire process of electing the root bridge, determining the root ports, and determining the designated and non-designated ports happens within the 20-second blocking port state. This convergence time is based on the 2-second hello timer for BPDU frame transmission and the seven-switch diameter supported by STP. The max age delay of 20 seconds provides enough time for the seven-switch diameter with the 2-second hello timer between BPDU frame transmissions.

STP Topology Change Notification Process

A switch considers that it has detected a topology change either when a port that was forwarding goes down (to blocking, for instance) or when a port transitions to forwarding and the switch has a designated port. When a change is detected, the switch notifies the root bridge of the spanning tree. The root bridge then broadcasts the information to the whole network.

In normal STP operation, a switch keeps receiving configuration BPDU frames from the root bridge on its root port. However, it never sends out a BPDU toward the root bridge. To achieve that, a special BPDU called the topology change notification (TCN) BPDU was introduced. When a switch needs to signal a topology change, it starts to send TCNs on its root port. The TCN is a very simple BPDU that contains no information and is sent out at the hello time interval. The receiving switch is called the designated bridge and it acknowledges the TCN by immediately sending back a normal BPDU with the topology change acknowledgement (TCA) bit set. This exchange continues until the root bridge responds.

For example, in the figure switch S2 experiences a topology change. It sends a TCN to its designated bridge, which in this case is switch D1. Switch D1 receives the TCN and acknowledges it back to switch S2 with a TCA. Switch D1 then generates its own TCN and forwards it to its designated bridge, which in this case is the root bridge.

Broadcast Notification

Once the root bridge is aware that there has been a topology change event in the network, it starts to send out its configuration BPDUs with the topology change (TC) bit set. These BPDUs are relayed by every switch in the network with this bit set. As a result, all switches become aware of the topology change and can reduce their MAC address table aging time to the forward delay time. Switches receive topology change BPDUs on both forwarding and blocking ports.

The TC bit is set by the root for a period of max age + forward delay seconds, which is 20+15=35 seconds by default.


5.4.1 Cisco and STP Variants

Cisco Proprietary STP Variants

Per-VLAN spanning tree protocol (PVST) - Maintains a spanning-tree instance for each VLAN configured in the network. It uses the Cisco proprietary ISL trunking protocol that allows a VLAN trunk to be forwarding for some VLANs while blocking for other VLANs. Because PVST treats each VLAN as a separate network, it can load balance traffic at Layer 2 by forwarding some VLANs on one trunk and other VLANs on another trunk without causing a loop. For PVST, Cisco developed a number of proprietary extensions to the original IEEE 802.1D STP, such as BackboneFast, UplinkFast, and PortFast. These Cisco STP extensions are not covered in this course. To learn more about these extensions, visit: http://www.cisco.com/en/US/docs/switches/lan/catalyst4000/7.4/configuration/guide/stp_enha.html.

Per-VLAN spanning tree protocol plus (PVST+) - Cisco developed PVST+ to provide support for IEEE 802.1Q trunking. PVST+ provides the same functionality as PVST, including the Cisco proprietary STP extensions. PVST+ is not supported on non-Cisco devices. PVST+ includes the PortFast enhancement called BPDU guard, and root guard. To learn more about BPDU guard, visit: http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a008009482f.shtml.

To learn more about root guard, visit: http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a00800ae96b.shtml.

Rapid per-VLAN spanning tree protocol (rapid PVST+) - Based on the IEEE 802.1w standard and has a faster convergence than STP (standard 802.1D). Rapid PVST+ includes Cisco-proprietary extensions such as BackboneFast, UplinkFast, and PortFast.

IEEE Standard STP Variants

Rapid spanning tree protocol (RSTP) - First introduced in 2001 as IEEE 802.1w, an evolution of STP (the 802.1D standard). It provides faster spanning-tree convergence after a topology change. RSTP incorporates functionality similar to the Cisco-proprietary STP extensions BackboneFast, UplinkFast, and PortFast into the public standard. As of 2004, the IEEE has incorporated RSTP into 802.1D, identifying the specification as IEEE 802.1D-2004. So when you hear STP, think RSTP. You will learn more about RSTP later in this section.

Multiple STP (MSTP) - Enables multiple VLANs to be mapped to the same spanning-tree instance, reducing the number of instances needed to support a large number of VLANs. MSTP was inspired by the Cisco-proprietary Multiple Instances STP (MISTP) and is an evolution of STP and RSTP. It was introduced in IEEE 802.1s as an amendment to 802.1Q, 1998 edition. Standard IEEE 802.1Q-2003 now includes MSTP. MSTP provides for multiple forwarding paths for data traffic and enables load balancing. A discussion of MSTP is beyond the scope of this course. To learn more about MSTP, visit: http://www.cisco.com/en/US/docs/switches/lan/catalyst2950/software/release/12.1_19_ea1/configuration/guide/swmstp.html.


5.4.1 - Cisco and STP Variants
The diagram depicts Cisco and other STP variants.
Cisco Proprietary STP Variants.

PVST:
Uses the Cisco proprietary ISL trunking protocol.
Each VLAN has an instance of spanning tree.
Ability to load balance traffic at Layer 2.
Includes the BackboneFast, UplinkFast, and PortFast extensions.
PVST+:
Supports ISL and IEEE 802.1Q trunking.
Supports Cisco proprietary STP extensions.
Adds BPDU guard and root guard enhancements.
Rapid PVST+:
Based on the IEEE 802.1w standard.
Has faster convergence than 802.1D.

IEEE Standard STP Variants.

RSTP:
Introduced in 2001 as IEEE 802.1w, it provides faster convergence than 802.1D.
Implements generic versions of the Cisco proprietary STP extensions.
IEEE has incorporated RSTP into 802.1D, identifying the specification as IEEE 802.1D-2004.

MSTP:
Multiple VLANs can be mapped to the same spanning-tree instance.
Inspired by the Cisco Multiple Instances Spanning Tree Protocol (MISTP).
IEEE 802.1Q-2003 now includes MSTP.


5.4.2 PVST+


PVST+

Cisco developed PVST+ so that a network can run an STP instance for each VLAN in the network. With PVST+, more than one trunk can block for a VLAN and load sharing can be implemented. However, implementing PVST+ means that all switches in the network are engaged in converging the network, and the switch ports have to accommodate the additional bandwidth used for each PVST+ instance to send its own BPDUs.

In a Cisco PVST+ environment, you can tune the spanning-tree parameters so that half of the VLANs forward on each uplink trunk. In the figure, port F0/3 on switch S2 is the forwarding port for VLAN 20, and F0/2 on switch S2 is the forwarding port for VLAN 10. This is accomplished by configuring one switch to be elected the root bridge for half of the total number of VLANs in the network, and a second switch to be elected the root bridge for the other half of the VLANs. In the figure, switch S3 is the root bridge for VLAN 20, and switch S1 is the root bridge for VLAN 10. Creating different STP root switches per VLAN creates a more redundant network.
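
A sketch of the root bridge placement described above, using the root primary and root secondary commands covered earlier (switch and VLAN numbers follow the figure):

S1(config)# spanning-tree vlan 10 root primary
S1(config)# spanning-tree vlan 20 root secondary

S3(config)# spanning-tree vlan 20 root primary
S3(config)# spanning-tree vlan 10 root secondary

! Each switch is the root for one VLAN and the backup root for the other,
! so each uplink trunk forwards traffic for roughly half of the VLANs.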


What is RSTP?

RSTP (IEEE 802.1w) is an evolution of the 802.1D standard. The 802.1w STP terminology remains primarily the same as the IEEE 802.1D STP terminology. Most parameters have been left unchanged, so users familiar with STP can rapidly configure the new protocol.

In the figure, a network shows an example of RSTP. Switch S1 is the root bridge with two designated ports in a forwarding state. RSTP supports a new port type. Port F0/3 on switch S2 is an alternate port in discarding state. Notice that there are no blocking ports. RSTP does not have a blocking port state. RSTP defines port states as discarding, learning, or forwarding. You will learn more about port types and states later in the chapter.

RSTP Characteristics

RSTP speeds the recalculation of the spanning tree when the Layer 2 network topology changes. RSTP can achieve much faster convergence in a properly configured network, sometimes in as little as a few hundred milliseconds. RSTP redefines the types of ports and their states. If a port is configured to be an alternate or a backup port, it can immediately change to a forwarding state without waiting for the network to converge. The following briefly describes RSTP characteristics:

* RSTP is the preferred protocol for preventing Layer 2 loops in a switched network environment. Many of the differences were informed by Cisco-proprietary enhancements to 802.1D. These enhancements, such as BPDUs carrying and sending information about port roles only to neighboring switches, require no additional configuration and generally perform better than the earlier Cisco-proprietary versions. They are now transparent and integrated in the protocol's operation.
* Cisco-proprietary enhancements to 802.1D, such as UplinkFast and BackboneFast, are not compatible with RSTP.
* RSTP (802.1w) supersedes STP (802.1D) while retaining backward compatibility. Much of the STP terminology remains, and most parameters are unchanged. In addition, 802.1w is capable of reverting back to 802.1D to interoperate with legacy switches on a per-port basis. For example, the RSTP spanning-tree algorithm elects a root bridge in exactly the same way as 802.1D.
* RSTP keeps the same BPDU format as IEEE 802.1D, except that the version field is set to 2 to indicate RSTP, and the flags field uses all 8 bits. The RSTP BPDU is discussed later.
* RSTP is able to actively confirm that a port can safely transition to the forwarding state without having to rely on any timer configuration.
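
On Cisco Catalyst switches, the rapid PVST+ implementation of RSTP is generally enabled with a single global command; this is only a sketch, and platform support should be verified:

Switch(config)# spanning-tree mode rapid-pvst
Switch(config)# end
Switch# show spanning-tree summary
! The summary output indicates which spanning-tree mode the switch is running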


Edge Ports

An RSTP edge port is a switch port that is never intended to be connected to another switch device. It immediately transitions to the forwarding state when enabled.

The edge port concept is well known to Cisco spanning-tree users, because it corresponds to the PortFast feature in which all ports directly connected to end stations anticipate that no switch device is connected to them. The PortFast ports immediately transition to the STP forwarding state, thereby skipping the time-consuming listening and learning stages. Neither edge ports nor PortFast-enabled ports generate topology changes when the port transitions to a disabled or enabled status.

Unlike PortFast, an RSTP edge port that receives a BPDU loses its edge port status immediately and becomes a normal spanning-tree port.

The Cisco RSTP implementation maintains the PortFast keyword, using the spanning-tree portfast command for edge port configuration, which makes an overall network transition to RSTP more seamless. Configuring an edge port to be attached to another switch can have negative implications for RSTP when it is in the sync state, because a temporary loop can result, possibly delaying the convergence of RSTP due to BPDU contention with loop traffic.

Link Types

The link type provides a categorization for each port participating in RSTP. The link type can predetermine the active role that the port plays as it stands by for immediate transition to forwarding state if certain conditions are met. These conditions are different for edge ports and non-edge ports. Non-edge ports are categorized into two link types, point-to-point and shared. The link type is automatically determined, but can be overwritten with an explicit port configuration.

Edge ports, the equivalent of PortFast-enabled ports, and point-to-point links are candidates for rapid transition to a forwarding state. However, before the link type parameter is considered, RSTP must determine the port role. You will learn about port roles next, but for now know that:

* Root ports do not use the link type parameter. Root ports are able to make a rapid transition to the forwarding state as soon as the port is in sync.
* Alternate and backup ports do not use the link type parameter in most cases.
* Designated ports make the most use of the link type parameter. Rapid transition to the forwarding state for the designated port occurs only if the link type parameter indicates a point-to-point link.


Point-to-Point Link Type:
This link type is attached to switch ports that are operating in full-duplex mode.
This link connects to a single switch device.

Shared Link Type:
This link type is attached to a port that is operating in half-duplex mode.
The port is connected to a shared media where multiple switches might exist.
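
Because the link type is normally derived from the duplex setting, a full-duplex port is treated as point-to-point automatically. When that detection is not appropriate, the link type can usually be overridden per interface, as in this sketch (the interface is illustrative):

Switch(config)# interface f0/1
Switch(config-if)# spanning-tree link-type point-to-point
! Treats the segment as point-to-point so a designated port can transition rapidly;
! use the shared keyword instead if multiple bridges share the segment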


5.4.6 RSTP Port States and Port Roles


RSTP Port States

RSTP provides rapid convergence following a failure or during re-establishment of a switch, switch port, or link. An RSTP topology change causes a transition in the appropriate switch ports to the forwarding state through either explicit handshakes or a proposal and agreement process and synchronization. You will learn more about the proposal and agreement process later.

With RSTP, the role of a port is separated from the state of a port. For example, a designated port could be in the discarding state temporarily, even though its final state is to be forwarding. The figure shows the three possible RSTP port states: discarding, learning, and forwarding.


The table in the figure describes the characteristics of each of the three RSTP port states. In all port states, a port accepts and processes BPDU frames.


The table in the figure compares STP and RSTP port states. Recall how the ports in the STP blocking, listening and disabled port states do not forward any frames. These port states have been merged into the RSTP discarding port state.


5.4.6 - RSTP Port States and Port Roles
The diagram depicts the three RSTP port states: discarding, learning, and forwarding.

Discarding:
The discarding state occurs in both a stable active topology and during topology synchronization and changes. The discarding state prevents the forwarding of data frames, thus breaking the continuity of a Layer 2 loop.

Learning:
The learning state occurs in both a stable active topology and during topology synchronization and changes. The learning state accepts data frames to populate the MAC table to limit the flooding of unknown unicast frames.

Forwarding:
The forwarding state occurs only in stable active topologies. The forwarding switch ports determine the topology. After a topology change or during synchronization, data frames are forwarded only after a proposal and agreement process.

Comparing STP and RSTP Port States:

Operational Port State    STP Port State    RSTP Port State
Enabled                   Blocking          Discarding
Enabled                   Listening         Discarding
Enabled                   Learning          Learning
Enabled                   Forwarding        Forwarding
Disabled                  Disabled          Discarding



RSTP Port Roles

The port role defines the ultimate purpose of a switch port and how it handles data frames. Port roles and port states are able to transition independently of each other. Creating the additional port roles allows RSTP to define a standby switch port before a failure or topology change. The alternate port moves to the forwarding state if there is a failure on the designated port for the segment.

5.4.8 Designing an STP for Trouble Avoidance

VTP Pruning

You do not need more than two redundant links between two nodes in a switched network. However, a configuration like the one shown in the figure is common: distribution switches are dual-attached to two core switches, C1 and C2. Users on switches S1 and S2, which connect to the distribution switches, are only in a subset of the VLANs available in the network. In the figure, the users that connect through switch D1 are all in VLAN 20; switch D2 only connects users in VLAN 30. By default, trunks carry all the VLANs defined in the VTP domain. Not only does switch D1 receive unnecessary broadcast and multicast traffic for VLAN 30, it is also blocking one of its ports for VLAN 30. In addition, there are three redundant paths between core switch C1 and core switch C2. This redundancy results in more blocked ports and a higher likelihood of a loop.

Note: Prune any VLAN that you do not need off your trunks.

Manual Pruning

VTP pruning can help, but this feature is not necessary in the core of the network. In this figure, only an access VLAN is used to connect the distribution switches to the core. In this design, only one port is blocked per VLAN. Also, with this design, you can remove all redundant links in just one step if you shut down C1 or C2.
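
Manual pruning is typically done by restricting the allowed VLAN list on each trunk, as in this sketch (interface and VLAN numbers are illustrative):

D1(config)# interface f0/24
D1(config-if)# switchport trunk allowed vlan 20
! Only VLAN 20 is carried on this trunk; all other VLANs are pruned manually
D1(config-if)# interface f0/23
D1(config-if)# switchport trunk allowed vlan remove 30
! An existing allowed list can also be trimmed incrementally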


5.4.8 - Designing an STP for Trouble Avoidance
The diagram depicts the benefits of hierarchical design and pruning VLANs using STP and manual pruning.

Keep STP Even If It Is Unnecessary

Even if you have removed all the blocked ports from the network and do not have any physical redundancy, it is strongly suggested that you do not disable STP.

STP is generally not very processor intensive; packet switching does not involve the CPU in most Cisco switches. Also, the few BPDUs that are sent on each link do not significantly reduce the available bandwidth. However, if a technician makes a connection error on a patch panel and accidentally creates a loop, the network will be negatively impacted. Generally, disabling STP in a switched network is not worth the risk.

Keep Traffic off the Administrative VLAN and Do Not Have a Single VLAN Span the Entire Network

A Cisco switch typically has a single IP address that binds to a VLAN, known as the administrative VLAN. In this VLAN, the switch behaves like a generic IP host. In particular, every broadcast or multicast packet is forwarded to the CPU. A high rate of broadcast or multicast traffic on the administrative VLAN can adversely impact the CPU and its ability to process vital BPDUs. Therefore, keep user traffic off the administrative VLAN.

Until recently, there was no way to remove VLAN 1 from a trunk in a Cisco implementation. VLAN 1 generally serves as an administrative VLAN, where all switches are accessible in the same IP subnet. Though useful, this setup can be dangerous because a bridging loop on VLAN 1 affects all trunks, which can bring down the whole network. Of course, the same problem exists no matter which VLAN you use. Try to segment the bridging domains using high-speed Layer 3 switches.

Note: As of Cisco IOS Software Release 12.1(11b)E, you can remove VLAN 1 from trunks. VLAN 1 still exists, but it blocks traffic, which prevents any loop possibility.


5.4.8 - Designing an STP for Trouble Avoidance
The diagram depicts the final points of this section.

Keep STP even if it is unnecessary.
Do not disable STP.
STP is not very processor-intensive.
The few BPDUs sent on each link do not reduce bandwidth.
A bridged network without STP can go down in a fraction of a second.

Keep traffic off the administrative VLAN.
A high rate of broadcast or multicast traffic on the administrative VLAN adversely affects the ability of a CPU to process vital BPDUs.

Do not have a single VLAN span the entire network.
VLAN 1 serves as an administrative VLAN, where all switches are accessible in the same IP subnet.
A bridging loop on VLAN 1 affects all trunks and can bring down the network.
Segment the bridging domains using high-speed Layer 3 switches.


5.4.9 Troubleshoot STP Operation

Switch or Link Failure
Before you troubleshoot a bridging loop, you need to know at least these items:

* Topology of the bridge network
* Location of the root bridge
* Location of the blocked ports and the redundant links
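
A minimal set of show commands, issued from privileged EXEC mode, can gather this information before troubleshooting begins; the switch name S1 and VLAN 20 are assumed for illustration.

S1# show cdp neighbors
! Maps the physical topology by listing directly connected Cisco devices.
S1# show spanning-tree root
! Identifies the root bridge ID, root cost, and root port for each VLAN.
S1# show spanning-tree blockedports
! Lists the ports that spanning tree is currently blocking.
S1# show spanning-tree vlan 20
! Displays port roles, states, and costs for a single VLAN.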

5.6.1 Summary

Page 1:

Implementing redundancy in a hierarchical network introduces physical loops that result in Layer 2 issues which impact network availability. To prevent problems resulting from physical loops introduced to enhance redundancy, the spanning-tree protocol was developed. The spanning-tree protocol uses the spanning-tree algorithm to compute a loop-free logical topology for a broadcast domain.

The spanning-tree process uses different port states and timers to logically prevent loops by constructing a loop-free topology. The topology is determined in terms of distance from the root bridge; that distance is established through the exchange of BPDUs and the spanning-tree algorithm. In the process, port roles are determined: designated ports, non-designated ports, and root ports.

Using the original IEEE 802.1D spanning-tree protocol involves a convergence time of up to 50 seconds. This time delay is unacceptable in modern switched networks, so the IEEE 802.1w rapid spanning-tree protocol was developed. The per-VLAN Cisco implementation of IEEE 802.1D is called PVST+ and the per-VLAN Cisco implementation of rapid spanning-tree protocol is rapid PVST+. RSTP reduces convergence time to approximately 6 seconds or less.

We discussed point-to-point and shared link types with RSTP, as well as edge ports. We also discussed the new concepts of alternate ports and backup ports used with RSTP.

Rapid PVST+ is the preferred spanning-tree protocol implementation used in a switched network running Cisco Catalyst switches.
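
On a Catalyst switch, enabling rapid PVST+ is a single global configuration command. A minimal sketch, with an assumed switch name:

S1(config)# spanning-tree mode rapid-pvst
! Changes the spanning-tree mode from the default PVST+ to rapid PVST+.
S1(config)# end
S1# show spanning-tree summary
! The summary output should now report that the switch is in rapid-pvst mode.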


5.6.1 - Summary and Review
In this chapter, you have learned:
STP prevents loops from being formed in a hierarchical network that implements redundant links.
STP uses different port states and timers to prevent loops from occurring.
One switch in the network is designated as the root bridge. The root bridge is determined through an election process where BPDU frames are exchanged between neighboring switches in a broadcast domain.
All other switches in the network use the spanning-tree algorithm to determine their switch port roles. Switch ports closest to the root bridge become root ports. The remaining non-root ports compete for designated or non-designated roles.
Because STP convergence can take up to 50 seconds to complete, RSTP and rapid PVST+ were developed.
RSTP reduces the convergence time to a little over 6 seconds.
Rapid PVST+ adds VLAN support to RSTP. Rapid PVST+ is the preferred implementation used on a Cisco switch network.



5.6.1 - Summary and Review
This is a review and is not a quiz. Questions and answers are provided.
Question One. Which of the following statements are true and which are false?
Answers:
A. Ethernet frames do not have a time to live (TTL). True.

B. Broadcast frames are forwarded out all switch ports, except the originating port. True.

C. In a hierarchical design, redundancy is achieved at the Distribution Layer and Core Layer through additional hardware and alternate paths through the additional hardware. True.

D. Layer 2 loops result in low CPU load on all switches caught in the loop. False.

E. A broadcast storm results when there are so many broadcast frames caught in a Layer 2 loop that all available bandwidth is consumed. True.

F. Most upper layer protocols are designed to recognize or cope with duplicate transmissions. False.

G. Layer 2 loops arise as a result of multiple paths, and STP can be used to block these loops. True.

VLAN Trunking Protocol (VTP)

What is VTP?

VTP allows a network manager to configure a switch so that it will propagate VLAN configurations to other switches in the network. The switch can be configured in the role of a VTP server or a VTP client. VTP only learns about normal-range VLANs (VLAN IDs 1 to 1005). Extended-range VLANs (IDs greater than 1005) are not supported by VTP.

Click Play in the figure to view an animation of an overview of how VTP works.

VTP Overview

VTP allows a network manager to make changes on a switch that is configured as a VTP server. Basically, the VTP server distributes and synchronizes VLAN information to VTP-enabled switches throughout the switched network, which minimizes the problems caused by incorrect configurations and configuration inconsistencies. VTP stores VLAN configurations in the VLAN database, called vlan.dat.


In the figure, a trunk link is added between switch S1, a VTP server, and S2, a VTP client. After a trunk is established between the two switches, VTP advertisements are exchanged between them. Both the server and the client use advertisements from one another to ensure that each has an accurate record of VLAN information. VTP advertisements are not exchanged if the trunk between the switches is inactive. The details of how VTP works are explained in the rest of this chapter.



Benefits of VTP

You have learned that VTP maintains VLAN configuration consistency by managing the addition, deletion, and renaming of VLANs across multiple Cisco switches in a network. VTP offers a number of benefits for network managers, as shown in the figure.


VTP Components

There are a number of key components that you need to be familiar with when learning about VTP. Here is a brief description of the components, which will be further explained as you go through the chapter.

* VTP Domain-Consists of one or more interconnected switches. All switches in a domain share VLAN configuration details using VTP advertisements. A router or Layer 3 switch defines the boundary of each domain.
* VTP Advertisements-VTP uses a hierarchy of advertisements to distribute and synchronize VLAN configurations across the network.
* VTP Modes- A switch can be configured in one of three modes: server, client, or transparent.
* VTP Server-VTP servers advertise the VTP domain VLAN information to other VTP-enabled switches in the same VTP domain. VTP servers store the VLAN information for the entire domain in NVRAM. The server is where VLANs can be created, deleted, or renamed for the domain.
* VTP Client-VTP clients function the same way as VTP servers, but you cannot create, change, or delete VLANs on a VTP client. A VTP client only stores the VLAN information for the entire domain while the switch is on. A switch reset deletes the VLAN information. You must configure VTP client mode on a switch.
* VTP Transparent-Transparent switches forward VTP advertisements to VTP clients and VTP servers. Transparent switches do not participate in VTP. VLANs that are created, renamed, or deleted on transparent switches are local to that switch only.
* VTP Pruning-VTP pruning increases network available bandwidth by restricting flooded traffic to those trunk links that the traffic must use to reach the destination devices. Without VTP pruning, a switch floods broadcast, multicast, and unknown unicast traffic across all trunk links within a VTP domain even though receiving switches might discard them.


4.2 VTP Operation
4.2.1 Default VTP Configuration

In CCNA Exploration: Network Fundamentals, you learned that a Cisco switch comes from the factory with default settings. The default VTP settings are shown in the figure. The benefit of VTP is that it automatically distributes and synchronizes domain and VLAN configurations across the network. However, this benefit comes at a cost: you can only add switches that are in their default VTP configuration. If you add a VTP-enabled switch that is configured with settings that supersede the existing network VTP configuration, changes that are difficult to fix are automatically propagated throughout the network. So make sure that you only add switches that are in their default VTP configuration. You will learn how to add switches to a VTP network later in this chapter.

VTP Versions

VTP has three versions: 1, 2, and 3. Only one VTP version is allowed in a VTP domain. The default is VTP version 1. A Cisco 2960 switch supports VTP version 2, but it is disabled by default. A discussion of VTP versions is beyond the scope of this course.

The following briefly describes the parameters displayed by the show vtp status command:

* VTP Version-Displays the VTP version the switch is capable of running. By default, the switch implements version 1, but can be set to version 2.
* Configuration Revision-Current configuration revision number on this switch. You will learn more about revision numbers in this chapter.
* Maximum VLANs Supported Locally-Maximum number of VLANs supported locally.
* Number of Existing VLANs-Number of existing VLANs.
* VTP Operating Mode-Can be server, client, or transparent.
* VTP Domain Name-Name that identifies the administrative domain for the switch.
* VTP Pruning Mode-Displays whether pruning is enabled or disabled.
* VTP V2 Mode-Displays if VTP version 2 mode is enabled. VTP version 2 is disabled by default.
* VTP Traps Generation-Displays whether VTP traps are sent to a network management station.
* MD5 Digest-A 16-byte checksum of the VTP configuration.
* Configuration Last Modified-Date and time of the last configuration modification. Displays the IP address of the switch that caused the configuration change to the database.
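
These parameters are displayed with the show vtp status command in privileged EXEC mode. The abridged output below is only illustrative of a switch still in its factory-default VTP state; exact values and formatting vary by platform and IOS release.

S1# show vtp status
VTP Version                     : 2
Configuration Revision          : 0
Maximum VLANs supported locally : 255
Number of existing VLANs        : 5
VTP Operating Mode              : Server
VTP Domain Name                 :
VTP Pruning Mode                : Disabled
VTP V2 Mode                     : Disabled
VTP Traps Generation            : Disabled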

4.2.2 VTP Domains

VTP Domains

VTP allows you to separate your network into smaller management domains to help reduce VLAN management. An additional benefit of configuring VTP domains is that it limits the extent to which configuration changes are propagated in the network if an error occurs. The figure shows a network with two VTP domains, cisco2 and cisco3. In this chapter, the three switches, S1, S2, and S3, will be configured for VTP.

A VTP domain consists of one switch or several interconnected switches sharing the same VTP domain name. Later in this chapter, you will learn how VTP-enabled switches acquire a common domain name. A switch can be a member of only one VTP domain at a time. Until the VTP domain name is specified you cannot create or modify VLANs on a VTP server, and VLAN information is not propagated over the network.

VTP Domain Name Propagation

For a VTP server or client switch to participate in a VTP-enabled network, it must be a part of the same domain. When switches are in different VTP domains, they do not exchange VTP messages. A VTP server propagates the VTP domain name to all switches for you. Domain name propagation uses three VTP components: servers, clients, and advertisements.

The network in the figure shows three switches, S1, S2, and S3, in their default VTP configuration. They are configured as VTP servers. VTP domain names have not been configured on any of the switches.

The network manager configures the VTP domain name as cisco1 on the VTP server switch S1. The VTP server sends out a VTP advertisement with the new domain name embedded inside. The S2 and S3 VTP server switches update their VTP configuration to the new domain name.

Note: Cisco recommends that access to the domain name configuration functions be protected by a password. The details of password configuration will be presented later in the course.
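
As a small sketch of that propagation, the domain name (and, as recommended, a password) only needs to be typed on the VTP server; S2 and S3 then learn the domain name from the first summary advertisement. The domain name cisco1 comes from this example, while the password value is an assumption.

S1(config)# vtp domain cisco1
! S1 advertises the new domain name; S2 and S3 adopt it automatically.
S1(config)# vtp password cisco123
! The password, however, is not advertised and must be set on each switch.
S2(config)# vtp password cisco123
S3(config)# vtp password cisco123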


4.2.3 VTP Advertising

VTP Frame Structure

VTP advertisements (or messages) distribute VTP domain name and VLAN configuration changes to VTP-enabled switches. In this topic, you will learn about the VTP frame structure and how the three types of advertisements enable VTP to distribute and synchronize VLAN configurations throughout the network.

VTP Frame Encapsulation

A VTP frame consists of a header field and a message field. The VTP information is inserted into the data field of an Ethernet frame. The Ethernet frame is then encapsulated as an 802.1Q trunk frame (or ISL frame). Each switch in the domain sends periodic advertisements out each trunk port to a reserved multicast address. These advertisements are received by neighboring switches, which update their VTP and VLAN configurations as necessary.

VTP Frame Details

Keep in mind that a VTP frame encapsulated as an 802.1Q frame is not static. The contents of the VTP message determine which fields are present. The receiving VTP-enabled switch looks for specific fields and values in the 802.1Q frame to know what to process. The following key fields are present when a VTP frame is encapsulated as an 802.1Q frame:

Destination MAC address- This address is set to 01-00-0C-CC-CC-CC, which is the reserved multicast address for all VTP messages.

LLC field- Logical link control (LLC) field contains a destination service access point (DSAP) and a source service access point (SSAP) set to the value of AA.

SNAP field- Subnetwork Access Protocol (SNAP) field has an OUI set to AAAA and type set to 2003.

VTP header field- The contents vary depending on the VTP message type (summary, subset, or request), but the header always contains these VTP fields:

* Domain name- Identifies the administrative domain for the switch.
* Domain name length- Length of the domain name.
* Version- Set to either VTP 1, VTP 2, or VTP 3. The Cisco 2960 switch only supports VTP 1 and VTP 2.
* Configuration revision number- The current configuration revision number on this switch.


VTP message field- Varies depending on the message type.

VTP frames contain the following fixed-length global domain information:

* VTP domain name
* Identity of the switch sending the message, and the time it was sent
* MD5 digest VLAN configuration, including maximum transmission unit (MTU) size for each VLAN
* Frame format: ISL or 802.1Q


VTP frames contain the following information for each configured VLAN:

* VLAN IDs (IEEE 802.1Q)
* VLAN name
* VLAN type
* VLAN state
* Additional VLAN configuration information specific to the VLAN type


Note: A VTP frame is encapsulated in an 802.1Q Ethernet frame. The entire 802.1Q Ethernet frame is the VTP advertisement often called a VTP message. Often the terms frame, advertisement, and message are used interchangeably.

VTP Revision Number

The configuration revision number is a 32-bit number that indicates the level of revision for a VTP frame. The default configuration number for a switch is zero. Each time a VLAN is added or removed, the configuration revision number is incremented. Each VTP device tracks the VTP configuration revision number that is assigned to it.

Note: A VTP domain name change does not increment the revision number. Instead, it resets the revision number to zero.

The revision number plays an important and complex role in enabling VTP to distribute and synchronize VTP domain and VLAN configuration information. To comprehend what the revision number does, you first need to learn about the three types of VTP advertisements and the three VTP modes.
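
One way to observe the behavior described above, assuming a switch named S1 that is acting as a VTP server (the VLAN and domain values are purely illustrative):

S1(config)# vlan 10
S1(config-vlan)# exit
! Adding or removing a VLAN increments the configuration revision number.
S1(config)# vtp domain lab-test
! Changing the VTP domain name resets the revision number to zero.
S1(config)# end
S1# show vtp status | include Revision
! The filtered output shows only the Configuration Revision line.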



VTP Advertisements

Summary Advertisements
The summary advertisement contains the VTP domain name, the current revision number, and other VTP configuration details.

Summary advertisements are sent:
* Every 5 minutes by a VTP server or client to inform neighboring VTP-enabled switches of the current VTP configuration revision number for its VTP domain
* Immediately after a configuration change has been made


Subset Advertisements
A subset advertisement contains VLAN information. Changes that trigger the subset advertisement include:
* Creating or deleting a VLAN
* Suspending or activating a VLAN
* Changing the name of a VLAN
* Changing the MTU of a VLAN
It may take multiple subset advertisements to fully update the VLAN information.


Request Advertisements
When a request advertisement is sent to a VTP server in the same VTP domain, the VTP server responds by sending a summary advertisement and then a subset advertisement.

Request advertisements are sent if:
* The VTP domain name has been changed
* The switch receives a summary advertisement with a higher configuration revision number than its own
* A subset advertisement message is missed for some reason
* The switch has been reset



4.2.4 VTP Modes

VTP Modes Overview

A Cisco switch running Cisco IOS software can be configured in server, client, or transparent mode. These modes differ in how they are used to manage and advertise VTP domains and VLANs.

Server Mode

In server mode, you can create, modify, and delete VLANs for the entire VTP domain. VTP server mode is the default mode for a Cisco switch. VTP servers advertise their VLAN configurations to other switches in the same VTP domain and synchronize their VLAN configurations with other switches based on advertisements received over trunk links. VTP servers keep track of updates through a configuration revision number. Other switches in the same VTP domain compare their configuration revision number with the revision number received from a VTP server to see if they need to synchronize their VLAN database.

Client Mode

If a switch is in client mode, you cannot create, change, or delete VLANs. In addition, the VLAN configuration information that a VTP client switch receives from a VTP server switch is stored in a VLAN database, not in NVRAM. Consequently, VTP clients require less memory than VTP servers. When a VTP client is shut down and restarted, it sends a request advertisement to a VTP server for updated VLAN configuration information.

Switches configured as VTP clients are more typically found in larger networks, because in a network consisting of many hundreds of switches, it is harder to coordinate network upgrades. Often there are many network administrators working at different times of the day. Having only a few switches that are physically able to maintain VLAN configurations makes it easier to control VLAN upgrades and to track which network administrators performed them.

For large networks, having client switches is also more cost-effective. By default, all switches are configured to be VTP servers. This configuration is suitable for small scale networks in which the size of the VLAN information is small and the information is easily stored in NVRAM on the switches. In a large network of many hundreds of switches, the network administrator must decide if the cost of purchasing switches with enough NVRAM to store the duplicate VLAN information is too much. A cost-conscious network administrator could choose to configure a few well-equipped switches as VTP servers, and then use switches with less memory as VTP clients. Although a discussion of network redundancy is beyond the scope of this course, know that the number of VTP servers should be chosen to provide the degree of redundancy that is desired in the network.

Transparent Mode

Switches configured in transparent mode forward VTP advertisements that they receive on trunk ports to other switches in the network. VTP transparent mode switches do not advertise their VLAN configuration and do not synchronize their VLAN configuration with any other switch. Configure a switch in VTP transparent mode when you have VLAN configurations that have local significance and should not be shared with the rest of the network.

In transparent mode, VLAN configurations are saved in NVRAM (but not advertised to other switches), so the configuration is available after a switch reload. This means that when a VTP transparent mode switch reboots, it does not revert to a default VTP server mode, but remains in VTP transparent mode.
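
Each mode is set with a single global configuration command. A minimal sketch, with assumed switch names matching the chapter topology:

S1(config)# vtp mode server
! Default mode; S1 can create, modify, and delete VLANs for the domain.
S2(config)# vtp mode client
! S2 learns VLANs from the server and cannot change them locally.
S3(config)# vtp mode transparent
! S3 keeps only locally created VLANs and simply forwards VTP advertisements.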



4.2.5 VTP Pruning

VTP pruning prevents unnecessary flooding of broadcast information from one VLAN across all trunks in a VTP domain. VTP pruning permits switches to negotiate which VLANs are assigned to ports at the other end of a trunk and, hence, prune the VLANs that are not assigned to ports on the remote switch. Pruning is disabled by default. VTP pruning is enabled using the vtp pruning global configuration command. You need to enable pruning on only one VTP server switch in the domain.
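
A minimal sketch of enabling and verifying pruning, with an assumed switch name:

S1(config)# vtp pruning
! Enabling pruning on one VTP server propagates it to the rest of the domain.
S1(config)# end
S1# show vtp status
! The VTP Pruning Mode field should now read Enabled.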


VTP Pruning in Action

Recall that a VLAN creates an isolated broadcast domain. A switch floods broadcast, multicast, and unknown unicast traffic across all trunk links within a VTP domain.


VTP Pruning Enabled

The figure shows a network topology that has switches S1, S2, and S3 configured with VTP pruning. When VTP pruning is enabled on a network, it reconfigures the trunk links based on which ports are configured with which VLANs.


4.3 Configure VTP
4.3.1 Configuring VTP


VTP Configuration Guidelines

Now that you are familiar with the functionality of VTP, you are ready to learn how to configure a Cisco Catalyst switch to use VTP. The topology shows the reference topology for this chapter. VTP will be configured on this topology.

VTP Server Switches

Follow these steps and associated guidelines to ensure that you configure VTP successfully:

* Confirm that all of the switches you are going to configure have been set to their default settings.
* Always reset the configuration revision number before installing a previously configured switch into a VTP domain. Failing to reset the configuration revision number can allow the newly added switch to disrupt the VLAN configuration across the rest of the switches in the VTP domain.
* Configure at least two VTP server switches in your network. Because only server switches can create, delete, and modify VLANs, you should make sure that you have one backup VTP server in case the primary VTP server becomes disabled. If all the switches in the network are configured in VTP client mode, you cannot create new VLANs on the network.
* Configure a VTP domain on the VTP server. Configuring the VTP domain on the first switch enables VTP to start advertising VLAN information. Other switches connected through trunk links receive the VTP domain information automatically through VTP advertisements.
* If there is an existing VTP domain, make sure that you match the name exactly. VTP domain names are case-sensitive.
* If you are configuring a VTP password, ensure that the same password is set on all switches in the domain that need to be able to exchange VTP information. Switches without a password or with the wrong password reject VTP advertisements.
* Ensure that all switches are configured to use the same VTP protocol version. VTP version 1 is not compatible with VTP version 2. By default, Cisco Catalyst 2960 switches run version 1 but are capable of running version 2. When the VTP version is set to version 2, all version 2 capable switches in the domain autoconfigure to use version 2 through the VTP announcement process. Any version 1-only switches cannot participate in the VTP domain after that point.
* Create the VLAN after you have enabled VTP on the VTP server. VLANs created before you enable VTP are removed. Always ensure that trunk ports are configured to interconnect switches in a VTP domain. VTP information is only exchanged on trunk ports.
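
Pulling these guidelines together, a minimal server-side configuration might look like the sketch below. The interface, password, and VLAN values are assumptions for illustration; the domain name cisco1 follows the earlier example.

S1(config)# vtp mode server
S1(config)# vtp domain cisco1
S1(config)# vtp password cisco123
! Configure the trunk that will carry VTP advertisements to the other switches.
S1(config)# interface FastEthernet0/1
S1(config-if)# switchport mode trunk
S1(config-if)# exit
! Create VLANs only after VTP has been enabled and the domain has been set.
S1(config)# vlan 10
S1(config-vlan)# name Faculty
S1(config-vlan)# end
S1# show vtp status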


VTP Client Switches

* As on the VTP server switch, confirm that the default settings are present.
* Configure VTP client mode. Recall that the switch is not in VTP client mode by default. You have to configure this mode.
* Configure trunks. VTP works over trunk links.
* Connect to a VTP server. When you connect to a VTP server or another VTP-enabled switch, it takes a few moments for the various advertisements to make their way back and forth to the VTP server.
* Verify VTP status. Before you begin configuring the access ports, confirm that the revision number and the number of VLANs have been updated.
* Configure access ports. When a switch is in VTP client mode, you cannot add new VLANs. You can only assign access ports to existing VLANs.
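
A matching client-side sketch, with assumed interface and VLAN numbers:

S2(config)# vtp mode client
S2(config)# vtp password cisco123
! The domain name is learned automatically from the server over the trunk.
S2(config)# interface FastEthernet0/1
S2(config-if)# switchport mode trunk
S2(config-if)# exit
S2(config)# interface FastEthernet0/11
S2(config-if)# switchport mode access
! Access ports can only be assigned to VLANs that already exist in the domain.
S2(config-if)# switchport access vlan 10
S2(config-if)# end
S2# show vtp status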


4.3.2 Troubleshooting VTP Configurations

Troubleshooting VTP Connections

You have learned how VTP can be used to simplify managing a VLAN database across multiple switches. In this topic, you will learn about common VTP configuration problems. This information, combined with your VTP configuration skills, will help you when troubleshooting VTP configuration problems.

Incompatible VTP Versions
VTP versions 1 and 2 are incompatible with each other. Modern Cisco Catalyst switches, such as the 2960, support both versions but run VTP version 1 by default. Older switches may support only VTP version 1. Switches that only support version 1 cannot participate in the VTP domain along with version 2 switches. If your network contains switches that support only version 1, you need to ensure the version 2-capable switches operate in version 1 mode.

VTP Password Issues
When using a VTP password to control participation in the VTP domain, ensure that the password is set correctly on all switches in the VTP domain. Forgetting to set a VTP password is a very common problem. If a password is used, it must be configured on each switch in the domain. By default, a Cisco switch does not use a VTP password. The switch does not automatically set the password parameter, unlike other parameters that are set automatically when a VTP advertisement is received.
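
To check for this, the configured password can be compared switch by switch. A minimal sketch, assuming switches S1 and S2 and an illustrative password value:

S1# show vtp password
! Displays the VTP password configured on S1, if any.
S2# show vtp password
! If the passwords differ, or one is missing, set them to match.
S2# configure terminal
S2(config)# vtp password cisco123
S2(config)# end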

Incorrect VTP Domain Name
The VTP domain name is a key parameter that is set on a switch. An incorrectly configured VTP domain name affects VLAN synchronization between switches. As you learned earlier, if a switch receives a VTP advertisement for a different domain, the switch discards the message. If the discarded message contains legitimate configuration information, the switch does not synchronize its VLAN database as expected.

To avoid incorrectly configuring a VTP domain name, only set the VTP domain name on one VTP server switch. All other switches in the same VTP domain will accept and automatically configure their VTP domain name when they receive the first VTP summary advertisement.

Switches Set to VTP Client Mode
It is possible to change the operating mode of all switches to VTP client. By doing so, you lose all ability to create, delete, and manage VLANs within your network environment. Because the VTP client switches do not store the VLAN information in NVRAM, they need to refresh the VLAN information after a reload.

To avoid losing all VLAN configurations in a VTP domain by accidentally reconfiguring the only VTP server in the domain as a VTP client, you can configure a second switch in the same domain as a VTP server. It is not uncommon for small networks that use VTP to have all the switches in VTP server mode. If the network is being managed by a couple of network administrators, it is unlikely that conflicting VLAN configurations will arise.


Incorrect Revision Number
Even after you have configured the switches in your VTP domain correctly, there are other factors that can adversely affect the functionality of VTP. A common one is an incorrect configuration revision number: if a previously used switch, such as S4 in the figure, is added to the domain with a higher configuration revision number than the existing switches, its VLAN database overwrites theirs and the existing VLANs are removed.

The solution to the problem is to reset each switch back to an earlier configuration and then reconfigure the correct VLANs, 10 and 20, on switch S1. To prevent this problem in the first place, reset the configuration revision number on previously configured switches before adding them to a VTP-enabled network. The figure shows the commands needed to reset switch S4 back to the default revision number.
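
The figure itself is not reproduced here, but one commonly used way to reset the revision number on a switch such as S4 is to toggle it through VTP transparent mode (changing the domain name to a temporary value and back also works). A sketch of the first approach; the actual commands in the figure may differ:

S4# configure terminal
S4(config)# vtp mode transparent
! Entering transparent mode resets the configuration revision number to zero.
S4(config)# vtp mode client
! Return the switch to the mode it will use in the production domain.
S4(config)# end
S4# show vtp status | include Revision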



4.3.3 Managing VLANs on a VTP Server

Managing VLANs on a VTP Server

You have learned about VTP and how it can be used to simplify managing VLANs in a VTP-enabled network. Consider the topology in the figure. When a new VLAN, for example, VLAN 10, is added to the network, the network manager adds the VLAN to the VTP server, switch S1 in the figure. As you know, VTP takes care of propagating the VLAN configuration details to the rest of the network. It does not have any effect on which ports are configured in VLAN 10 on switches S1, S2, and S3.

After you have configured the new VLAN on switch S1 and configured the ports on switches S1, S2, and S3 to support the new VLAN, confirm that VTP updated the VLAN database on switches S2 and S3.
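
A short verification sketch, assuming VLAN 10 was just created on S1:

S1(config)# vlan 10
S1(config-vlan)# end
! On the client switches, confirm that the VLAN arrived through VTP.
S2# show vlan brief
S3# show vlan brief
! VLAN 10 should appear in the output even though it was never configured on S2 or S3.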


4.5 Chapter Summary
4.5.1 Summary
VTP is a Cisco-proprietary protocol used to exchange VLAN information across trunk links, reducing VLAN administration and configuration errors. VTP allows you to create a VLAN once within a VTP domain and have that VLAN propagated to all other switches in the VTP domain.

There are three VTP operating modes: server, client, and transparent. VTP client mode switches are more prevalent in large networks, where their use reduces the administration of VLAN information. In small networks, network managers can more easily keep track of network changes, so switches are often left in the default VTP server mode.

VTP pruning limits the unnecessary propagation of VLAN traffic across a LAN. VTP determines which trunk ports forward which VLAN traffic. VTP pruning improves overall network performance by restricting the unnecessary flooding of traffic across trunk links. Pruning only permits VLAN traffic for VLANs that are assigned to some switch port of a switch on the other end of a trunk link. By reducing the total amount of flooded traffic on the network, bandwidth is freed up for other network traffic.

Sunday, October 11, 2009

Benefits of a Hierarchical Network

There are many benefits associated with hierarchical network designs.

Scalability

Hierarchical networks scale very well. The modularity of the design allows you to replicate design elements as the network grows. Because each instance of the module is consistent, expansion is easy to plan and implement. For example, if your design model consists of two distribution layer switches for every 10 access layer switches, you can continue to add access layer switches until you have 10 access layer switches cross-connected to the two distribution layer switches before you need to add additional distribution layer switches to the network topology. Also, as you add more distribution layer switches to accommodate the load from the access layer switches, you can add additional core layer switches to handle the additional load on the core.

Redundancy

As a network grows, availability becomes more important. You can dramatically increase availability through easy redundant implementations with hierarchical networks. Access layer switches are connected to two different distribution layer switches to ensure path redundancy. If one of the distribution layer switches fails, the access layer switch can switch to the other distribution layer switch. Additionally, distribution layer switches are connected to two or more core layer switches to ensure path availability if a core switch fails. The only layer where redundancy is limited is at the access layer. Typically, end node devices, such as PCs, printers, and IP phones, do not have the ability to connect to multiple access layer switches for redundancy. If an access layer switch fails, just the devices connected to that one switch would be affected by the outage. The rest of the network would continue to function unaffected.

Performance

Communication performance is enhanced by avoiding the transmission of data through low-performing, intermediary switches. Data is sent through aggregated switch port links from the access layer to the distribution layer at near wire speed in most cases. The distribution layer then uses its high performance switching capabilities to forward the traffic up to the core, where it is routed to its final destination. Because the core and distribution layers perform their operations at very high speeds, there is less contention for network bandwidth. As a result, properly designed hierarchical networks can achieve near wire speed between all devices.

Security

Security is improved and easier to manage. Access layer switches can be configured with various port security options that provide control over which devices are allowed to connect to the network. You also have the flexibility to use more advanced security policies at the distribution layer. You may apply access control policies that define which communication protocols are deployed on your network and where they are permitted to go. For example, if you want to limit the use of HTTP to a specific user community connected at the access layer, you could apply a policy that blocks HTTP traffic at the distribution layer. Restricting traffic based on higher layer protocols, such as IP and HTTP, requires that your switches are able to process policies at that layer. Some access layer switches support Layer 3 functionality, but it is usually the job of the distribution layer switches to process Layer 3 data, because they can process it much more efficiently.

Manageability

Manageability is relatively simple on a hierarchical network. Each layer of the hierarchical design performs specific functions that are consistent throughout that layer. Therefore, if you need to change the functionality of an access layer switch, you could repeat that change across all access layer switches in the network because they presumably perform the same functions at their layer. Deployment of new switches is also simplified because switch configurations can be copied between devices with very few modifications. Consistency between the switches at each layer allows for rapid recovery and simplified troubleshooting. In some special situations, there could be configuration inconsistencies between devices, so you should ensure that configurations are well documented so that you can compare them before deployment.

Maintainability

Because hierarchical networks are modular in nature and scale very easily, they are easy to maintain. With other network topology designs, manageability becomes increasingly complicated as the network grows. Also, in some network design models, there is a finite limit to how large the network can grow before it becomes too complicated and expensive to maintain. In the hierarchical design model, switch functions are defined at each layer, making the selection of the correct switch easier. Adding switches to one layer does not necessarily mean there will not be a bottleneck or other limitation at another layer. For a full mesh network topology to achieve maximum performance, all switches need to be high-performance switches, because each switch needs to be capable of performing all the functions on the network. In the hierarchical model, switch functions are different at each layer. You can save money by using less expensive access layer switches at the lowest layer, and spend more on the distribution and core layer switches to achieve high performance on the network.
