Section 1.5: Switched Campus Network Design
Cisco promotes a campus network design based on a modular approach. In this approach, each layer of the hierarchical network model is broken down into basic functional modules or blocks. These modules can then be sized appropriately and connected together while allowing for future scalability and expansion, giving a building-block approach to network design. Campus networks based on the modular approach can be divided into basic elements. These are:
• Switch blocks, which are access layer switches connected to the distribution layer devices; and
• Core blocks, which are multiple switch blocks connected together, typically with Catalyst 5500, 6500, or 8500 series switches.
Within these fundamental campus elements, other contributing building blocks can be added to the network. These are:
• Server Farm blocks, which are groups of network servers on a single subnet
• Enterprise Edge blocks, which are centralized services to which the enterprise network provides access, together with their related access and distribution switches
• Network Management blocks, which are sets of network management resources with their accompanying access and distribution switches
• Service Provider Edge blocks, which are multiple connections to one or more ISPs
1.5.1: The Switch Block
The switch block is a combination of layer 2 switches and layer 3 routers. The layer 2 switches connect users in the wiring closet into the access layer and provide 10/100 Mbps dedicated connections. From here, the access layer switches will connect into one or more distribution layer switches, which will be the central connection point for all switches coming from the wiring closets. The distribution layer device is either a switch with an external router or a multi-layer switch. The distribution layer switch will then provide layer 3 routing functions, if needed.
The distribution layer router will prevent broadcast storms that could happen on an access layer switch from propagating throughout the entire internetwork. Thus, the broadcast storm would be isolated to only the access layer switch in which the problem exists.
1.5.1.1: Switch Block Sizing
Switch block sizing at the access layer is based on the quantity of users, or the port density. Distribution layer sizing is based on the quantity of access layer switches that feed into a distribution layer device. When sizing the distribution layer, the following should be considered:
• Traffic types and behaviours
• Quantity of users connected to access layer switches
• Layer 3 switching abilities on the distribution layer
• The size of Spanning Tree domains
• The number and physical boundaries of VLANs and broadcast domains
The design of a switch block should be based primarily on traffic types and behaviours and on the size of workgroups. Because a switch block can be too large or too small, the design should allow for resizing. A switch block is too large when multicast traffic slows the switch block's switches, or when the distribution layer multilayer switches become traffic bottlenecks.
Access switches can have one or more redundant links to distribution layer devices. This enables traffic to be load balanced across the redundant links using redundant gateways.
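To make the sizing factors above concrete, here is a minimal sketch of the arithmetic involved; the port counts, link speeds, and oversubscription figures are hypothetical examples, not Cisco recommendations.

```python
# Rough switch block sizing arithmetic. All figures are hypothetical
# examples used for illustration, not Cisco recommendations.

def access_switches_needed(users: int, ports_per_switch: int = 48) -> int:
    """Access layer switches required for a given user count (port density)."""
    return -(-users // ports_per_switch)  # ceiling division

def uplink_oversubscription(users: int, access_mbps: int = 100,
                            uplinks: int = 2, uplink_mbps: int = 1000) -> float:
    """Ratio of total access bandwidth to total uplink bandwidth on one
    access switch with redundant uplinks to the distribution layer."""
    return (users * access_mbps) / (uplinks * uplink_mbps)

print(access_switches_needed(users=1000))  # 21 forty-eight-port switches
print(uplink_oversubscription(users=48))   # 2.4 (i.e., a 2.4:1 ratio)
```

If the oversubscription ratio climbs past whatever threshold the design allows, that is one signal the switch block is too large and should be split.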
1.5.2: The Core Block
If you have two or more switch blocks, you need a core block, which is responsible for transferring data to and from the switch blocks as quickly as possible. You can build a fast core with a frame, packet, or cell (ATM) network technology. Typically, two or more subnets are configured on the core network for redundancy and load balancing.
Switches can trunk on a certain port or ports. This means that a port on a switch can be a member of more than one VLAN at the same time. However, the distribution layer will handle the routing and trunking for VLANs, and the core is only a pass-through once the routing has been performed. Because of this, core links will not carry multiple subnets per link. A Cisco 6500 or 8500 switch is recommended at the core. Even though one switch might be sufficient to handle the traffic, Cisco recommends two switches for redundancy and load balancing purposes.
1.5.2.1: Collapsed Core
A collapsed core is defined as one switch device performing both core and distribution layer functions. The collapsed core is typically found in smaller campus networks where a separate core layer is not warranted. Although the distribution and core layer functions are performed in the same device, keeping these functions distinct and properly designed remains important. In the collapsed core design, each access layer switch has a redundant link to each distribution/core layer switch, and each access layer switch may support more than one VLAN. The distribution layer routing function is the termination point for all VLANs. In a collapsed core network, Spanning-Tree Protocol (STP) blocks the redundant links to prevent loops. Hot Standby Router Protocol (HSRP) can provide redundancy for the distribution layer routing, maintaining core connectivity if the primary routing process fails.
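As a conceptual illustration only (not the actual HSRP state machine or packet format), the active/standby failover idea can be sketched as follows; the router names and priority values are hypothetical.

```python
# Conceptual sketch of HSRP-style gateway redundancy: the distribution
# routers share one virtual gateway address, and the highest-priority
# router that is still alive holds the active role. This illustrates
# the failover idea only; it is not the real HSRP protocol.

def active_gateway(priorities: dict, failed: set = frozenset()) -> str:
    """Return the router that would hold the active role: the
    highest-priority router that has not failed."""
    live = {name: prio for name, prio in priorities.items()
            if name not in failed}
    return max(live, key=live.get)

group = {"dist-a": 110, "dist-b": 100}  # hypothetical HSRP priorities
print(active_gateway(group))              # dist-a is active
print(active_gateway(group, {"dist-a"}))  # dist-b takes over on failure
```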
1.5.2.2: Dual Core
A dual core connects two or more switch blocks in a redundant fashion. Each connection would be a separate subnet. Redundant links connect the distribution layer portion of each switch block to each of the dual core switches. In the dual core, each distribution switch has two equal-cost paths to the core, providing twice the available bandwidth. The distribution layer routers would have links to each subnet in the routing tables, provided by the layer 3 routing protocols. If a failure on a core switch takes place, convergence time will not be an issue. HSRP can be used to provide quick cutover between the cores.
1.5.2.3: Core Size
The dual core is made up of redundant switches, and is bounded and isolated by Layer 3 devices. Routing protocols determine paths and maintain the operation of the core. You must pay attention to the overall design of the routers and routing protocols in the network. As routing protocols propagate updates throughout the network, network topologies might be undergoing change. The size of the network, i.e., the number of routers, then affects routing protocol performance, as updates are exchanged and network convergence takes place. Large campus networks can have many switch blocks connected into the core block. Layer 2 devices are used in the core with usually only a single VLAN or subnet across the core. Therefore, all route processors connect into a single broadcast domain at the core.
Each route processor must communicate with and keep information about each of its directly connected peers, and most routing protocols have practical limits on the number of peer routers that can be supported. Because there are two equal-cost paths from each distribution switch into the core, each router forms two peer relationships with every other router. Therefore, the actual number of switch blocks that can be supported is half the routing protocol's usual peer limit. In the dual core design, if a routing protocol supports two equal-cost paths, those equal-cost paths must lead to isolated VLANs or subnets. Thus, two equal-cost paths are used in a dual core design with two Layer 2 switches. Likewise, a routing protocol that supports six equal-cost paths requires that the six distribution switch links be connected to exactly six Layer 2 devices in the core. This gives six times the redundancy and six times the available bandwidth into the core.
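The peering arithmetic above can be sketched in a few lines; the peer limit of 30 is a hypothetical example rather than a figure from any specific routing protocol.

```python
# Dual core peering arithmetic (illustrative). Each distribution switch
# has one link per equal-cost path into the core, so each one consumes
# that many peer relationships. The peer limit of 30 is hypothetical.

def max_switch_blocks(peer_limit: int, equal_cost_paths: int = 2) -> int:
    """Switch blocks supportable under a routing protocol's peer limit."""
    return peer_limit // equal_cost_paths

print(max_switch_blocks(peer_limit=30))                      # 15 with 2 paths
print(max_switch_blocks(peer_limit=30, equal_cost_paths=6))  # 5 with 6 paths
```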
1.5.2.4: Core Scalability
As the number of switch blocks increases, the core block must also be capable of scaling without needing to be redesigned. Traditionally, hierarchical network designs have used Layer 2 switches at the access layer, Layer 3 devices at the distribution layer, and Layer 2 switches at the core. This design, called a Layer 2 core, has been very cost effective and has provided high-performance connectivity between switch blocks in the campus. As the network grows, more switch blocks must be added to the network, which in turn requires more distribution switches with redundant paths into the core. The core must then be scaled to support the redundancy and the additional campus traffic load.
Providing redundant paths from the distribution switches into the core block allows the Layer 3 distribution switches to identify several equal-cost paths across the core. If the number of core switches must be increased for scalability, the number of equal-cost paths can become too much for the routing protocols to handle. Because the core block is formed with Layer 2 switches, the Spanning-Tree Protocol (STP) is used to prevent bridging loops. If the core is running STP, then it can compromise the high-performance connectivity between switch blocks. The best design on the core is to have two switches without STP running. You can do this only by having a core without links between the core switches.
1.5.2.5: Layer 3 Core
Layer 3 switching can also be used in the core to fully scale the core block for large campus networks. This approach overcomes the problems of slow convergence, load balancing limitations, and router peering limitations. In a Layer 3 core, the core switches can have direct links to each other; because of the Layer 3 functionality, these direct links do not introduce any bridging loops.
With a Layer 3 core, path determination intelligence occurs in both the distribution and core layers, allowing the number of core devices to be increased for scalability. Redundant paths can also be used to interconnect the core switches without concern for Layer 2 bridging loops, eliminating the need for STP. With only Layer 2 devices at the core layer, STP must be used to prevent network loops wherever there is more than one connection between core devices. STP can take over 50 seconds to converge, so in a large network even a single link failure can cause significant problems. With Layer 3 devices in the core, however, STP is not needed; routing protocols, which converge much faster than STP, are used instead. In addition, routing protocols can load balance across multiple equal-cost links. STP is discussed in more detail in Section 4.2.
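The 50-second figure quoted above follows from the default IEEE 802.1D spanning-tree timers and can be verified with simple arithmetic:

```python
# Default IEEE 802.1D spanning-tree timers, in seconds.
MAX_AGE = 20        # time to age out a superior BPDU that stops arriving
FORWARD_DELAY = 15  # time spent in each of the listening and learning states

# Worst-case convergence after an indirect failure: wait for the max age
# timer to expire, then pass through the listening and learning states
# before the port begins forwarding.
convergence = MAX_AGE + 2 * FORWARD_DELAY
print(convergence)  # 50
```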
Router peering problems are also overcome, because the number of routers connected to individual subnets is reduced. Distribution devices are no longer peers with all other distribution devices; instead, a distribution device peers only with a core switch on each link into the core. This advantage becomes especially important in very large campus networks involving more than 100 switch blocks. However, Layer 3 devices are more expensive than Layer 2 devices, and they need switching latencies comparable to those of their Layer 2 counterparts. Using a Layer 3 core also adds extra routing hops to cross-campus traffic.
1.5.3: Additional Building Blocks
Additional resources can be assembled into building blocks, and can be located and arranged in the same manner as common switch block modules.
1.5.3.1: Server Farm Blocks
Enterprise servers hosting company e-mail, intranet services, mainframe systems, and Enterprise Resource Planning (ERP) applications normally belong to a server farm. These enterprise resources are accessed by most of the connected users. The whole server farm can be structured into a switch block with its own layer of access switches. These access switches are then uplinked to dual distribution switches that connect into the core layer through redundant high-speed links. The servers can also be dual-homed, with each server having two network connections, one to each distribution switch.
1.5.3.2: Enterprise Edge Blocks
Campus networks connect to service providers at the edge of the campus network to gain access to the external resources or services those providers offer. These resources are utilized by the whole network and can be grouped into a single switch block that is connected to the core network. Such resources can include Internet access, WAN access, e-commerce, remote access, and VPNs.
1.5.3.3: Network Management Blocks
Network management resources and policy management applications, such as system logging servers and authentication, authorization, and accounting (AAA) servers, can be grouped into a single network management switch block. These resources monitor application servers, network devices, and user connectivity and activity. The network management switch block has a distribution layer that links into the core switches, and redundant links and redundant switches are usually employed to ensure that these resources are always available.
1.5.3.4: Service Provider Edge Blocks
A service provider has its own hierarchical network design and can itself be viewed as an enterprise or campus network. The campus network contains an edge block and connects from there to each service provider's network edge.