This article was written for Connect magazine and published in issue Connect 04/10; I republish it here with the kind permission of the editors.
Structured cabling and active components for a larger network
This whole topic is broad and highly case-dependent. Everything hinges on our requirements: what parameters the resulting network must meet, and how convenient its management and future expansion should be. It is certainly clear to everyone that a manufacturing plant, a law firm and an IT company (one that develops software or provides training) place different demands on their networks. And, of course, much also depends on scale: the number of connected workstations, servers and other devices, and the geographical area the network covers.
From the above, it is clear that there is no universal recipe for designing a computer network. We must draw on knowledge of the field and on experience. We can lean on general recommendations or on reference designs from the large manufacturers of active components, but I do not treat them as dogma. It is important to adapt everything to the actual needs and the existing environment. One of the decisive parameters is also the price/performance ratio.
In this article, I will share practical experience from the design, construction and operation of a larger network. In each area I will try to mention alternatives beyond those used in our project, though the list will certainly not be exhaustive. The network in question was designed for an IT company with above-average demands on the network environment: over 150 servers, more than 400 workstations and dozens of other network devices. Geographically, it covers only the headquarters, a multi-story building. We will deal with structured cabling and active components.
List of requirements
At the beginning, it is certainly necessary to write down the main requirements for the resulting network. The main ones are probably performance parameters and availability, in other words, the speed of ports (and switching/routing hardware) and redundancy. It is also necessary to specify the requirements for the ports, i.e., the number of sockets per workplace, the location of special sockets, from which the total number of ports and the port density per location (room/floor/building) will be determined.
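The tallying described above can be sketched as a small calculation. A minimal illustration, where all the numbers (40 workstations, 2 sockets each, 4 special sockets) are assumptions for the example, not figures from the project:

```python
# Hypothetical sketch of deriving port counts from per-location requirements.
# All concrete numbers below are illustrative assumptions.

def ports_needed(workstations, sockets_per_workstation, special_sockets=0):
    """Total access-layer ports required for one location (room/floor)."""
    return workstations * sockets_per_workstation + special_sockets

# Example floor: 40 workstations with 2 sockets each, plus 4 special
# sockets (printers, wireless access points, and the like).
floor_total = ports_needed(40, 2, special_sockets=4)
print(floor_total)  # -> 84 ports needed on this floor
```

Summing such per-floor figures gives the total port count and the port density per location that the rest of the design builds on.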
Furthermore, we can settle other preferences right at the start: for example, a preferred brand of active components (either because we have trained administrators or for other reasons), requirements for communication and server racks, the physical topology of the network, powered ports (PoE), and so on. We will address the individual points further in the text.
In certain places, we must divide the design into two parts and evaluate them separately. One area is the connection of end stations and network devices. The second is the data center and the connection of servers (here we will only deal with the LAN network and will not deal with SAN and others).
Network structure
When looking at the network from the perspective of active components, the term "hierarchical design" is often used, which has three layers. The central, most powerful part is called the core layer, below it is the distribution layer and the lowest is the access layer. Such a design has many advantages, mainly its clarity and scalability.
In practice, however, we more often encounter only a two-layer model, because for medium-sized companies in our environment it is fully sufficient. We will therefore use a core that divides the network into VLANs, routes between them, controls traffic (applies ACLs) and connects the switches of the access layer. The second layer is the access layer, which has a high port density, connects clients and enforces security at the port level. This layer also includes wireless access points. Because we have a two-layer model, servers also connect to the access layer, though in the data center it can be built from more powerful hardware.

At the beginning, we need to decide what connection speed we will provide to individual users, i.e., how fast the ports on the access switch will be. Nowadays, we can consider the long-proven and widely used 100 Mbps or the modern and now widely supported speed of 1 Gbps (almost all of today's workstations and servers are equipped with a network card of this speed). Because we are equipping an IT company that is thinking about the future, we unequivocally choose the gigabit speed.
Communication between individual switches is called the backbone. Logically, this communication should have a higher speed than the end ports, so that the uplink port of the access switch does not become a bottleneck. Today, considering the prices, we have little choice and we choose a backbone speed of 10 Gbps. Another option would be to use aggregation of multiple gigabit ports.
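The bottleneck argument can be made concrete with an oversubscription ratio: the aggregate edge bandwidth of a switch divided by its uplink bandwidth. A minimal sketch, assuming a 48-port gigabit access switch (real traffic runs far below line rate, so some oversubscription is normal):

```python
# Oversubscription of an access switch uplink: aggregate edge bandwidth
# divided by uplink bandwidth. Assumes a 48-port gigabit switch.

edge_ports = 48
edge_speed_gbps = 1

# Option 1: a single 10 Gbps uplink.
oversub_10g = edge_ports * edge_speed_gbps / 10
print(oversub_10g)  # -> 4.8, i.e. a 4.8:1 ratio

# Option 2: aggregation of five 1 Gbps ports into one logical uplink.
oversub_5x1g = edge_ports * edge_speed_gbps / 5
print(oversub_5x1g)  # -> 9.6, i.e. a 9.6:1 ratio
```

The 10 Gbps uplink roughly halves the ratio compared with a 5 x 1 Gbps aggregate, which is part of why it is the preferred backbone choice here.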
The general network structure certainly includes connection to the Internet (preferably to two independent providers), how it will be implemented, how it will be connected to the network, etc. But we don't have space for this area here.
Structured cabling
Based on the previous paragraphs, we must choose structured cabling both for the backbone links and for client stations and servers. For clients, we immediately settle on UTP metallic cables (unshielded twisted pair) with standard RJ-45 connectors. What remains is the cabling category. For gigabit speed, the still most widespread option is category 5e, which has the most favorable price. Then there are the newer and better category 6 (and the improved 6a), and even category 7, which, however, requires special connectors.
In our project, we decided to invest in the better category 6, which also supports 10 Gbps Ethernet in the future (over limited distances, up to 55 m). It is currently also the most common choice for new buildings. We use the standard 1000Base-T protocol.
Looking at server connectivity, optical links (we do not consider the SAN network) and 10 Gbps are still extremely expensive and little used. So we will use the same solution as for end stations: 1 Gbps speed and category 6 cabling.
We still have to decide on the interconnection of the individual switches. We chose a speed of 10 Gbps. The 10GBase-T protocol, which carries 10 Gbps over metallic cable, does exist today, but backbones predominantly use optical cables, which are also immune to electromagnetic interference. We will therefore use the 10GBase-SR protocol.
Wiring for workstations
Structured cabling also includes other elements. The obvious ones are patch cables, used to connect end devices or for cross-connects at the patch panel. Then there are telecommunication sockets, mounted on the wall (directly or in trunking) or in the floor (from experience, I recommend wall sockets wherever possible). And finally patch panels (optical and metallic), which sit in the rack and terminate the cables running from the user sockets. Together, all these elements must meet the chosen category 6.
It is also important to decide on the physical topology of the cabling and active components. One option is a single central location to which all wiring from the entire building is routed. The advantage is that all active components sit in one place, but for larger networks this is not feasible. We therefore choose the second option: a technical room on each floor, to which that floor's wiring is routed. This room holds a rack with the patch panels and switches for the floor, and here the administrator connects user sockets to the network. Finally, there is one central location, the communication server room, to which the uplinks from the access switches of the individual floors are led and where the core-layer active components reside.

From a practical point of view, it is also worth thinking about how the components will be arranged in the communication rack. The common approach is to place the patch panels in one half and the switches in the other, but this leads to messy patch-cable routing. If we instead interleave patch panels with switches, we can use short patch cables and keep the rack tidy.
Wiring for servers
A slightly different area is the connection of the data center, where the servers are located. In each server rack, we can place a switch or only a patch panel and concentrate the switches in another location in the communication rack (similar to the workstations).
We chose the option where in the communication server room there are additional racks where the patch panels from the server racks are led and the switches for the servers are placed. The uplinks from these switches are led to the core element, just like the uplinks from the individual technical rooms. This solution will allow us, if necessary, to bring communication to the server that does not pass through the switches (by interconnection through individual patch panels).
Active components
In our project, it was already decided in advance that we will choose active components from Cisco. From our previous decisions, it follows that we need active components on the access layer that have gigabit metallic ports plus a ten-gigabit optical uplink. The central core must be able to handle 10 Gbps optical links. We also calculated how many sockets we will want to place in the building, and therefore how many ports we will need.
A few years ago, when the network was designed, there was only one option for access switches that would meet the given parameters. This was the Catalyst 3750E, which can have two X2 ports that we can populate with a 10 Gbps X2 module. If we decided to compromise on some parameters and choose, for example, the aggregation of 5 gigabit ports into the uplink, we could use the significantly cheaper (and for this purpose common) Catalyst 2960G switches. Nowadays, we already have the new Catalyst 2960S series, which contains 2 SFP+ ports that we can populate with a 10 Gbps SFP+ module.
The network design also included the deployment of IP telephony and a wireless network, so the switches located on the floors are models with powered ports (Power over Ethernet). Switches used to connect servers do not need this feature, so cheaper models without PoE are selected.
From the specified number of sockets, we end up with five to six 48-port switches per floor. We could also consider using twice as many 24-port switches, but we won't go into that here. Six switches would mean routing six optical cables from each floor to the central room and a corresponding number of ports on the central component. But because we chose the Catalyst 3750E series, we can stack these switches, i.e., interconnect them with a special high-speed cable. This simplifies management, the entire stack behaves like a single switch, and we can get by with fewer uplink paths.
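The switch-count and uplink arithmetic above can be sketched as follows. The socket count per floor is an assumption chosen for the example; the point is how stacking changes the number of uplink runs:

```python
import math

# Illustrative sketch: switches needed per floor and uplink runs saved by
# stacking. The socket count is an assumed example figure.

def switches_per_floor(sockets, ports_per_switch=48):
    """Number of access switches needed to serve a floor's sockets."""
    return math.ceil(sockets / ports_per_switch)

sockets_on_floor = 260                 # assumed socket count for one floor
n = switches_per_floor(sockets_on_floor)
print(n)                               # -> 6 switches

# Standalone switches: one uplink run per switch to the central room.
uplinks_standalone = n                 # 6 optical runs
# Stacked: the stack behaves as one switch, so e.g. two runs can suffice
# (one primary plus one redundant path -- an assumed layout).
uplinks_stacked = 2
print(uplinks_standalone, uplinks_stacked)
```

The saving shows up twice: fewer fibres pulled through the building and fewer 10 Gbps ports consumed on the expensive central component.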
We have already determined the access switches and now we have to decide on the central component. Given the number of ports, performance and requirements, the modular Catalyst 6500-E is available. This is a very powerful but also expensive component. We will not discuss the choice of model, supervisor (control component) or the modules used here.
Availability - redundancy
So far, we have not addressed the availability of the entire network, i.e., what will happen if a component or a path fails. We will achieve higher availability through redundancy of individual components. Doubling everything, however, would also mean double the cost, which we certainly don't like. So we have to find a compromise between price and availability.
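The gain from doubling a component can be estimated with a simple availability calculation, assuming the two components fail independently. The per-component availability figure below is an assumed example, not a vendor number:

```python
# Back-of-the-envelope availability gain from redundancy, assuming
# independent failures. The single-component figure is an assumption.

single = 0.999                        # assumed availability of one component
redundant = 1 - (1 - single) ** 2     # outage only if both fail at once

print(f"{single:.3%}")     # -> 99.900%
print(f"{redundant:.5%}")  # -> 99.99990%
```

Even a modest per-component figure improves dramatically when doubled, which is why redundancy is the standard route to higher availability; the open question is only which components are worth the doubled cost.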
Going from top to bottom, the most important element comes first: the core. The best but most expensive solution is to use two Catalyst 6500s and run every link twice, one to each switch. The second option is to exploit the fact that the 6500 series is modular and designed for high availability: we can double all the internal elements (power supplies, supervisors and port modules). Compared with two 6500s, however, this saves only the cost of the second chassis. We decided to trust that the 6500 is a very high-quality device and to build only a cheaper backup for emergencies: not a comparably powerful component, but a mere Catalyst 3750, which stays passive and would take over the role of the central component only if the 6500 failed.
For the backup backbone, we likewise did not choose 10 Gbps optics but 1 Gbps metallic links. In addition, from each floor, where a stack of six switches sits, we routed the primary optics twice and the backup metallic links twice.
At end-user stations, availability is not critical, so a failure at the access layer is handled by manual reconnection. For servers, availability matters more, so the dual-homing method is used: the server connects through two network cards to two different switches. Moreover, the connection does not have to be active/passive; we can use link aggregation (EtherChannel in Cisco terminology) and increase throughput as well.
A very instructive and interesting article, thank you ;-)
<quote>The general network structure certainly includes the connection to the Internet (preferably to two independent providers), how it will be implemented, how it will tie into the network, etc. But we don't have space for this area here.</quote>
A follow-up focused on exactly this area would be very welcome :-)
Thank you, nicely put together.
Like beko, I would very much welcome a write-up of the individual phases of network design in future articles.
A whole chapter could be devoted to security and which policies are appropriate where. Multihoming, or using a Catalyst 4500 instead of several 2960S units.
In short, a great article, I'm looking forward to more ;-)
I'm not sure, but I don't think CAT6 supports 10 Gbps. So better to use CAT6a.
respond to [4]Xdrop:
CAT6: cable length up to 55 m; CAT6a: cable length up to 100 m.