Sunday, December 9, 2007

Local Area Networks

Introduction

Local area networks (LANs) were developed in the 1980s, starting with Ethernet and quickly followed by token ring and others. They enable members of an organization to share databases, applications, files, messages, and resources such as servers, printers, and Internet connections. The promised benefits of LANs are often too compelling to ignore: improved productivity, increased flexibility, and cost savings. These benefits sparked the initial move from mainframe-based data centers to a more distributed model of computing, which continues today. The impetus for this “downsizing” can come from several directions, including:

  • Senior management, who are continuously looking for ways to streamline operations to improve financial performance.

  • End users, who are becoming more technically proficient, resent the gatekeeper function of data center staff, and want immediate access to data they perceive as belonging to them. In the process, they become more productive and, through increased job autonomy, able to make better and faster decisions.

  • IT management, who, in response to budget cutbacks or scarce resources, are looking for ways to do more using smaller, less powerful computers.

From their own perspectives, LANs represent the most feasible solution. With PCs now well entrenched in corporate offices, individuals, work groups, and departments have become acutely aware of the benefits of controlling information resources and of the need for data coordination. In becoming self-sufficient and being able to share resources via LANs, users have become empowered to better control their own destinies within the organization. For instance, they can increase the quality and timeliness of their decision making, execute transactions faster, and become more responsive to internal and external constituencies—all without the need to confront a gatekeeper in the data center.

In many cases, this arrangement has the potential of moving accountability to the lowest common point in the organization, where many end users think it properly belongs. This scenario also has the potential of peeling back layers of bureaucracy that have traditionally stood between users and centralized resources. IT professionals eventually discovered that it was in their best interest to gain control over LANs, enabling them to justify their existence within the organization by using their technical expertise to keep LANs secure and operating at peak performance. Further, there was the need to assist users, who were not technically savvy. Rendering assistance helped companies get the most out of their technology investments.

Ethernet

Ethernet is a type of LAN that uses a contention-based method of access to allow computers to share resources, send files, print documents, and transfer messages. The Ethernet LAN originated as a result of the experimental work done by Xerox Corporation at its Palo Alto Research Center (PARC) in the mid-1970s and quickly became a de facto standard with the backing of Digital Equipment Corp. (DEC) and Intel Corp. Xerox licensed Ethernet to other companies that developed products based on the specification issued by the three companies. Much of the original Ethernet design was incorporated into the 802.3 standard adopted in 1983 by the Institute of Electrical and Electronics Engineers (IEEE).

Ethernet is contention-based, meaning that stations compete with each other for access to the network, a process that is controlled by a statistical arbitration scheme. Each station “listens” to the network to determine if it is idle. Upon sensing that no traffic is currently on the line, the station is free to transmit. If the network is already in use, the station backs off and tries again. If multiple stations sense that the network is idle and transmit at the same time, a “collision” occurs and each station backs off to try again at staggered intervals. This media access control scheme is known as carrier sense multiple access with collision detection (CSMA/CD).
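To make the arbitration concrete, here is a minimal Python sketch of the CSMA/CD transmit loop. It is illustrative only: the medium object and its methods (carrier_sense, transmit, collision_detected, jam, wait_slots) are hypothetical stand-ins for physical-layer hardware, and backoff_slots is sketched under media access management later in this section.

    def csma_cd_send(frame, medium, max_attempts=16):
        """Illustrative CSMA/CD transmit loop; not a real driver."""
        for attempt in range(1, max_attempts + 1):
            # "Listen" to the network and defer while traffic is present.
            while medium.carrier_sense():
                pass
            medium.transmit(frame)
            if not medium.collision_detected():
                return True    # transmitted without collision
            medium.jam()       # let all stations know a collision occurred
            # Back off for a staggered interval, then try again.
            medium.wait_slots(backoff_slots(attempt))
        return False           # repeated collisions; give up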

Frame Format

The IEEE 802.3 standard defines a multi-field frame format, which differs only slightly from that of the original version of Ethernet, known as “pure” Ethernet (see Figure 1.1):

  • Preamble. The frame begins with an 8-byte preamble field. Its first 7 bytes (56 bits) have alternating 1 and 0 values, which are used for synchronization and to mark the start of the frame. The same bit pattern used in the pure Ethernet preamble is used in the IEEE 802.3 preamble, which counts the 1-byte start frame delimiter field as part of these 8 bytes.

  • Start frame delimiter. The IEEE 802.3 standard specifies a start frame delimiter field, which is really a part of the preamble. This is used to indicate the start of a frame.

  • Address fields. The destination address field identifies the station(s) that are to receive the frame. The source address field identifies the station that sent the frame. If addresses are locally assigned, the address field can be either 2 bytes (16 bits) or 6 bytes (48 bits) in length. A destination address can refer to one station, a group of stations, or all stations. The original Ethernet specifies the use of 48-bit addresses, while IEEE 802.3 permits either 16- or 48-bit addresses.

  • Length count. The length of the data field is indicated by the 2-byte count field. This IEEE 802.3-specified field is used to determine the length of the information field when a pad field is included in the frame.

  • Pad field. To detect collisions properly, the frame that is transmitted must contain a certain number of bytes. The IEEE 802.3 standard specifies that if a frame being assembled for transmission does not meet this minimum length, a pad field must be added to bring it up to that length.

  • Type field. Pure Ethernet does not support the length and pad fields that IEEE 802.3 does. Instead, 2 bytes are used for a type field. The value specified in the type field is meaningful only to the higher network layers and was not defined in the original Ethernet specification.

  • Data field. The data field of a frame is passed by the client layer to the data link layer in the form of 8-bit bytes. The minimum frame size is 72 bytes, while the maximum frame size is 1,526 bytes, including the preamble. If the data to be sent yields a frame smaller than 72 bytes, the pad field is used to stuff the frame with extra bytes. Defining a minimum frame size leaves fewer problems to contend with in handling collisions. If the data to be sent would yield a frame larger than 1,526 bytes, it is the responsibility of the higher layers to break it into individual packets in a procedure called “fragmentation.” The maximum frame size reflects practical considerations related to adapter card buffer sizes and the need to limit the length of time the medium is tied up in transmitting a single frame.

  • Frame check sequence. A properly formatted frame ends with a frame check sequence, which provides the means to check for errors. When the sending station assembles a frame, it performs a cyclic redundancy check (CRC) calculation on the bits in the frame. The sending station stores the result of the calculation in the 4-byte frame check sequence field before sending the frame. At the receiving station, an identical CRC calculation is performed and a comparison made with the original value in the frame check sequence field. If the two values do not match, the receiving station assumes that a transmission error has occurred and discards the frame, leaving any retransmission to the higher layers. In pure Ethernet, likewise, there is no provision for error correction; notification that an error has occurred is simply passed to the client layer. (A sketch of frame construction, padding, and CRC calculation follows this list.)
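The padding and error-checking rules above can be tied together in a short Python sketch that assembles a simplified IEEE 802.3-style frame with 48-bit addresses. This is a sketch under stated assumptions, not a conformant implementation: zlib.crc32 happens to use the same CRC-32 polynomial as the frame check sequence, but the wire-level bit ordering of the preamble and FCS is glossed over here.

    import struct
    import zlib

    PREAMBLE = bytes([0xAA] * 7)   # 7 bytes of alternating 1 and 0 bits
    SFD = bytes([0xAB])            # 1-byte start frame delimiter
    MIN_FRAME = 72                 # minimum frame size, including preamble
    MAX_FRAME = 1526               # maximum frame size, including preamble

    def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
        """Assemble a simplified 802.3 frame with 6-byte addresses."""
        body = dst + src + struct.pack("!H", len(payload)) + payload
        # Pad so that preamble + SFD + body + 4-byte FCS reaches the
        # 72-byte minimum needed for reliable collision detection.
        body += bytes(max(0, MIN_FRAME - (8 + len(body) + 4)))
        fcs = struct.pack("<I", zlib.crc32(body))  # frame check sequence
        frame = PREAMBLE + SFD + body + fcs
        if len(frame) > MAX_FRAME:
            raise ValueError("too large; higher layers must fragment")
        return frame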

Media Access Control

Several key processes are involved in transmitting data across the network; among them are data encapsulation/decapsulation and media access management, which are performed by the media access control (MAC) sublayer of the Open Systems Interconnection (OSI) data link layer.

Data Encapsulation/Decapsulation

Data encapsulation is performed at the sending station. This process entails adding information to the beginning and end of the data unit to be transmitted. The data unit is received by the MAC sublayer from the logical link control (LLC) sublayer. The added information is used to perform the following tasks:

  • Synchronize the receiving station with the signal;

  • Indicate the start and end of the frame;

  • Identify the addresses of sending and receiving stations;

  • Detect transmission errors.

The data encapsulation function is responsible for constructing a transmission frame in the proper format. The destination address, source address, type and information fields are passed to the data link layer by the client layer in the form of a packet. Control information necessary for transmission is encapsulated into the offered packet. The CRC value for the frame check sequence field is calculated, and the frame is constructed.

When a frame is received, the data decapsulation function performed at the receiving station is responsible for recognizing the destination address, performing error checking, and then removing the control information that was added by the data encapsulation function at the sending station. If no errors are detected, the frame is passed up to the LLC sublayer.

Specific types of errors are checked in the decapsulation process, including whether the frame is a multiple of 8 bits or exceeds the maximum packet length. The address is also checked to determine whether the frame should be accepted and processed further. If it is, a CRC value is calculated and checked against the value in the frame check sequence field. If the values match, the destination address, source address, type and data fields are passed to the client layer. What is passed up is the packet in its original form.
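As a rough illustration, the following sketch performs the receive-side checks just described against frames produced by the earlier build_frame example; the runt-frame test also anticipates the collision-fragment filtering discussed below. The imports and constants repeat those of the frame-building sketch.

    import struct
    import zlib

    MIN_FRAME, MAX_FRAME = 72, 1526   # as in the frame-building sketch

    def decapsulate(frame: bytes, my_address: bytes):
        """Validate a received frame; return the original packet or None."""
        # In this byte-oriented sketch the "multiple of 8 bits" check is
        # implicit; a real MAC receives a bit stream and must verify it.
        if len(frame) < MIN_FRAME or len(frame) > MAX_FRAME:
            return None                      # runt fragment or oversized frame
        body, fcs = frame[8:-4], frame[-4:]  # strip preamble/SFD and FCS
        dst, src = body[:6], body[6:12]
        if dst != my_address and dst != b"\xff" * 6:
            return None                      # not addressed to this station
        if struct.pack("<I", zlib.crc32(body)) != fcs:
            return None                      # CRC mismatch: transmission error
        (length,) = struct.unpack("!H", body[12:14])
        return dst, src, body[14:14 + length]  # the packet in its original form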

Media Access Management

The method used to control access to the transmission medium is known as media access management. It is responsible for several functions defined by the IEEE 802.3 standard for contention networks, starting with collision handling. There are two collision handling schemes: one for detection and one for avoidance.

  • With detection (i.e., CSMA/CD), collisions occur when two or more frames are offered for transmission at the same time, which triggers the transmission of a sequence of bits called a “jam.” This is the means whereby all stations on the network recognize that a collision has occurred. At that point, all transmissions in progress are terminated. Retransmissions are attempted at calculated intervals. If there are repeated collisions, a process called “backing off” is used, which involves increasing the retransmission wait time following each successive collision; a sketch of this calculation follows the list.

  • With collision avoidance [i.e., carrier sense multiple access with collision avoidance (CSMA/CA)], the line is monitored for the presence or absence of a signal (carrier), as in CSMA/CD. But with collision avoidance, a broadcast is issued to the network notifying other stations that a data transmission is about to occur. While CSMA/CA is effective at avoiding collisions on a network, it has an additional overhead requirement that CSMA/CD does not. This results in CSMA/CA increasing network traffic because it has to broadcast the intent of the station to transmit before any real data is put onto the cable.
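The backoff calculation referenced in the first bullet is, in IEEE 802.3, a truncated binary exponential backoff: after the nth successive collision, a station waits a random number of slot times between 0 and 2^min(n, 10) - 1 before retrying, giving up after 16 attempts. A minimal sketch follows (this is the backoff_slots helper assumed in the earlier CSMA/CD loop):

    import random

    def backoff_slots(attempt: int) -> int:
        """Truncated binary exponential backoff for CSMA/CD."""
        # After the 1st collision wait 0-1 slot times, after the 2nd 0-3,
        # doubling the range each time up to a cap of 1,023 slots.
        return random.randint(0, 2 ** min(attempt, 10) - 1)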

On the receiving side, the management function is responsible for recognizing and filtering out fragments of frames that resulted from a transmission that was interrupted by a collision. Any frame that is less than the minimum size is assumed to be a fragment that was caused by a collision.

Fast Ethernet

100BaseT is the IEEE standard for providing 100-Mbps Ethernet performance and functionality over ubiquitously available UTP wiring. Like 10BaseT Ethernet, this standard specifies a star topology. The need for 100 Mbps came about as a result of the emergence of data-intensive applications and technologies such as multimedia, groupware, and imaging, and the explosive growth of high-performance database software packages on PC platforms. All of these tax today’s client-server environments and demand even greater bandwidth for improved response time.

Compatibility

Also known as Fast Ethernet, 100BaseT uses the same contention-based MAC method—CSMA/CD—that is at the core of IEEE 802.3 Ethernet. The Fast Ethernet MAC specification simply reduces the “bit time”—the time duration of each bit transmitted—by a factor of 10, enabling a 10-fold boost in speed over 10BaseT. Fast Ethernet’s scaled CSMA/CD MAC leaves the remainder of the MAC unchanged. The packet format, packet length, error control, and management information in 100BaseT are all identical to those used in 10BaseT.
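The bit-time relationship is simple arithmetic, as this small Python check shows: at 10 Mbps each bit occupies 100 ns on the wire, and at 100 Mbps only 10 ns.

    # Bit time is the reciprocal of the data rate; a 10-fold rate
    # increase shrinks each bit's duration by a factor of 10.
    for name, rate_mbps in [("10BaseT", 10), ("100BaseT", 100)]:
        bit_time_ns = 1_000 / rate_mbps   # nanoseconds per bit
        print(f"{name}: {bit_time_ns:.0f} ns per bit")
    # 10BaseT: 100 ns per bit
    # 100BaseT: 10 ns per bit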

Since no protocol translation is required, data can pass between 10BaseT and 100BaseT stations via a hub equipped with a 10/100-Mbps bridge module. Both technologies are also full-duplex capable, meaning that data can be sent and received at the same time. This compatibility enables existing LANs to be inexpensively upgraded to the higher speed as demand warrants.

Media Choices

To ease the migration from 10BaseT to 100BaseT, Fast Ethernet can run over Category 3, 4, or 5 UTP cables, while preserving the critical 100-meter (330-foot) segment length between hubs and end stations. The use of fiber allows even more flexibility with regard to distance. For example, the maximum distance from a 100BaseT repeater to a fiber-optic bridge, router, or switch using fiber-optic cable is 225 meters (742 feet). The maximum fiber distance between bridges, routers, or switches is 450 meters (1,485 feet); when the link is configured for full-duplex, this distance extends to 2 km (1.2 miles). By interconnecting repeaters with other internetworking devices, large, well-structured networks can be easily created with 100BaseT. The types of media used to implement 100-Mbps Ethernet are summarized as follows:

  • 100BaseTX: A two-pair system for data grade (EIA 568 Category 5) UTP and shielded twisted-pair (STP) cabling.

  • 100BaseT4: A four-pair system for both voice and data grade (Category 3, 4, or 5) UTP cabling.

  • 100BaseFX: A two-strand, multimode fiber system.

Together, the 100BaseTX and 100BaseT4 media specifications cover all cable types currently in use in 10BaseT networks. Since 100BaseTX, 100BaseT4, and 100BaseFX systems can be mixed and interconnected through a hub, users can retain their existing cabling infrastructure while migrating to Fast Ethernet.

100BaseT also includes a media-independent interface (MII) specification, which is similar in role to the 10-Mbps attachment unit interface (AUI). The MII provides a single interface that can support external transceivers for any of the 100BaseT media specifications.

Unlike other high-speed technologies, Ethernet has been installed for over 20 years in business, government, and educational networks. The migration to 100-Mbps Ethernet is made easier by the compatibility of 10BaseT and 100BaseT technologies, making it unnecessary to alter existing applications for transport at the higher speed. This compatibility allows 10BaseT and 100BaseT segments to be combined in both shared and switched architectures, allowing network administrators to apply the right amount of bandwidth easily, precisely, and cost-effectively. Fast Ethernet is managed with the same tools as 10BaseT networks, and no changes to current applications are required to run them over the higher speed 100BaseT network.

Gigabit Ethernet

Ethernet is a highly scalable LAN technology. Long available in two versions—10-Mbps Ethernet and 100-Mbps Fast Ethernet—the next version standardized by the IEEE offers another order-of-magnitude increase in bandwidth. Offering a raw data rate of 1,000 Mbps or 1 Gbps, the so-called Gigabit Ethernet uses the same frame format and size as previous Ethernet technologies. It also maintains full compatibility with the huge installed base of Ethernet nodes through the use of LAN hubs, switches, and routers.

Gigabit Ethernet supports full-duplex operating modes for switch-to-switch and switch-to-end-station connections and half-duplex operating modes for shared connections using repeaters and the CSMA/CD access method. Figure 1.2 illustrates the functional elements of Gigabit Ethernet.

The initial efforts in the standards process drew heavily on the use of Fibre Channel and other high-speed networking components. Fibre Channel encoding/decoding integrated circuits and optical components were readily available and are specified and optimized for high performance at relatively low costs. The first implementations of Gigabit Ethernet employed Fibre Channel’s high-speed, 780-nm (short wavelength) optical components for signaling over optical fiber and 8B/10B encoding/decoding schemes for serialization and deserialization. Fibre Channel technology operating at 1.063 Gbps was enhanced to run at 1.250 Gbps, thus providing the full 1,000-Mbps data rate for Gigabit Ethernet. Link distances—up to 2 km over single-mode fiber and up to 550 meters over 62.5-micron multimode fiber—were specified as well.
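The relationship between the signaling rate and the usable data rate follows directly from the 8B/10B code, which transmits 10 line bits for every 8 data bits, as this quick check confirms:

    # 8B/10B encodes every 8 data bits as 10 line bits, so the usable
    # data rate is 8/10 of the line (signaling) rate.
    line_rate_gbps = 1.250
    data_rate_gbps = line_rate_gbps * 8 / 10
    print(f"{data_rate_gbps:.3f} Gbps")   # 1.000 Gbps: Gigabit Ethernet's rate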

In mid-1999, the IEEE, with the backing of the Gigabit Ethernet Alliance, formally ratified the standard for Gigabit Ethernet over copper. The IEEE 802.3ab standard defines Gigabit Ethernet operation over distances of up to 100 meters (330 feet) using four pairs of Category 5 balanced copper cabling. The standard adds a Gigabit Ethernet physical layer to the original 802.3 standard, allowing for the higher speed over the existing base of Category 5 UTP wiring. It also allows for auto-negotiation between 100-Mbps and 1,000-Mbps equipment. Table 1.1 summarizes Gigabit Ethernet standards for various media.

Table 1.1: A Summary of Gigabit Ethernet Standards for Various Media (Source: IEEE 802.3z Gigabit Task Force)

  Specification | Transmission Facility | Purpose
  1000BaseLX | Long-wavelength laser transceivers | Support links of up to 550m of multimode fiber or 3,000m of single-mode fiber
  1000BaseSX | Short-wavelength laser transceivers operating on multimode fiber | Support links of up to 300m using 62.5-micron multimode fiber or up to 550m using 50-micron multimode fiber
  1000BaseCX | STP cable spanning no more than 25m | Support links among devices located within a single room or equipment rack
  1000BaseT | UTP cable | Support links of up to 100m using four-pair Category 5 UTP

The initial applications for Gigabit Ethernet will be for campuses or buildings requiring greater bandwidth between routers, switches, hubs and repeaters, and servers. Examples include switch-to-router, switch-to-switch, switch-to-server, and repeater-to-switch links.
